author      Rafael J. Wysocki <rafael.j.wysocki@intel.com>   2013-04-27 20:10:46 -0400
committer   Rafael J. Wysocki <rafael.j.wysocki@intel.com>   2013-04-27 20:10:46 -0400
commit      885f925eef411f549f17bc64dd054a3269cf66cd (patch)
tree        6bac783d573a51e497ad28c19b5a71defac85f39
parent      e4f5a3adc454745fea35f1c312e14cbeba6e0ea4 (diff)
parent      45c009a9a447655aecbdb06c86126f05d0272171 (diff)
Merge branch 'pm-cpufreq'
* pm-cpufreq: (57 commits)
cpufreq: MAINTAINERS: Add co-maintainer
cpufreq: pxa2xx: initialize variables
ARM: S5pv210: compiling issue, ARM_S5PV210_CPUFREQ needs CONFIG_CPU_FREQ_TABLE=y
cpufreq: cpu0: Put cpu parent node after using it
cpufreq: ARM big LITTLE: Adapt to latest cpufreq updates
cpufreq: ARM big LITTLE: put DT nodes after using them
cpufreq: Don't call __cpufreq_governor() for drivers without target()
cpufreq: exynos5440: Protect OPP search calls with RCU lock
cpufreq: dbx500: Round to closest available freq
cpufreq: Call __cpufreq_governor() with correct policy->cpus mask
cpufreq / intel_pstate: Optimize intel_pstate_set_policy
cpufreq: OMAP: instantiate omap-cpufreq as a platform_driver
arm: exynos: Enable OPP library support for exynos5440
cpufreq: exynos: Remove error return even if no soc is found
cpufreq: exynos: Add cpufreq driver for exynos5440
cpufreq: AMD "frequency sensitivity feedback" powersave bias for ondemand governor
cpufreq: ondemand: allow custom powersave_bias_target handler to be registered
cpufreq: convert cpufreq_driver to using RCU
cpufreq: powerpc/platforms/cell: move cpufreq driver to drivers/cpufreq
cpufreq: sparc: move cpufreq driver to drivers/cpufreq
...
Conflicts:
MAINTAINERS (with commit a8e39c3 from pm-cpuidle)
drivers/cpufreq/cpufreq_governor.h (with commit beb0ff3)
122 files changed, 2770 insertions, 1240 deletions
diff --git a/Documentation/cpu-freq/cpu-drivers.txt b/Documentation/cpu-freq/cpu-drivers.txt index 72f70b16d299..a3585eac83b6 100644 --- a/Documentation/cpu-freq/cpu-drivers.txt +++ b/Documentation/cpu-freq/cpu-drivers.txt | |||
@@ -108,8 +108,9 @@ policy->governor must contain the "default policy" for | |||
108 | cpufreq_driver.target is called with | 108 | cpufreq_driver.target is called with |
109 | these values. | 109 | these values. |
110 | 110 | ||
111 | For setting some of these values, the frequency table helpers might be | 111 | For setting some of these values (cpuinfo.min[max]_freq, policy->min[max]), the |
112 | helpful. See the section 2 for more information on them. | 112 | frequency table helpers might be helpful. See the section 2 for more information |
113 | on them. | ||
113 | 114 | ||
114 | SMP systems normally have same clock source for a group of cpus. For these the | 115 | SMP systems normally have same clock source for a group of cpus. For these the |
115 | .init() would be called only once for the first online cpu. Here the .init() | 116 | .init() would be called only once for the first online cpu. Here the .init() |
@@ -184,10 +185,10 @@ the reference implementation in drivers/cpufreq/longrun.c | |||
184 | As most cpufreq processors only allow for being set to a few specific | 185 | As most cpufreq processors only allow for being set to a few specific |
185 | frequencies, a "frequency table" with some functions might assist in | 186 | frequencies, a "frequency table" with some functions might assist in |
186 | some work of the processor driver. Such a "frequency table" consists | 187 | some work of the processor driver. Such a "frequency table" consists |
187 | of an array of struct cpufreq_freq_table entries, with any value in | 188 | of an array of struct cpufreq_frequency_table entries, with any value in |
188 | "index" you want to use, and the corresponding frequency in | 189 | "index" you want to use, and the corresponding frequency in |
189 | "frequency". At the end of the table, you need to add a | 190 | "frequency". At the end of the table, you need to add a |
190 | cpufreq_freq_table entry with frequency set to CPUFREQ_TABLE_END. And | 191 | cpufreq_frequency_table entry with frequency set to CPUFREQ_TABLE_END. And |
191 | if you want to skip one entry in the table, set the frequency to | 192 | if you want to skip one entry in the table, set the frequency to |
192 | CPUFREQ_ENTRY_INVALID. The entries don't need to be in ascending | 193 | CPUFREQ_ENTRY_INVALID. The entries don't need to be in ascending |
193 | order. | 194 | order. |
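Editor's illustration, not part of the patch above: the hunk corrects the struct name to cpufreq_frequency_table. A minimal sketch of such a table as a driver of this era might declare it, with one deliberately skipped entry; the names my_freq_table and my_cpufreq_init_tables are made up for illustration.

#include <linux/cpufreq.h>

/* Hypothetical driver table: "index" is driver-defined, "frequency" is in kHz,
 * and entries need not be in ascending order. */
static struct cpufreq_frequency_table my_freq_table[] = {
	{ .index = 0, .frequency = 792000 },
	{ .index = 1, .frequency = CPUFREQ_ENTRY_INVALID },	/* skipped entry */
	{ .index = 2, .frequency = 396000 },
	{ .index = 3, .frequency = 198000 },
	{ .index = 4, .frequency = CPUFREQ_TABLE_END },		/* terminator */
};

/* Called from the driver's .init(): the table helper walks the array and
 * fills in policy->cpuinfo.min_freq/max_freq as well as policy->min/max,
 * which is exactly the use the amended paragraph above refers to. */
static int my_cpufreq_init_tables(struct cpufreq_policy *policy)
{
	return cpufreq_frequency_table_cpuinfo(policy, my_freq_table);
}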
diff --git a/Documentation/cpu-freq/governors.txt b/Documentation/cpu-freq/governors.txt index c7a2eb8450c2..66f9cc310686 100644 --- a/Documentation/cpu-freq/governors.txt +++ b/Documentation/cpu-freq/governors.txt | |||
@@ -167,6 +167,27 @@ of load evaluation and helping the CPU stay at its top speed when truly | |||
167 | busy, rather than shifting back and forth in speed. This tunable has no | 167 | busy, rather than shifting back and forth in speed. This tunable has no |
168 | effect on behavior at lower speeds/lower CPU loads. | 168 | effect on behavior at lower speeds/lower CPU loads. |
169 | 169 | ||
170 | powersave_bias: this parameter takes a value between 0 to 1000. It | ||
171 | defines the percentage (times 10) value of the target frequency that | ||
172 | will be shaved off of the target. For example, when set to 100 -- 10%, | ||
173 | when ondemand governor would have targeted 1000 MHz, it will target | ||
174 | 1000 MHz - (10% of 1000 MHz) = 900 MHz instead. This is set to 0 | ||
175 | (disabled) by default. | ||
176 | When AMD frequency sensitivity powersave bias driver -- | ||
177 | drivers/cpufreq/amd_freq_sensitivity.c is loaded, this parameter | ||
178 | defines the workload frequency sensitivity threshold in which a lower | ||
179 | frequency is chosen instead of ondemand governor's original target. | ||
180 | The frequency sensitivity is a hardware reported (on AMD Family 16h | ||
181 | Processors and above) value between 0 to 100% that tells software how | ||
182 | the performance of the workload running on a CPU will change when | ||
183 | frequency changes. A workload with sensitivity of 0% (memory/IO-bound) | ||
184 | will not perform any better on higher core frequency, whereas a | ||
185 | workload with sensitivity of 100% (CPU-bound) will perform better | ||
186 | higher the frequency. When the driver is loaded, this is set to 400 | ||
187 | by default -- for CPUs running workloads with sensitivity value below | ||
188 | 40%, a lower frequency is chosen. Unloading the driver or writing 0 | ||
189 | will disable this feature. | ||
190 | |||
170 | 191 | ||
171 | 2.5 Conservative | 192 | 2.5 Conservative |
172 | ---------------- | 193 | ---------------- |
@@ -191,6 +212,12 @@ governor but for the opposite direction. For example when set to its | |||
191 | default value of '20' it means that if the CPU usage needs to be below | 212 | default value of '20' it means that if the CPU usage needs to be below |
192 | 20% between samples to have the frequency decreased. | 213 | 20% between samples to have the frequency decreased. |
193 | 214 | ||
215 | sampling_down_factor: similar functionality as in "ondemand" governor. | ||
216 | But in "conservative", it controls the rate at which the kernel makes | ||
217 | a decision on when to decrease the frequency while running in any | ||
218 | speed. Load for frequency increase is still evaluated every | ||
219 | sampling rate. | ||
220 | |||
194 | 3. The Governor Interface in the CPUfreq Core | 221 | 3. The Governor Interface in the CPUfreq Core |
195 | ============================================= | 222 | ============================================= |
196 | 223 | ||
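Editor's illustration, not part of the patch above: the new powersave_bias text describes a tunable exposed through the ondemand governor's sysfs directory. The sketch below assumes the usual global path /sys/devices/system/cpu/cpufreq/ondemand/; if the governor is instantiated per policy, the file lives under the policy directory instead.

/* Minimal user-space sketch: bias the ondemand target down by 10% by writing
 * 100 (out of 1000) to powersave_bias. The sysfs path is an assumption. */
#include <stdio.h>

int main(void)
{
	const char *path =
		"/sys/devices/system/cpu/cpufreq/ondemand/powersave_bias";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	fprintf(f, "100\n");	/* 100/1000 -> shave 10% off the chosen target */
	fclose(f);
	return 0;
}

Writing 0 (or unloading the AMD frequency sensitivity driver) restores the default behaviour, as the added documentation notes.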
diff --git a/Documentation/devicetree/bindings/cpufreq/arm_big_little_dt.txt b/Documentation/devicetree/bindings/cpufreq/arm_big_little_dt.txt new file mode 100644 index 000000000000..0715695e94a9 --- /dev/null +++ b/Documentation/devicetree/bindings/cpufreq/arm_big_little_dt.txt | |||
@@ -0,0 +1,65 @@ | |||
1 | Generic ARM big LITTLE cpufreq driver's DT glue | ||
2 | ----------------------------------------------- | ||
3 | |||
4 | This is DT specific glue layer for generic cpufreq driver for big LITTLE | ||
5 | systems. | ||
6 | |||
7 | Both required and optional properties listed below must be defined | ||
8 | under node /cpus/cpu@x. Where x is the first cpu inside a cluster. | ||
9 | |||
10 | FIXME: Cpus should boot in the order specified in DT and all cpus for a cluster | ||
11 | must be present contiguously. Generic DT driver will check only node 'x' for | ||
12 | cpu:x. | ||
13 | |||
14 | Required properties: | ||
15 | - operating-points: Refer to Documentation/devicetree/bindings/power/opp.txt | ||
16 | for details | ||
17 | |||
18 | Optional properties: | ||
19 | - clock-latency: Specify the possible maximum transition latency for clock, | ||
20 | in unit of nanoseconds. | ||
21 | |||
22 | Examples: | ||
23 | |||
24 | cpus { | ||
25 | #address-cells = <1>; | ||
26 | #size-cells = <0>; | ||
27 | |||
28 | cpu@0 { | ||
29 | compatible = "arm,cortex-a15"; | ||
30 | reg = <0>; | ||
31 | next-level-cache = <&L2>; | ||
32 | operating-points = < | ||
33 | /* kHz uV */ | ||
34 | 792000 1100000 | ||
35 | 396000 950000 | ||
36 | 198000 850000 | ||
37 | >; | ||
38 | clock-latency = <61036>; /* two CLK32 periods */ | ||
39 | }; | ||
40 | |||
41 | cpu@1 { | ||
42 | compatible = "arm,cortex-a15"; | ||
43 | reg = <1>; | ||
44 | next-level-cache = <&L2>; | ||
45 | }; | ||
46 | |||
47 | cpu@100 { | ||
48 | compatible = "arm,cortex-a7"; | ||
49 | reg = <100>; | ||
50 | next-level-cache = <&L2>; | ||
51 | operating-points = < | ||
52 | /* kHz uV */ | ||
53 | 792000 950000 | ||
54 | 396000 750000 | ||
55 | 198000 450000 | ||
56 | >; | ||
57 | clock-latency = <61036>; /* two CLK32 periods */ | ||
58 | }; | ||
59 | |||
60 | cpu@101 { | ||
61 | compatible = "arm,cortex-a7"; | ||
62 | reg = <101>; | ||
63 | next-level-cache = <&L2>; | ||
64 | }; | ||
65 | }; | ||
diff --git a/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt b/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt index 4416ccc33472..051f764bedb8 100644 --- a/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt +++ b/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt | |||
@@ -32,7 +32,7 @@ cpus { | |||
32 | 396000 950000 | 32 | 396000 950000 |
33 | 198000 850000 | 33 | 198000 850000 |
34 | >; | 34 | >; |
35 | transition-latency = <61036>; /* two CLK32 periods */ | 35 | clock-latency = <61036>; /* two CLK32 periods */ |
36 | }; | 36 | }; |
37 | 37 | ||
38 | cpu@1 { | 38 | cpu@1 { |
diff --git a/Documentation/devicetree/bindings/cpufreq/cpufreq-exynos5440.txt b/Documentation/devicetree/bindings/cpufreq/cpufreq-exynos5440.txt new file mode 100644 index 000000000000..caff1a57436f --- /dev/null +++ b/Documentation/devicetree/bindings/cpufreq/cpufreq-exynos5440.txt | |||
@@ -0,0 +1,28 @@ | |||
1 | |||
2 | Exynos5440 cpufreq driver | ||
3 | ------------------- | ||
4 | |||
5 | Exynos5440 SoC cpufreq driver for CPU frequency scaling. | ||
6 | |||
7 | Required properties: | ||
8 | - interrupts: Interrupt to know the completion of cpu frequency change. | ||
9 | - operating-points: Table of frequencies and voltage CPU could be transitioned into, | ||
10 | in the decreasing order. Frequency should be in KHz units and voltage | ||
11 | should be in microvolts. | ||
12 | |||
13 | Optional properties: | ||
14 | - clock-latency: Clock monitor latency in microsecond. | ||
15 | |||
16 | All the required listed above must be defined under node cpufreq. | ||
17 | |||
18 | Example: | ||
19 | -------- | ||
20 | cpufreq@160000 { | ||
21 | compatible = "samsung,exynos5440-cpufreq"; | ||
22 | reg = <0x160000 0x1000>; | ||
23 | interrupts = <0 57 0>; | ||
24 | operating-points = < | ||
25 | 1000000 975000 | ||
26 | 800000 925000>; | ||
27 | clock-latency = <100000>; | ||
28 | }; | ||
diff --git a/MAINTAINERS b/MAINTAINERS index 1bef08d407f3..8faa946d0c10 100644 --- a/MAINTAINERS +++ b/MAINTAINERS | |||
@@ -2200,12 +2200,25 @@ F: drivers/net/ethernet/ti/cpmac.c | |||
2200 | 2200 | ||
2201 | CPU FREQUENCY DRIVERS | 2201 | CPU FREQUENCY DRIVERS |
2202 | M: Rafael J. Wysocki <rjw@sisk.pl> | 2202 | M: Rafael J. Wysocki <rjw@sisk.pl> |
2203 | M: Viresh Kumar <viresh.kumar@linaro.org> | ||
2203 | L: cpufreq@vger.kernel.org | 2204 | L: cpufreq@vger.kernel.org |
2204 | L: linux-pm@vger.kernel.org | 2205 | L: linux-pm@vger.kernel.org |
2205 | S: Maintained | 2206 | S: Maintained |
2207 | T: git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git | ||
2206 | F: drivers/cpufreq/ | 2208 | F: drivers/cpufreq/ |
2207 | F: include/linux/cpufreq.h | 2209 | F: include/linux/cpufreq.h |
2208 | 2210 | ||
2211 | CPU FREQUENCY DRIVERS - ARM BIG LITTLE | ||
2212 | M: Viresh Kumar <viresh.kumar@linaro.org> | ||
2213 | M: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com> | ||
2214 | L: cpufreq@vger.kernel.org | ||
2215 | L: linux-pm@vger.kernel.org | ||
2216 | W: http://www.arm.com/products/processors/technologies/biglittleprocessing.php | ||
2217 | S: Maintained | ||
2218 | F: drivers/cpufreq/arm_big_little.h | ||
2219 | F: drivers/cpufreq/arm_big_little.c | ||
2220 | F: drivers/cpufreq/arm_big_little_dt.c | ||
2221 | |||
2209 | CPUIDLE DRIVERS | 2222 | CPUIDLE DRIVERS |
2210 | M: Rafael J. Wysocki <rjw@sisk.pl> | 2223 | M: Rafael J. Wysocki <rjw@sisk.pl> |
2211 | M: Daniel Lezcano <daniel.lezcano@linaro.org> | 2224 | M: Daniel Lezcano <daniel.lezcano@linaro.org> |
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 1cacda426a0e..cdb83225f3ea 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig | |||
@@ -2160,7 +2160,6 @@ endmenu | |||
2160 | menu "CPU Power Management" | 2160 | menu "CPU Power Management" |
2161 | 2161 | ||
2162 | if ARCH_HAS_CPUFREQ | 2162 | if ARCH_HAS_CPUFREQ |
2163 | |||
2164 | source "drivers/cpufreq/Kconfig" | 2163 | source "drivers/cpufreq/Kconfig" |
2165 | 2164 | ||
2166 | config CPU_FREQ_IMX | 2165 | config CPU_FREQ_IMX |
@@ -2170,30 +2169,6 @@ config CPU_FREQ_IMX | |||
2170 | help | 2169 | help |
2171 | This enables the CPUfreq driver for i.MX CPUs. | 2170 | This enables the CPUfreq driver for i.MX CPUs. |
2172 | 2171 | ||
2173 | config CPU_FREQ_SA1100 | ||
2174 | bool | ||
2175 | |||
2176 | config CPU_FREQ_SA1110 | ||
2177 | bool | ||
2178 | |||
2179 | config CPU_FREQ_INTEGRATOR | ||
2180 | tristate "CPUfreq driver for ARM Integrator CPUs" | ||
2181 | depends on ARCH_INTEGRATOR && CPU_FREQ | ||
2182 | default y | ||
2183 | help | ||
2184 | This enables the CPUfreq driver for ARM Integrator CPUs. | ||
2185 | |||
2186 | For details, take a look at <file:Documentation/cpu-freq>. | ||
2187 | |||
2188 | If in doubt, say Y. | ||
2189 | |||
2190 | config CPU_FREQ_PXA | ||
2191 | bool | ||
2192 | depends on CPU_FREQ && ARCH_PXA && PXA25x | ||
2193 | default y | ||
2194 | select CPU_FREQ_DEFAULT_GOV_USERSPACE | ||
2195 | select CPU_FREQ_TABLE | ||
2196 | |||
2197 | config CPU_FREQ_S3C | 2172 | config CPU_FREQ_S3C |
2198 | bool | 2173 | bool |
2199 | help | 2174 | help |
diff --git a/arch/arm/mach-davinci/Makefile b/arch/arm/mach-davinci/Makefile index fb5c1aa98a63..dd1ffccc75e9 100644 --- a/arch/arm/mach-davinci/Makefile +++ b/arch/arm/mach-davinci/Makefile | |||
@@ -37,7 +37,6 @@ obj-$(CONFIG_MACH_MITYOMAPL138) += board-mityomapl138.o | |||
37 | obj-$(CONFIG_MACH_OMAPL138_HAWKBOARD) += board-omapl138-hawk.o | 37 | obj-$(CONFIG_MACH_OMAPL138_HAWKBOARD) += board-omapl138-hawk.o |
38 | 38 | ||
39 | # Power Management | 39 | # Power Management |
40 | obj-$(CONFIG_CPU_FREQ) += cpufreq.o | ||
41 | obj-$(CONFIG_CPU_IDLE) += cpuidle.o | 40 | obj-$(CONFIG_CPU_IDLE) += cpuidle.o |
42 | obj-$(CONFIG_SUSPEND) += pm.o sleep.o | 41 | obj-$(CONFIG_SUSPEND) += pm.o sleep.o |
43 | obj-$(CONFIG_HAVE_CLK) += pm_domain.o | 42 | obj-$(CONFIG_HAVE_CLK) += pm_domain.o |
diff --git a/arch/arm/mach-exynos/Kconfig b/arch/arm/mach-exynos/Kconfig index 70f94c87479d..d5dde0727339 100644 --- a/arch/arm/mach-exynos/Kconfig +++ b/arch/arm/mach-exynos/Kconfig | |||
@@ -72,10 +72,12 @@ config SOC_EXYNOS5440 | |||
72 | bool "SAMSUNG EXYNOS5440" | 72 | bool "SAMSUNG EXYNOS5440" |
73 | default y | 73 | default y |
74 | depends on ARCH_EXYNOS5 | 74 | depends on ARCH_EXYNOS5 |
75 | select ARCH_HAS_OPP | ||
75 | select ARM_ARCH_TIMER | 76 | select ARM_ARCH_TIMER |
76 | select AUTO_ZRELADDR | 77 | select AUTO_ZRELADDR |
77 | select PINCTRL | 78 | select PINCTRL |
78 | select PINCTRL_EXYNOS5440 | 79 | select PINCTRL_EXYNOS5440 |
80 | select PM_OPP | ||
79 | help | 81 | help |
80 | Enable EXYNOS5440 SoC support | 82 | Enable EXYNOS5440 SoC support |
81 | 83 | ||
diff --git a/arch/arm/mach-imx/cpufreq.c b/arch/arm/mach-imx/cpufreq.c index d8c75c3c925d..387dc4cceca2 100644 --- a/arch/arm/mach-imx/cpufreq.c +++ b/arch/arm/mach-imx/cpufreq.c | |||
@@ -87,13 +87,12 @@ static int mxc_set_target(struct cpufreq_policy *policy, | |||
87 | 87 | ||
88 | freqs.old = clk_get_rate(cpu_clk) / 1000; | 88 | freqs.old = clk_get_rate(cpu_clk) / 1000; |
89 | freqs.new = freq_Hz / 1000; | 89 | freqs.new = freq_Hz / 1000; |
90 | freqs.cpu = 0; | ||
91 | freqs.flags = 0; | 90 | freqs.flags = 0; |
92 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 91 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
93 | 92 | ||
94 | ret = set_cpu_freq(freq_Hz); | 93 | ret = set_cpu_freq(freq_Hz); |
95 | 94 | ||
96 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 95 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
97 | 96 | ||
98 | return ret; | 97 | return ret; |
99 | } | 98 | } |
@@ -145,14 +144,11 @@ static int mxc_cpufreq_init(struct cpufreq_policy *policy) | |||
145 | imx_freq_table[i].frequency = CPUFREQ_TABLE_END; | 144 | imx_freq_table[i].frequency = CPUFREQ_TABLE_END; |
146 | 145 | ||
147 | policy->cur = clk_get_rate(cpu_clk) / 1000; | 146 | policy->cur = clk_get_rate(cpu_clk) / 1000; |
148 | policy->min = policy->cpuinfo.min_freq = cpu_freq_khz_min; | ||
149 | policy->max = policy->cpuinfo.max_freq = cpu_freq_khz_max; | ||
150 | 147 | ||
151 | /* Manual states, that PLL stabilizes in two CLK32 periods */ | 148 | /* Manual states, that PLL stabilizes in two CLK32 periods */ |
152 | policy->cpuinfo.transition_latency = 2 * NANOSECOND / CLK32_FREQ; | 149 | policy->cpuinfo.transition_latency = 2 * NANOSECOND / CLK32_FREQ; |
153 | 150 | ||
154 | ret = cpufreq_frequency_table_cpuinfo(policy, imx_freq_table); | 151 | ret = cpufreq_frequency_table_cpuinfo(policy, imx_freq_table); |
155 | |||
156 | if (ret < 0) { | 152 | if (ret < 0) { |
157 | printk(KERN_ERR "%s: failed to register i.MXC CPUfreq with error code %d\n", | 153 | printk(KERN_ERR "%s: failed to register i.MXC CPUfreq with error code %d\n", |
158 | __func__, ret); | 154 | __func__, ret); |
diff --git a/arch/arm/mach-integrator/Makefile b/arch/arm/mach-integrator/Makefile index 5521d18bf19a..d14d6b76f4c2 100644 --- a/arch/arm/mach-integrator/Makefile +++ b/arch/arm/mach-integrator/Makefile | |||
@@ -9,5 +9,4 @@ obj-$(CONFIG_ARCH_INTEGRATOR_AP) += integrator_ap.o | |||
9 | obj-$(CONFIG_ARCH_INTEGRATOR_CP) += integrator_cp.o | 9 | obj-$(CONFIG_ARCH_INTEGRATOR_CP) += integrator_cp.o |
10 | 10 | ||
11 | obj-$(CONFIG_PCI) += pci_v3.o pci.o | 11 | obj-$(CONFIG_PCI) += pci_v3.o pci.o |
12 | obj-$(CONFIG_CPU_FREQ_INTEGRATOR) += cpu.o | ||
13 | obj-$(CONFIG_INTEGRATOR_IMPD1) += impd1.o | 12 | obj-$(CONFIG_INTEGRATOR_IMPD1) += impd1.o |
diff --git a/arch/arm/mach-omap2/pm.c b/arch/arm/mach-omap2/pm.c index 673a4c1d1d76..8d15f9ae19ff 100644 --- a/arch/arm/mach-omap2/pm.c +++ b/arch/arm/mach-omap2/pm.c | |||
@@ -265,6 +265,12 @@ static void __init omap4_init_voltages(void) | |||
265 | omap2_set_init_voltage("iva", "dpll_iva_m5x2_ck", "iva"); | 265 | omap2_set_init_voltage("iva", "dpll_iva_m5x2_ck", "iva"); |
266 | } | 266 | } |
267 | 267 | ||
268 | static inline void omap_init_cpufreq(void) | ||
269 | { | ||
270 | struct platform_device_info devinfo = { .name = "omap-cpufreq", }; | ||
271 | platform_device_register_full(&devinfo); | ||
272 | } | ||
273 | |||
268 | static int __init omap2_common_pm_init(void) | 274 | static int __init omap2_common_pm_init(void) |
269 | { | 275 | { |
270 | if (!of_have_populated_dt()) | 276 | if (!of_have_populated_dt()) |
@@ -294,6 +300,9 @@ int __init omap2_common_pm_late_init(void) | |||
294 | 300 | ||
295 | /* Smartreflex device init */ | 301 | /* Smartreflex device init */ |
296 | omap_devinit_smartreflex(); | 302 | omap_devinit_smartreflex(); |
303 | |||
304 | /* cpufreq dummy device instantiation */ | ||
305 | omap_init_cpufreq(); | ||
297 | } | 306 | } |
298 | 307 | ||
299 | #ifdef CONFIG_SUSPEND | 308 | #ifdef CONFIG_SUSPEND |
diff --git a/arch/arm/mach-pxa/Makefile b/arch/arm/mach-pxa/Makefile index 12c500558387..648867a8caa8 100644 --- a/arch/arm/mach-pxa/Makefile +++ b/arch/arm/mach-pxa/Makefile | |||
@@ -7,12 +7,6 @@ obj-y += clock.o devices.o generic.o irq.o \ | |||
7 | time.o reset.o | 7 | time.o reset.o |
8 | obj-$(CONFIG_PM) += pm.o sleep.o standby.o | 8 | obj-$(CONFIG_PM) += pm.o sleep.o standby.o |
9 | 9 | ||
10 | ifeq ($(CONFIG_CPU_FREQ),y) | ||
11 | obj-$(CONFIG_PXA25x) += cpufreq-pxa2xx.o | ||
12 | obj-$(CONFIG_PXA27x) += cpufreq-pxa2xx.o | ||
13 | obj-$(CONFIG_PXA3xx) += cpufreq-pxa3xx.o | ||
14 | endif | ||
15 | |||
16 | # Generic drivers that other drivers may depend upon | 10 | # Generic drivers that other drivers may depend upon |
17 | 11 | ||
18 | # SoC-specific code | 12 | # SoC-specific code |
diff --git a/arch/arm/mach-pxa/include/mach/generic.h b/arch/arm/mach-pxa/include/mach/generic.h new file mode 100644 index 000000000000..665542e0c9e2 --- /dev/null +++ b/arch/arm/mach-pxa/include/mach/generic.h | |||
@@ -0,0 +1 @@ | |||
#include "../../generic.h" | |||
diff --git a/arch/arm/mach-s3c24xx/cpufreq.c b/arch/arm/mach-s3c24xx/cpufreq.c index 5f181e733eee..3c0e78ede0da 100644 --- a/arch/arm/mach-s3c24xx/cpufreq.c +++ b/arch/arm/mach-s3c24xx/cpufreq.c | |||
@@ -204,7 +204,6 @@ static int s3c_cpufreq_settarget(struct cpufreq_policy *policy, | |||
204 | freqs.old = cpu_cur.freq; | 204 | freqs.old = cpu_cur.freq; |
205 | freqs.new = cpu_new.freq; | 205 | freqs.new = cpu_new.freq; |
206 | 206 | ||
207 | freqs.freqs.cpu = 0; | ||
208 | freqs.freqs.old = cpu_cur.freq.armclk / 1000; | 207 | freqs.freqs.old = cpu_cur.freq.armclk / 1000; |
209 | freqs.freqs.new = cpu_new.freq.armclk / 1000; | 208 | freqs.freqs.new = cpu_new.freq.armclk / 1000; |
210 | 209 | ||
@@ -218,9 +217,7 @@ static int s3c_cpufreq_settarget(struct cpufreq_policy *policy, | |||
218 | s3c_cpufreq_updateclk(clk_pclk, cpu_new.freq.pclk); | 217 | s3c_cpufreq_updateclk(clk_pclk, cpu_new.freq.pclk); |
219 | 218 | ||
220 | /* start the frequency change */ | 219 | /* start the frequency change */ |
221 | 220 | cpufreq_notify_transition(policy, &freqs.freqs, CPUFREQ_PRECHANGE); | |
222 | if (policy) | ||
223 | cpufreq_notify_transition(&freqs.freqs, CPUFREQ_PRECHANGE); | ||
224 | 221 | ||
225 | /* If hclk is staying the same, then we do not need to | 222 | /* If hclk is staying the same, then we do not need to |
226 | * re-write the IO or the refresh timings whilst we are changing | 223 | * re-write the IO or the refresh timings whilst we are changing |
@@ -264,8 +261,7 @@ static int s3c_cpufreq_settarget(struct cpufreq_policy *policy, | |||
264 | local_irq_restore(flags); | 261 | local_irq_restore(flags); |
265 | 262 | ||
266 | /* notify everyone we've done this */ | 263 | /* notify everyone we've done this */ |
267 | if (policy) | 264 | cpufreq_notify_transition(policy, &freqs.freqs, CPUFREQ_POSTCHANGE); |
268 | cpufreq_notify_transition(&freqs.freqs, CPUFREQ_POSTCHANGE); | ||
269 | 265 | ||
270 | s3c_freq_dbg("%s: finished\n", __func__); | 266 | s3c_freq_dbg("%s: finished\n", __func__); |
271 | return 0; | 267 | return 0; |
diff --git a/arch/arm/mach-sa1100/Kconfig b/arch/arm/mach-sa1100/Kconfig index ca14dbdcfb22..04f9784ff0ed 100644 --- a/arch/arm/mach-sa1100/Kconfig +++ b/arch/arm/mach-sa1100/Kconfig | |||
@@ -4,7 +4,7 @@ menu "SA11x0 Implementations" | |||
4 | 4 | ||
5 | config SA1100_ASSABET | 5 | config SA1100_ASSABET |
6 | bool "Assabet" | 6 | bool "Assabet" |
7 | select CPU_FREQ_SA1110 | 7 | select ARM_SA1110_CPUFREQ |
8 | help | 8 | help |
9 | Say Y here if you are using the Intel(R) StrongARM(R) SA-1110 | 9 | Say Y here if you are using the Intel(R) StrongARM(R) SA-1110 |
10 | Microprocessor Development Board (also known as the Assabet). | 10 | Microprocessor Development Board (also known as the Assabet). |
@@ -20,7 +20,7 @@ config ASSABET_NEPONSET | |||
20 | 20 | ||
21 | config SA1100_CERF | 21 | config SA1100_CERF |
22 | bool "CerfBoard" | 22 | bool "CerfBoard" |
23 | select CPU_FREQ_SA1110 | 23 | select ARM_SA1110_CPUFREQ |
24 | help | 24 | help |
25 | The Intrinsyc CerfBoard is based on the StrongARM 1110 (Discontinued). | 25 | The Intrinsyc CerfBoard is based on the StrongARM 1110 (Discontinued). |
26 | More information is available at: | 26 | More information is available at: |
@@ -47,7 +47,7 @@ endchoice | |||
47 | 47 | ||
48 | config SA1100_COLLIE | 48 | config SA1100_COLLIE |
49 | bool "Sharp Zaurus SL5500" | 49 | bool "Sharp Zaurus SL5500" |
50 | # FIXME: select CPU_FREQ_SA11x0 | 50 | # FIXME: select ARM_SA11x0_CPUFREQ |
51 | select SHARP_LOCOMO | 51 | select SHARP_LOCOMO |
52 | select SHARP_PARAM | 52 | select SHARP_PARAM |
53 | select SHARP_SCOOP | 53 | select SHARP_SCOOP |
@@ -56,7 +56,7 @@ config SA1100_COLLIE | |||
56 | 56 | ||
57 | config SA1100_H3100 | 57 | config SA1100_H3100 |
58 | bool "Compaq iPAQ H3100" | 58 | bool "Compaq iPAQ H3100" |
59 | select CPU_FREQ_SA1110 | 59 | select ARM_SA1110_CPUFREQ |
60 | select HTC_EGPIO | 60 | select HTC_EGPIO |
61 | help | 61 | help |
62 | Say Y here if you intend to run this kernel on the Compaq iPAQ | 62 | Say Y here if you intend to run this kernel on the Compaq iPAQ |
@@ -67,7 +67,7 @@ config SA1100_H3100 | |||
67 | 67 | ||
68 | config SA1100_H3600 | 68 | config SA1100_H3600 |
69 | bool "Compaq iPAQ H3600/H3700" | 69 | bool "Compaq iPAQ H3600/H3700" |
70 | select CPU_FREQ_SA1110 | 70 | select ARM_SA1110_CPUFREQ |
71 | select HTC_EGPIO | 71 | select HTC_EGPIO |
72 | help | 72 | help |
73 | Say Y here if you intend to run this kernel on the Compaq iPAQ | 73 | Say Y here if you intend to run this kernel on the Compaq iPAQ |
@@ -78,7 +78,7 @@ config SA1100_H3600 | |||
78 | 78 | ||
79 | config SA1100_BADGE4 | 79 | config SA1100_BADGE4 |
80 | bool "HP Labs BadgePAD 4" | 80 | bool "HP Labs BadgePAD 4" |
81 | select CPU_FREQ_SA1100 | 81 | select ARM_SA1100_CPUFREQ |
82 | select SA1111 | 82 | select SA1111 |
83 | help | 83 | help |
84 | Say Y here if you want to build a kernel for the HP Laboratories | 84 | Say Y here if you want to build a kernel for the HP Laboratories |
@@ -86,7 +86,7 @@ config SA1100_BADGE4 | |||
86 | 86 | ||
87 | config SA1100_JORNADA720 | 87 | config SA1100_JORNADA720 |
88 | bool "HP Jornada 720" | 88 | bool "HP Jornada 720" |
89 | # FIXME: select CPU_FREQ_SA11x0 | 89 | # FIXME: select ARM_SA11x0_CPUFREQ |
90 | select SA1111 | 90 | select SA1111 |
91 | help | 91 | help |
92 | Say Y here if you want to build a kernel for the HP Jornada 720 | 92 | Say Y here if you want to build a kernel for the HP Jornada 720 |
@@ -105,14 +105,14 @@ config SA1100_JORNADA720_SSP | |||
105 | 105 | ||
106 | config SA1100_HACKKIT | 106 | config SA1100_HACKKIT |
107 | bool "HackKit Core CPU Board" | 107 | bool "HackKit Core CPU Board" |
108 | select CPU_FREQ_SA1100 | 108 | select ARM_SA1100_CPUFREQ |
109 | help | 109 | help |
110 | Say Y here to support the HackKit Core CPU Board | 110 | Say Y here to support the HackKit Core CPU Board |
111 | <http://hackkit.eletztrick.de>; | 111 | <http://hackkit.eletztrick.de>; |
112 | 112 | ||
113 | config SA1100_LART | 113 | config SA1100_LART |
114 | bool "LART" | 114 | bool "LART" |
115 | select CPU_FREQ_SA1100 | 115 | select ARM_SA1100_CPUFREQ |
116 | help | 116 | help |
117 | Say Y here if you are using the Linux Advanced Radio Terminal | 117 | Say Y here if you are using the Linux Advanced Radio Terminal |
118 | (also known as the LART). See <http://www.lartmaker.nl/> for | 118 | (also known as the LART). See <http://www.lartmaker.nl/> for |
@@ -120,7 +120,7 @@ config SA1100_LART | |||
120 | 120 | ||
121 | config SA1100_NANOENGINE | 121 | config SA1100_NANOENGINE |
122 | bool "nanoEngine" | 122 | bool "nanoEngine" |
123 | select CPU_FREQ_SA1110 | 123 | select ARM_SA1110_CPUFREQ |
124 | select PCI | 124 | select PCI |
125 | select PCI_NANOENGINE | 125 | select PCI_NANOENGINE |
126 | help | 126 | help |
@@ -130,7 +130,7 @@ config SA1100_NANOENGINE | |||
130 | 130 | ||
131 | config SA1100_PLEB | 131 | config SA1100_PLEB |
132 | bool "PLEB" | 132 | bool "PLEB" |
133 | select CPU_FREQ_SA1100 | 133 | select ARM_SA1100_CPUFREQ |
134 | help | 134 | help |
135 | Say Y here if you are using version 1 of the Portable Linux | 135 | Say Y here if you are using version 1 of the Portable Linux |
136 | Embedded Board (also known as PLEB). | 136 | Embedded Board (also known as PLEB). |
@@ -139,7 +139,7 @@ config SA1100_PLEB | |||
139 | 139 | ||
140 | config SA1100_SHANNON | 140 | config SA1100_SHANNON |
141 | bool "Shannon" | 141 | bool "Shannon" |
142 | select CPU_FREQ_SA1100 | 142 | select ARM_SA1100_CPUFREQ |
143 | help | 143 | help |
144 | The Shannon (also known as a Tuxscreen, and also as a IS2630) was a | 144 | The Shannon (also known as a Tuxscreen, and also as a IS2630) was a |
145 | limited edition webphone produced by Philips. The Shannon is a SA1100 | 145 | limited edition webphone produced by Philips. The Shannon is a SA1100 |
@@ -148,7 +148,7 @@ config SA1100_SHANNON | |||
148 | 148 | ||
149 | config SA1100_SIMPAD | 149 | config SA1100_SIMPAD |
150 | bool "Simpad" | 150 | bool "Simpad" |
151 | select CPU_FREQ_SA1110 | 151 | select ARM_SA1110_CPUFREQ |
152 | help | 152 | help |
153 | The SIEMENS webpad SIMpad is based on the StrongARM 1110. There | 153 | The SIEMENS webpad SIMpad is based on the StrongARM 1110. There |
154 | are two different versions CL4 and SL4. CL4 has 32MB RAM and 16MB | 154 | are two different versions CL4 and SL4. CL4 has 32MB RAM and 16MB |
diff --git a/arch/arm/mach-sa1100/Makefile b/arch/arm/mach-sa1100/Makefile index 1aed9e70465d..2732eef48966 100644 --- a/arch/arm/mach-sa1100/Makefile +++ b/arch/arm/mach-sa1100/Makefile | |||
@@ -8,9 +8,6 @@ obj-m := | |||
8 | obj-n := | 8 | obj-n := |
9 | obj- := | 9 | obj- := |
10 | 10 | ||
11 | obj-$(CONFIG_CPU_FREQ_SA1100) += cpu-sa1100.o | ||
12 | obj-$(CONFIG_CPU_FREQ_SA1110) += cpu-sa1110.o | ||
13 | |||
14 | # Specific board support | 11 | # Specific board support |
15 | obj-$(CONFIG_SA1100_ASSABET) += assabet.o | 12 | obj-$(CONFIG_SA1100_ASSABET) += assabet.o |
16 | obj-$(CONFIG_ASSABET_NEPONSET) += neponset.o | 13 | obj-$(CONFIG_ASSABET_NEPONSET) += neponset.o |
diff --git a/arch/arm/mach-sa1100/include/mach/generic.h b/arch/arm/mach-sa1100/include/mach/generic.h new file mode 100644 index 000000000000..665542e0c9e2 --- /dev/null +++ b/arch/arm/mach-sa1100/include/mach/generic.h | |||
@@ -0,0 +1 @@ | |||
#include "../../generic.h" | |||
diff --git a/arch/arm/mach-tegra/Makefile b/arch/arm/mach-tegra/Makefile index f6b46ae2b7f8..09b578f9eb84 100644 --- a/arch/arm/mach-tegra/Makefile +++ b/arch/arm/mach-tegra/Makefile | |||
@@ -24,7 +24,6 @@ obj-$(CONFIG_ARCH_TEGRA_3x_SOC) += cpuidle-tegra30.o | |||
24 | endif | 24 | endif |
25 | obj-$(CONFIG_SMP) += platsmp.o headsmp.o | 25 | obj-$(CONFIG_SMP) += platsmp.o headsmp.o |
26 | obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o | 26 | obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o |
27 | obj-$(CONFIG_CPU_FREQ) += cpu-tegra.o | ||
28 | obj-$(CONFIG_TEGRA_PCI) += pcie.o | 27 | obj-$(CONFIG_TEGRA_PCI) += pcie.o |
29 | 28 | ||
30 | obj-$(CONFIG_ARCH_TEGRA_2x_SOC) += board-dt-tegra20.o | 29 | obj-$(CONFIG_ARCH_TEGRA_2x_SOC) += board-dt-tegra20.o |
diff --git a/arch/avr32/Kconfig b/arch/avr32/Kconfig index c1a868d398bd..22c40308360b 100644 --- a/arch/avr32/Kconfig +++ b/arch/avr32/Kconfig | |||
@@ -250,20 +250,7 @@ config ARCH_SUSPEND_POSSIBLE | |||
250 | def_bool y | 250 | def_bool y |
251 | 251 | ||
252 | menu "CPU Frequency scaling" | 252 | menu "CPU Frequency scaling" |
253 | |||
254 | source "drivers/cpufreq/Kconfig" | 253 | source "drivers/cpufreq/Kconfig" |
255 | |||
256 | config CPU_FREQ_AT32AP | ||
257 | bool "CPU frequency driver for AT32AP" | ||
258 | depends on CPU_FREQ && PLATFORM_AT32AP | ||
259 | default n | ||
260 | help | ||
261 | This enables the CPU frequency driver for AT32AP processors. | ||
262 | |||
263 | For details, take a look in <file:Documentation/cpu-freq>. | ||
264 | |||
265 | If in doubt, say N. | ||
266 | |||
267 | endmenu | 254 | endmenu |
268 | 255 | ||
269 | endmenu | 256 | endmenu |
diff --git a/arch/avr32/configs/atngw100_defconfig b/arch/avr32/configs/atngw100_defconfig index f4025db184ff..d5aff36ade92 100644 --- a/arch/avr32/configs/atngw100_defconfig +++ b/arch/avr32/configs/atngw100_defconfig | |||
@@ -26,7 +26,7 @@ CONFIG_CPU_FREQ=y | |||
26 | # CONFIG_CPU_FREQ_STAT is not set | 26 | # CONFIG_CPU_FREQ_STAT is not set |
27 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y | 27 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y |
28 | CONFIG_CPU_FREQ_GOV_USERSPACE=y | 28 | CONFIG_CPU_FREQ_GOV_USERSPACE=y |
29 | CONFIG_CPU_FREQ_AT32AP=y | 29 | CONFIG_AVR32_AT32AP_CPUFREQ=y |
30 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y | 30 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y |
31 | CONFIG_NET=y | 31 | CONFIG_NET=y |
32 | CONFIG_PACKET=y | 32 | CONFIG_PACKET=y |
diff --git a/arch/avr32/configs/atngw100_evklcd100_defconfig b/arch/avr32/configs/atngw100_evklcd100_defconfig index c76a49b9e9d0..4abcf435d599 100644 --- a/arch/avr32/configs/atngw100_evklcd100_defconfig +++ b/arch/avr32/configs/atngw100_evklcd100_defconfig | |||
@@ -28,7 +28,7 @@ CONFIG_CPU_FREQ=y | |||
28 | # CONFIG_CPU_FREQ_STAT is not set | 28 | # CONFIG_CPU_FREQ_STAT is not set |
29 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y | 29 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y |
30 | CONFIG_CPU_FREQ_GOV_USERSPACE=y | 30 | CONFIG_CPU_FREQ_GOV_USERSPACE=y |
31 | CONFIG_CPU_FREQ_AT32AP=y | 31 | CONFIG_AVR32_AT32AP_CPUFREQ=y |
32 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y | 32 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y |
33 | CONFIG_NET=y | 33 | CONFIG_NET=y |
34 | CONFIG_PACKET=y | 34 | CONFIG_PACKET=y |
diff --git a/arch/avr32/configs/atngw100_evklcd101_defconfig b/arch/avr32/configs/atngw100_evklcd101_defconfig index 2d8ab089a64e..18f3fa0470ff 100644 --- a/arch/avr32/configs/atngw100_evklcd101_defconfig +++ b/arch/avr32/configs/atngw100_evklcd101_defconfig | |||
@@ -27,7 +27,7 @@ CONFIG_CPU_FREQ=y | |||
27 | # CONFIG_CPU_FREQ_STAT is not set | 27 | # CONFIG_CPU_FREQ_STAT is not set |
28 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y | 28 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y |
29 | CONFIG_CPU_FREQ_GOV_USERSPACE=y | 29 | CONFIG_CPU_FREQ_GOV_USERSPACE=y |
30 | CONFIG_CPU_FREQ_AT32AP=y | 30 | CONFIG_AVR32_AT32AP_CPUFREQ=y |
31 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y | 31 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y |
32 | CONFIG_NET=y | 32 | CONFIG_NET=y |
33 | CONFIG_PACKET=y | 33 | CONFIG_PACKET=y |
diff --git a/arch/avr32/configs/atngw100_mrmt_defconfig b/arch/avr32/configs/atngw100_mrmt_defconfig index b189e0cab04b..06e389cfcd12 100644 --- a/arch/avr32/configs/atngw100_mrmt_defconfig +++ b/arch/avr32/configs/atngw100_mrmt_defconfig | |||
@@ -23,7 +23,7 @@ CONFIG_CPU_FREQ=y | |||
23 | CONFIG_CPU_FREQ_GOV_POWERSAVE=y | 23 | CONFIG_CPU_FREQ_GOV_POWERSAVE=y |
24 | CONFIG_CPU_FREQ_GOV_USERSPACE=y | 24 | CONFIG_CPU_FREQ_GOV_USERSPACE=y |
25 | CONFIG_CPU_FREQ_GOV_ONDEMAND=y | 25 | CONFIG_CPU_FREQ_GOV_ONDEMAND=y |
26 | CONFIG_CPU_FREQ_AT32AP=y | 26 | CONFIG_AVR32_AT32AP_CPUFREQ=y |
27 | CONFIG_NET=y | 27 | CONFIG_NET=y |
28 | CONFIG_PACKET=y | 28 | CONFIG_PACKET=y |
29 | CONFIG_UNIX=y | 29 | CONFIG_UNIX=y |
diff --git a/arch/avr32/configs/atngw100mkii_defconfig b/arch/avr32/configs/atngw100mkii_defconfig index 2e4de42a53c4..2518a1368d7c 100644 --- a/arch/avr32/configs/atngw100mkii_defconfig +++ b/arch/avr32/configs/atngw100mkii_defconfig | |||
@@ -26,7 +26,7 @@ CONFIG_CPU_FREQ=y | |||
26 | # CONFIG_CPU_FREQ_STAT is not set | 26 | # CONFIG_CPU_FREQ_STAT is not set |
27 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y | 27 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y |
28 | CONFIG_CPU_FREQ_GOV_USERSPACE=y | 28 | CONFIG_CPU_FREQ_GOV_USERSPACE=y |
29 | CONFIG_CPU_FREQ_AT32AP=y | 29 | CONFIG_AVR32_AT32AP_CPUFREQ=y |
30 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y | 30 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y |
31 | CONFIG_NET=y | 31 | CONFIG_NET=y |
32 | CONFIG_PACKET=y | 32 | CONFIG_PACKET=y |
diff --git a/arch/avr32/configs/atngw100mkii_evklcd100_defconfig b/arch/avr32/configs/atngw100mkii_evklcd100_defconfig index fad3cd22dfd3..245ef6bd0fa6 100644 --- a/arch/avr32/configs/atngw100mkii_evklcd100_defconfig +++ b/arch/avr32/configs/atngw100mkii_evklcd100_defconfig | |||
@@ -29,7 +29,7 @@ CONFIG_CPU_FREQ=y | |||
29 | # CONFIG_CPU_FREQ_STAT is not set | 29 | # CONFIG_CPU_FREQ_STAT is not set |
30 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y | 30 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y |
31 | CONFIG_CPU_FREQ_GOV_USERSPACE=y | 31 | CONFIG_CPU_FREQ_GOV_USERSPACE=y |
32 | CONFIG_CPU_FREQ_AT32AP=y | 32 | CONFIG_AVR32_AT32AP_CPUFREQ=y |
33 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y | 33 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y |
34 | CONFIG_NET=y | 34 | CONFIG_NET=y |
35 | CONFIG_PACKET=y | 35 | CONFIG_PACKET=y |
diff --git a/arch/avr32/configs/atngw100mkii_evklcd101_defconfig b/arch/avr32/configs/atngw100mkii_evklcd101_defconfig index 29986230aaa5..fa6cbac6e418 100644 --- a/arch/avr32/configs/atngw100mkii_evklcd101_defconfig +++ b/arch/avr32/configs/atngw100mkii_evklcd101_defconfig | |||
@@ -28,7 +28,7 @@ CONFIG_CPU_FREQ=y | |||
28 | # CONFIG_CPU_FREQ_STAT is not set | 28 | # CONFIG_CPU_FREQ_STAT is not set |
29 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y | 29 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y |
30 | CONFIG_CPU_FREQ_GOV_USERSPACE=y | 30 | CONFIG_CPU_FREQ_GOV_USERSPACE=y |
31 | CONFIG_CPU_FREQ_AT32AP=y | 31 | CONFIG_AVR32_AT32AP_CPUFREQ=y |
32 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y | 32 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y |
33 | CONFIG_NET=y | 33 | CONFIG_NET=y |
34 | CONFIG_PACKET=y | 34 | CONFIG_PACKET=y |
diff --git a/arch/avr32/configs/atstk1002_defconfig b/arch/avr32/configs/atstk1002_defconfig index a582465e1cef..bbd5131021a5 100644 --- a/arch/avr32/configs/atstk1002_defconfig +++ b/arch/avr32/configs/atstk1002_defconfig | |||
@@ -25,7 +25,7 @@ CONFIG_CPU_FREQ=y | |||
25 | # CONFIG_CPU_FREQ_STAT is not set | 25 | # CONFIG_CPU_FREQ_STAT is not set |
26 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y | 26 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y |
27 | CONFIG_CPU_FREQ_GOV_USERSPACE=y | 27 | CONFIG_CPU_FREQ_GOV_USERSPACE=y |
28 | CONFIG_CPU_FREQ_AT32AP=y | 28 | CONFIG_AVR32_AT32AP_CPUFREQ=y |
29 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y | 29 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y |
30 | CONFIG_NET=y | 30 | CONFIG_NET=y |
31 | CONFIG_PACKET=y | 31 | CONFIG_PACKET=y |
diff --git a/arch/avr32/configs/atstk1003_defconfig b/arch/avr32/configs/atstk1003_defconfig index 57a79df2ce5d..c1cd726f9012 100644 --- a/arch/avr32/configs/atstk1003_defconfig +++ b/arch/avr32/configs/atstk1003_defconfig | |||
@@ -26,7 +26,7 @@ CONFIG_CPU_FREQ=y | |||
26 | # CONFIG_CPU_FREQ_STAT is not set | 26 | # CONFIG_CPU_FREQ_STAT is not set |
27 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y | 27 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y |
28 | CONFIG_CPU_FREQ_GOV_USERSPACE=y | 28 | CONFIG_CPU_FREQ_GOV_USERSPACE=y |
29 | CONFIG_CPU_FREQ_AT32AP=y | 29 | CONFIG_AVR32_AT32AP_CPUFREQ=y |
30 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y | 30 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y |
31 | CONFIG_NET=y | 31 | CONFIG_NET=y |
32 | CONFIG_PACKET=y | 32 | CONFIG_PACKET=y |
diff --git a/arch/avr32/configs/atstk1004_defconfig b/arch/avr32/configs/atstk1004_defconfig index 1a49bd8c6340..754ae56b2767 100644 --- a/arch/avr32/configs/atstk1004_defconfig +++ b/arch/avr32/configs/atstk1004_defconfig | |||
@@ -26,7 +26,7 @@ CONFIG_CPU_FREQ=y | |||
26 | # CONFIG_CPU_FREQ_STAT is not set | 26 | # CONFIG_CPU_FREQ_STAT is not set |
27 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y | 27 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y |
28 | CONFIG_CPU_FREQ_GOV_USERSPACE=y | 28 | CONFIG_CPU_FREQ_GOV_USERSPACE=y |
29 | CONFIG_CPU_FREQ_AT32AP=y | 29 | CONFIG_AVR32_AT32AP_CPUFREQ=y |
30 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y | 30 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y |
31 | CONFIG_NET=y | 31 | CONFIG_NET=y |
32 | CONFIG_PACKET=y | 32 | CONFIG_PACKET=y |
diff --git a/arch/avr32/configs/atstk1006_defconfig b/arch/avr32/configs/atstk1006_defconfig index 206a1b67f763..58589d8cc0ac 100644 --- a/arch/avr32/configs/atstk1006_defconfig +++ b/arch/avr32/configs/atstk1006_defconfig | |||
@@ -26,7 +26,7 @@ CONFIG_CPU_FREQ=y | |||
26 | # CONFIG_CPU_FREQ_STAT is not set | 26 | # CONFIG_CPU_FREQ_STAT is not set |
27 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y | 27 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y |
28 | CONFIG_CPU_FREQ_GOV_USERSPACE=y | 28 | CONFIG_CPU_FREQ_GOV_USERSPACE=y |
29 | CONFIG_CPU_FREQ_AT32AP=y | 29 | CONFIG_AVR32_AT32AP_CPUFREQ=y |
30 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y | 30 | CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y |
31 | CONFIG_NET=y | 31 | CONFIG_NET=y |
32 | CONFIG_PACKET=y | 32 | CONFIG_PACKET=y |
diff --git a/arch/avr32/configs/favr-32_defconfig b/arch/avr32/configs/favr-32_defconfig index 0421498d666b..57788a42ff83 100644 --- a/arch/avr32/configs/favr-32_defconfig +++ b/arch/avr32/configs/favr-32_defconfig | |||
@@ -27,7 +27,7 @@ CONFIG_CPU_FREQ=y | |||
27 | # CONFIG_CPU_FREQ_STAT is not set | 27 | # CONFIG_CPU_FREQ_STAT is not set |
28 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y | 28 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y |
29 | CONFIG_CPU_FREQ_GOV_USERSPACE=y | 29 | CONFIG_CPU_FREQ_GOV_USERSPACE=y |
30 | CONFIG_CPU_FREQ_AT32AP=y | 30 | CONFIG_AVR32_AT32AP_CPUFREQ=y |
31 | CONFIG_NET=y | 31 | CONFIG_NET=y |
32 | CONFIG_PACKET=y | 32 | CONFIG_PACKET=y |
33 | CONFIG_UNIX=y | 33 | CONFIG_UNIX=y |
diff --git a/arch/avr32/configs/hammerhead_defconfig b/arch/avr32/configs/hammerhead_defconfig index 82f24eb251bd..ba7c31e269cb 100644 --- a/arch/avr32/configs/hammerhead_defconfig +++ b/arch/avr32/configs/hammerhead_defconfig | |||
@@ -31,7 +31,7 @@ CONFIG_CPU_FREQ=y | |||
31 | # CONFIG_CPU_FREQ_STAT is not set | 31 | # CONFIG_CPU_FREQ_STAT is not set |
32 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y | 32 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y |
33 | CONFIG_CPU_FREQ_GOV_USERSPACE=y | 33 | CONFIG_CPU_FREQ_GOV_USERSPACE=y |
34 | CONFIG_CPU_FREQ_AT32AP=y | 34 | CONFIG_AVR32_AT32AP_CPUFREQ=y |
35 | CONFIG_NET=y | 35 | CONFIG_NET=y |
36 | CONFIG_PACKET=y | 36 | CONFIG_PACKET=y |
37 | CONFIG_UNIX=y | 37 | CONFIG_UNIX=y |
diff --git a/arch/avr32/configs/mimc200_defconfig b/arch/avr32/configs/mimc200_defconfig index 1bee51f22154..0a8bfdc420e0 100644 --- a/arch/avr32/configs/mimc200_defconfig +++ b/arch/avr32/configs/mimc200_defconfig | |||
@@ -24,7 +24,7 @@ CONFIG_CPU_FREQ=y | |||
24 | # CONFIG_CPU_FREQ_STAT is not set | 24 | # CONFIG_CPU_FREQ_STAT is not set |
25 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y | 25 | CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y |
26 | CONFIG_CPU_FREQ_GOV_USERSPACE=y | 26 | CONFIG_CPU_FREQ_GOV_USERSPACE=y |
27 | CONFIG_CPU_FREQ_AT32AP=y | 27 | CONFIG_AVR32_AT32AP_CPUFREQ=y |
28 | CONFIG_NET=y | 28 | CONFIG_NET=y |
29 | CONFIG_PACKET=y | 29 | CONFIG_PACKET=y |
30 | CONFIG_UNIX=y | 30 | CONFIG_UNIX=y |
diff --git a/arch/avr32/mach-at32ap/Makefile b/arch/avr32/mach-at32ap/Makefile index 514c9a9b009a..fc09ec4bc725 100644 --- a/arch/avr32/mach-at32ap/Makefile +++ b/arch/avr32/mach-at32ap/Makefile | |||
@@ -1,7 +1,6 @@ | |||
1 | obj-y += pdc.o clock.o intc.o extint.o pio.o hsmc.o | 1 | obj-y += pdc.o clock.o intc.o extint.o pio.o hsmc.o |
2 | obj-y += hmatrix.o | 2 | obj-y += hmatrix.o |
3 | obj-$(CONFIG_CPU_AT32AP700X) += at32ap700x.o pm-at32ap700x.o | 3 | obj-$(CONFIG_CPU_AT32AP700X) += at32ap700x.o pm-at32ap700x.o |
4 | obj-$(CONFIG_CPU_FREQ_AT32AP) += cpufreq.o | ||
5 | obj-$(CONFIG_PM) += pm.o | 4 | obj-$(CONFIG_PM) += pm.o |
6 | 5 | ||
7 | ifeq ($(CONFIG_PM_DEBUG),y) | 6 | ifeq ($(CONFIG_PM_DEBUG),y) |
diff --git a/arch/blackfin/mach-common/Makefile b/arch/blackfin/mach-common/Makefile index 75f0ba29ebb9..675466d490d4 100644 --- a/arch/blackfin/mach-common/Makefile +++ b/arch/blackfin/mach-common/Makefile | |||
@@ -10,7 +10,6 @@ obj-$(CONFIG_PM) += pm.o | |||
10 | ifneq ($(CONFIG_BF60x),y) | 10 | ifneq ($(CONFIG_BF60x),y) |
11 | obj-$(CONFIG_PM) += dpmc_modes.o | 11 | obj-$(CONFIG_PM) += dpmc_modes.o |
12 | endif | 12 | endif |
13 | obj-$(CONFIG_CPU_FREQ) += cpufreq.o | ||
14 | obj-$(CONFIG_CPU_VOLTAGE) += dpmc.o | 13 | obj-$(CONFIG_CPU_VOLTAGE) += dpmc.o |
15 | obj-$(CONFIG_SMP) += smp.o | 14 | obj-$(CONFIG_SMP) += smp.o |
16 | obj-$(CONFIG_BFIN_KERNEL_CLOCK) += clocks-init.o | 15 | obj-$(CONFIG_BFIN_KERNEL_CLOCK) += clocks-init.o |
diff --git a/arch/cris/arch-v32/mach-a3/Makefile b/arch/cris/arch-v32/mach-a3/Makefile index d366e0891988..18a227196a41 100644 --- a/arch/cris/arch-v32/mach-a3/Makefile +++ b/arch/cris/arch-v32/mach-a3/Makefile | |||
@@ -3,7 +3,6 @@ | |||
3 | # | 3 | # |
4 | 4 | ||
5 | obj-y := dma.o pinmux.o io.o arbiter.o | 5 | obj-y := dma.o pinmux.o io.o arbiter.o |
6 | obj-$(CONFIG_CPU_FREQ) += cpufreq.o | ||
7 | 6 | ||
8 | clean: | 7 | clean: |
9 | 8 | ||
diff --git a/arch/cris/arch-v32/mach-fs/Makefile b/arch/cris/arch-v32/mach-fs/Makefile index d366e0891988..18a227196a41 100644 --- a/arch/cris/arch-v32/mach-fs/Makefile +++ b/arch/cris/arch-v32/mach-fs/Makefile | |||
@@ -3,7 +3,6 @@ | |||
3 | # | 3 | # |
4 | 4 | ||
5 | obj-y := dma.o pinmux.o io.o arbiter.o | 5 | obj-y := dma.o pinmux.o io.o arbiter.o |
6 | obj-$(CONFIG_CPU_FREQ) += cpufreq.o | ||
7 | 6 | ||
8 | clean: | 7 | clean: |
9 | 8 | ||
diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig index 9a02f71c6b1f..152b5f29d048 100644 --- a/arch/ia64/Kconfig +++ b/arch/ia64/Kconfig | |||
@@ -591,9 +591,9 @@ source "kernel/power/Kconfig" | |||
591 | source "drivers/acpi/Kconfig" | 591 | source "drivers/acpi/Kconfig" |
592 | 592 | ||
593 | if PM | 593 | if PM |
594 | 594 | menu "CPU Frequency scaling" | |
595 | source "arch/ia64/kernel/cpufreq/Kconfig" | 595 | source "drivers/cpufreq/Kconfig" |
596 | 596 | endmenu | |
597 | endif | 597 | endif |
598 | 598 | ||
599 | endmenu | 599 | endmenu |
diff --git a/arch/ia64/kernel/Makefile b/arch/ia64/kernel/Makefile index d959c84904be..20678a9ed11a 100644 --- a/arch/ia64/kernel/Makefile +++ b/arch/ia64/kernel/Makefile | |||
@@ -23,7 +23,6 @@ obj-$(CONFIG_SMP) += smp.o smpboot.o | |||
23 | obj-$(CONFIG_NUMA) += numa.o | 23 | obj-$(CONFIG_NUMA) += numa.o |
24 | obj-$(CONFIG_PERFMON) += perfmon_default_smpl.o | 24 | obj-$(CONFIG_PERFMON) += perfmon_default_smpl.o |
25 | obj-$(CONFIG_IA64_CYCLONE) += cyclone.o | 25 | obj-$(CONFIG_IA64_CYCLONE) += cyclone.o |
26 | obj-$(CONFIG_CPU_FREQ) += cpufreq/ | ||
27 | obj-$(CONFIG_IA64_MCA_RECOVERY) += mca_recovery.o | 26 | obj-$(CONFIG_IA64_MCA_RECOVERY) += mca_recovery.o |
28 | obj-$(CONFIG_KPROBES) += kprobes.o jprobes.o | 27 | obj-$(CONFIG_KPROBES) += kprobes.o jprobes.o |
29 | obj-$(CONFIG_DYNAMIC_FTRACE) += ftrace.o | 28 | obj-$(CONFIG_DYNAMIC_FTRACE) += ftrace.o |
diff --git a/arch/ia64/kernel/cpufreq/Kconfig b/arch/ia64/kernel/cpufreq/Kconfig deleted file mode 100644 index 2d9d5279b981..000000000000 --- a/arch/ia64/kernel/cpufreq/Kconfig +++ /dev/null | |||
@@ -1,29 +0,0 @@ | |||
1 | |||
2 | # | ||
3 | # CPU Frequency scaling | ||
4 | # | ||
5 | |||
6 | menu "CPU Frequency scaling" | ||
7 | |||
8 | source "drivers/cpufreq/Kconfig" | ||
9 | |||
10 | if CPU_FREQ | ||
11 | |||
12 | comment "CPUFreq processor drivers" | ||
13 | |||
14 | config IA64_ACPI_CPUFREQ | ||
15 | tristate "ACPI Processor P-States driver" | ||
16 | select CPU_FREQ_TABLE | ||
17 | depends on ACPI_PROCESSOR | ||
18 | help | ||
19 | This driver adds a CPUFreq driver which utilizes the ACPI | ||
20 | Processor Performance States. | ||
21 | |||
22 | For details, take a look at <file:Documentation/cpu-freq/>. | ||
23 | |||
24 | If in doubt, say N. | ||
25 | |||
26 | endif # CPU_FREQ | ||
27 | |||
28 | endmenu | ||
29 | |||
diff --git a/arch/ia64/kernel/cpufreq/Makefile b/arch/ia64/kernel/cpufreq/Makefile deleted file mode 100644 index 4838f2a57c7a..000000000000 --- a/arch/ia64/kernel/cpufreq/Makefile +++ /dev/null | |||
@@ -1,2 +0,0 @@ | |||
1 | obj-$(CONFIG_IA64_ACPI_CPUFREQ) += acpi-cpufreq.o | ||
2 | |||
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig index 51244bf97271..26d21d3d6a98 100644 --- a/arch/mips/Kconfig +++ b/arch/mips/Kconfig | |||
@@ -2538,7 +2538,14 @@ source "kernel/power/Kconfig" | |||
2538 | 2538 | ||
2539 | endmenu | 2539 | endmenu |
2540 | 2540 | ||
2541 | source "arch/mips/kernel/cpufreq/Kconfig" | 2541 | config MIPS_EXTERNAL_TIMER |
2542 | bool | ||
2543 | |||
2544 | if CPU_SUPPORTS_CPUFREQ && MIPS_EXTERNAL_TIMER | ||
2545 | menu "CPU Power Management" | ||
2546 | source "drivers/cpufreq/Kconfig" | ||
2547 | endmenu | ||
2548 | endif | ||
2542 | 2549 | ||
2543 | source "net/Kconfig" | 2550 | source "net/Kconfig" |
2544 | 2551 | ||
diff --git a/arch/mips/kernel/Makefile b/arch/mips/kernel/Makefile index de75fb50562b..520a908d45d6 100644 --- a/arch/mips/kernel/Makefile +++ b/arch/mips/kernel/Makefile | |||
@@ -92,8 +92,6 @@ CFLAGS_cpu-bugs64.o = $(shell if $(CC) $(KBUILD_CFLAGS) -Wa,-mdaddi -c -o /dev/n | |||
92 | 92 | ||
93 | obj-$(CONFIG_HAVE_STD_PC_SERIAL_PORT) += 8250-platform.o | 93 | obj-$(CONFIG_HAVE_STD_PC_SERIAL_PORT) += 8250-platform.o |
94 | 94 | ||
95 | obj-$(CONFIG_MIPS_CPUFREQ) += cpufreq/ | ||
96 | |||
97 | obj-$(CONFIG_PERF_EVENTS) += perf_event.o | 95 | obj-$(CONFIG_PERF_EVENTS) += perf_event.o |
98 | obj-$(CONFIG_HW_PERF_EVENTS) += perf_event_mipsxx.o | 96 | obj-$(CONFIG_HW_PERF_EVENTS) += perf_event_mipsxx.o |
99 | 97 | ||
diff --git a/arch/mips/kernel/cpufreq/Kconfig b/arch/mips/kernel/cpufreq/Kconfig deleted file mode 100644 index 58c601eee6fd..000000000000 --- a/arch/mips/kernel/cpufreq/Kconfig +++ /dev/null | |||
@@ -1,41 +0,0 @@ | |||
1 | # | ||
2 | # CPU Frequency scaling | ||
3 | # | ||
4 | |||
5 | config MIPS_EXTERNAL_TIMER | ||
6 | bool | ||
7 | |||
8 | config MIPS_CPUFREQ | ||
9 | bool | ||
10 | default y | ||
11 | depends on CPU_SUPPORTS_CPUFREQ && MIPS_EXTERNAL_TIMER | ||
12 | |||
13 | if MIPS_CPUFREQ | ||
14 | |||
15 | menu "CPU Frequency scaling" | ||
16 | |||
17 | source "drivers/cpufreq/Kconfig" | ||
18 | |||
19 | if CPU_FREQ | ||
20 | |||
21 | comment "CPUFreq processor drivers" | ||
22 | |||
23 | config LOONGSON2_CPUFREQ | ||
24 | tristate "Loongson2 CPUFreq Driver" | ||
25 | select CPU_FREQ_TABLE | ||
26 | depends on MIPS_CPUFREQ | ||
27 | help | ||
28 | This option adds a CPUFreq driver for loongson processors which | ||
29 | support software configurable cpu frequency. | ||
30 | |||
31 | Loongson2F and it's successors support this feature. | ||
32 | |||
33 | For details, take a look at <file:Documentation/cpu-freq/>. | ||
34 | |||
35 | If in doubt, say N. | ||
36 | |||
37 | endif # CPU_FREQ | ||
38 | |||
39 | endmenu | ||
40 | |||
41 | endif # MIPS_CPUFREQ | ||
diff --git a/arch/mips/kernel/cpufreq/Makefile b/arch/mips/kernel/cpufreq/Makefile deleted file mode 100644 index 05a5715ee38c..000000000000 --- a/arch/mips/kernel/cpufreq/Makefile +++ /dev/null | |||
@@ -1,5 +0,0 @@ | |||
1 | # | ||
2 | # Makefile for the Linux/MIPS cpufreq. | ||
3 | # | ||
4 | |||
5 | obj-$(CONFIG_LOONGSON2_CPUFREQ) += loongson2_cpufreq.o | ||
diff --git a/arch/powerpc/platforms/cell/Kconfig b/arch/powerpc/platforms/cell/Kconfig index 53aaefeb3386..9978f594cac0 100644 --- a/arch/powerpc/platforms/cell/Kconfig +++ b/arch/powerpc/platforms/cell/Kconfig | |||
@@ -113,34 +113,10 @@ config CBE_THERM | |||
113 | default m | 113 | default m |
114 | depends on CBE_RAS && SPU_BASE | 114 | depends on CBE_RAS && SPU_BASE |
115 | 115 | ||
116 | config CBE_CPUFREQ | ||
117 | tristate "CBE frequency scaling" | ||
118 | depends on CBE_RAS && CPU_FREQ | ||
119 | default m | ||
120 | help | ||
121 | This adds the cpufreq driver for Cell BE processors. | ||
122 | For details, take a look at <file:Documentation/cpu-freq/>. | ||
123 | If you don't have such processor, say N | ||
124 | |||
125 | config CBE_CPUFREQ_PMI_ENABLE | ||
126 | bool "CBE frequency scaling using PMI interface" | ||
127 | depends on CBE_CPUFREQ | ||
128 | default n | ||
129 | help | ||
130 | Select this, if you want to use the PMI interface | ||
131 | to switch frequencies. Using PMI, the | ||
132 | processor will not only be able to run at lower speed, | ||
133 | but also at lower core voltage. | ||
134 | |||
135 | config CBE_CPUFREQ_PMI | ||
136 | tristate | ||
137 | depends on CBE_CPUFREQ_PMI_ENABLE | ||
138 | default CBE_CPUFREQ | ||
139 | |||
140 | config PPC_PMI | 116 | config PPC_PMI |
141 | tristate | 117 | tristate |
142 | default y | 118 | default y |
143 | depends on CBE_CPUFREQ_PMI || PPC_IBM_CELL_POWERBUTTON | 119 | depends on CPU_FREQ_CBE_PMI || PPC_IBM_CELL_POWERBUTTON |
144 | help | 120 | help |
145 | PMI (Platform Management Interrupt) is a way to | 121 | PMI (Platform Management Interrupt) is a way to |
146 | communicate with the BMC (Baseboard Management Controller). | 122 | communicate with the BMC (Baseboard Management Controller). |
diff --git a/arch/powerpc/platforms/cell/Makefile b/arch/powerpc/platforms/cell/Makefile index a4a89350bcfc..fe053e7c73ee 100644 --- a/arch/powerpc/platforms/cell/Makefile +++ b/arch/powerpc/platforms/cell/Makefile | |||
@@ -5,9 +5,6 @@ obj-$(CONFIG_PPC_CELL_NATIVE) += iommu.o setup.o spider-pic.o \ | |||
5 | obj-$(CONFIG_CBE_RAS) += ras.o | 5 | obj-$(CONFIG_CBE_RAS) += ras.o |
6 | 6 | ||
7 | obj-$(CONFIG_CBE_THERM) += cbe_thermal.o | 7 | obj-$(CONFIG_CBE_THERM) += cbe_thermal.o |
8 | obj-$(CONFIG_CBE_CPUFREQ_PMI) += cbe_cpufreq_pmi.o | ||
9 | obj-$(CONFIG_CBE_CPUFREQ) += cbe-cpufreq.o | ||
10 | cbe-cpufreq-y += cbe_cpufreq_pervasive.o cbe_cpufreq.o | ||
11 | obj-$(CONFIG_CBE_CPUFREQ_SPU_GOVERNOR) += cpufreq_spudemand.o | 8 | obj-$(CONFIG_CBE_CPUFREQ_SPU_GOVERNOR) += cpufreq_spudemand.o |
12 | 9 | ||
13 | obj-$(CONFIG_PPC_IBM_CELL_POWERBUTTON) += cbe_powerbutton.o | 10 | obj-$(CONFIG_PPC_IBM_CELL_POWERBUTTON) += cbe_powerbutton.o |
diff --git a/arch/powerpc/platforms/pasemi/cpufreq.c b/arch/powerpc/platforms/pasemi/cpufreq.c index 890f30e70f98..be1e7958909e 100644 --- a/arch/powerpc/platforms/pasemi/cpufreq.c +++ b/arch/powerpc/platforms/pasemi/cpufreq.c | |||
@@ -273,10 +273,9 @@ static int pas_cpufreq_target(struct cpufreq_policy *policy, | |||
273 | 273 | ||
274 | freqs.old = policy->cur; | 274 | freqs.old = policy->cur; |
275 | freqs.new = pas_freqs[pas_astate_new].frequency; | 275 | freqs.new = pas_freqs[pas_astate_new].frequency; |
276 | freqs.cpu = policy->cpu; | ||
277 | 276 | ||
278 | mutex_lock(&pas_switch_mutex); | 277 | mutex_lock(&pas_switch_mutex); |
279 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 278 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
280 | 279 | ||
281 | pr_debug("setting frequency for cpu %d to %d kHz, 1/%d of max frequency\n", | 280 | pr_debug("setting frequency for cpu %d to %d kHz, 1/%d of max frequency\n", |
282 | policy->cpu, | 281 | policy->cpu, |
@@ -288,7 +287,7 @@ static int pas_cpufreq_target(struct cpufreq_policy *policy, | |||
288 | for_each_online_cpu(i) | 287 | for_each_online_cpu(i) |
289 | set_astate(i, pas_astate_new); | 288 | set_astate(i, pas_astate_new); |
290 | 289 | ||
291 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 290 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
292 | mutex_unlock(&pas_switch_mutex); | 291 | mutex_unlock(&pas_switch_mutex); |
293 | 292 | ||
294 | ppc_proc_freq = freqs.new * 1000ul; | 293 | ppc_proc_freq = freqs.new * 1000ul; |
diff --git a/arch/powerpc/platforms/powermac/cpufreq_32.c b/arch/powerpc/platforms/powermac/cpufreq_32.c index 311b804353b1..3104fad82480 100644 --- a/arch/powerpc/platforms/powermac/cpufreq_32.c +++ b/arch/powerpc/platforms/powermac/cpufreq_32.c | |||
@@ -335,7 +335,8 @@ static int pmu_set_cpu_speed(int low_speed) | |||
335 | return 0; | 335 | return 0; |
336 | } | 336 | } |
337 | 337 | ||
338 | static int do_set_cpu_speed(int speed_mode, int notify) | 338 | static int do_set_cpu_speed(struct cpufreq_policy *policy, int speed_mode, |
339 | int notify) | ||
339 | { | 340 | { |
340 | struct cpufreq_freqs freqs; | 341 | struct cpufreq_freqs freqs; |
341 | unsigned long l3cr; | 342 | unsigned long l3cr; |
@@ -343,13 +344,12 @@ static int do_set_cpu_speed(int speed_mode, int notify) | |||
343 | 344 | ||
344 | freqs.old = cur_freq; | 345 | freqs.old = cur_freq; |
345 | freqs.new = (speed_mode == CPUFREQ_HIGH) ? hi_freq : low_freq; | 346 | freqs.new = (speed_mode == CPUFREQ_HIGH) ? hi_freq : low_freq; |
346 | freqs.cpu = smp_processor_id(); | ||
347 | 347 | ||
348 | if (freqs.old == freqs.new) | 348 | if (freqs.old == freqs.new) |
349 | return 0; | 349 | return 0; |
350 | 350 | ||
351 | if (notify) | 351 | if (notify) |
352 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 352 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
353 | if (speed_mode == CPUFREQ_LOW && | 353 | if (speed_mode == CPUFREQ_LOW && |
354 | cpu_has_feature(CPU_FTR_L3CR)) { | 354 | cpu_has_feature(CPU_FTR_L3CR)) { |
355 | l3cr = _get_L3CR(); | 355 | l3cr = _get_L3CR(); |
@@ -366,7 +366,7 @@ static int do_set_cpu_speed(int speed_mode, int notify) | |||
366 | _set_L3CR(prev_l3cr); | 366 | _set_L3CR(prev_l3cr); |
367 | } | 367 | } |
368 | if (notify) | 368 | if (notify) |
369 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 369 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
370 | cur_freq = (speed_mode == CPUFREQ_HIGH) ? hi_freq : low_freq; | 370 | cur_freq = (speed_mode == CPUFREQ_HIGH) ? hi_freq : low_freq; |
371 | 371 | ||
372 | return 0; | 372 | return 0; |
@@ -393,7 +393,7 @@ static int pmac_cpufreq_target( struct cpufreq_policy *policy, | |||
393 | target_freq, relation, &newstate)) | 393 | target_freq, relation, &newstate)) |
394 | return -EINVAL; | 394 | return -EINVAL; |
395 | 395 | ||
396 | rc = do_set_cpu_speed(newstate, 1); | 396 | rc = do_set_cpu_speed(policy, newstate, 1); |
397 | 397 | ||
398 | ppc_proc_freq = cur_freq * 1000ul; | 398 | ppc_proc_freq = cur_freq * 1000ul; |
399 | return rc; | 399 | return rc; |
@@ -442,7 +442,7 @@ static int pmac_cpufreq_suspend(struct cpufreq_policy *policy) | |||
442 | no_schedule = 1; | 442 | no_schedule = 1; |
443 | sleep_freq = cur_freq; | 443 | sleep_freq = cur_freq; |
444 | if (cur_freq == low_freq && !is_pmu_based) | 444 | if (cur_freq == low_freq && !is_pmu_based) |
445 | do_set_cpu_speed(CPUFREQ_HIGH, 0); | 445 | do_set_cpu_speed(policy, CPUFREQ_HIGH, 0); |
446 | return 0; | 446 | return 0; |
447 | } | 447 | } |
448 | 448 | ||
@@ -458,7 +458,7 @@ static int pmac_cpufreq_resume(struct cpufreq_policy *policy) | |||
458 | * is that we force a switch to whatever it was, which is | 458 | * is that we force a switch to whatever it was, which is |
459 | * probably high speed due to our suspend() routine | 459 | * probably high speed due to our suspend() routine |
460 | */ | 460 | */ |
461 | do_set_cpu_speed(sleep_freq == low_freq ? | 461 | do_set_cpu_speed(policy, sleep_freq == low_freq ? |
462 | CPUFREQ_LOW : CPUFREQ_HIGH, 0); | 462 | CPUFREQ_LOW : CPUFREQ_HIGH, 0); |
463 | 463 | ||
464 | ppc_proc_freq = cur_freq * 1000ul; | 464 | ppc_proc_freq = cur_freq * 1000ul; |
diff --git a/arch/powerpc/platforms/powermac/cpufreq_64.c b/arch/powerpc/platforms/powermac/cpufreq_64.c index 9650c6029c82..7ba423431cfe 100644 --- a/arch/powerpc/platforms/powermac/cpufreq_64.c +++ b/arch/powerpc/platforms/powermac/cpufreq_64.c | |||
@@ -339,11 +339,10 @@ static int g5_cpufreq_target(struct cpufreq_policy *policy, | |||
339 | 339 | ||
340 | freqs.old = g5_cpu_freqs[g5_pmode_cur].frequency; | 340 | freqs.old = g5_cpu_freqs[g5_pmode_cur].frequency; |
341 | freqs.new = g5_cpu_freqs[newstate].frequency; | 341 | freqs.new = g5_cpu_freqs[newstate].frequency; |
342 | freqs.cpu = 0; | ||
343 | 342 | ||
344 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 343 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
345 | rc = g5_switch_freq(newstate); | 344 | rc = g5_switch_freq(newstate); |
346 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 345 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
347 | 346 | ||
348 | mutex_unlock(&g5_switch_mutex); | 347 | mutex_unlock(&g5_switch_mutex); |
349 | 348 | ||
diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig index 5e859633ce69..06e3163bdc11 100644 --- a/arch/sh/Kconfig +++ b/arch/sh/Kconfig | |||
@@ -624,25 +624,7 @@ config SH_CLK_CPG_LEGACY | |||
624 | endmenu | 624 | endmenu |
625 | 625 | ||
626 | menu "CPU Frequency scaling" | 626 | menu "CPU Frequency scaling" |
627 | |||
628 | source "drivers/cpufreq/Kconfig" | 627 | source "drivers/cpufreq/Kconfig" |
629 | |||
630 | config SH_CPU_FREQ | ||
631 | tristate "SuperH CPU Frequency driver" | ||
632 | depends on CPU_FREQ | ||
633 | select CPU_FREQ_TABLE | ||
634 | help | ||
635 | This adds the cpufreq driver for SuperH. Any CPU that supports | ||
636 | clock rate rounding through the clock framework can use this | ||
637 | driver. While it will make the kernel slightly larger, this is | ||
638 | harmless for CPUs that don't support rate rounding. The driver | ||
639 | will also generate a notice in the boot log before disabling | ||
640 | itself if the CPU in question is not capable of rate rounding. | ||
641 | |||
642 | For details, take a look at <file:Documentation/cpu-freq>. | ||
643 | |||
644 | If unsure, say N. | ||
645 | |||
646 | endmenu | 628 | endmenu |
647 | 629 | ||
648 | source "arch/sh/drivers/Kconfig" | 630 | source "arch/sh/drivers/Kconfig" |
diff --git a/arch/sh/kernel/Makefile b/arch/sh/kernel/Makefile index f259b37874e9..261c8bfd75ce 100644 --- a/arch/sh/kernel/Makefile +++ b/arch/sh/kernel/Makefile | |||
@@ -31,7 +31,6 @@ obj-$(CONFIG_VSYSCALL) += vsyscall/ | |||
31 | obj-$(CONFIG_SMP) += smp.o | 31 | obj-$(CONFIG_SMP) += smp.o |
32 | obj-$(CONFIG_SH_STANDARD_BIOS) += sh_bios.o | 32 | obj-$(CONFIG_SH_STANDARD_BIOS) += sh_bios.o |
33 | obj-$(CONFIG_KGDB) += kgdb.o | 33 | obj-$(CONFIG_KGDB) += kgdb.o |
34 | obj-$(CONFIG_SH_CPU_FREQ) += cpufreq.o | ||
35 | obj-$(CONFIG_MODULES) += sh_ksyms_$(BITS).o module.o | 34 | obj-$(CONFIG_MODULES) += sh_ksyms_$(BITS).o module.o |
36 | obj-$(CONFIG_KEXEC) += machine_kexec.o relocate_kernel.o | 35 | obj-$(CONFIG_KEXEC) += machine_kexec.o relocate_kernel.o |
37 | obj-$(CONFIG_CRASH_DUMP) += crash_dump.o | 36 | obj-$(CONFIG_CRASH_DUMP) += crash_dump.o |
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig index 3d361f236308..c85b76100e3f 100644 --- a/arch/sparc/Kconfig +++ b/arch/sparc/Kconfig | |||
@@ -254,29 +254,6 @@ config HOTPLUG_CPU | |||
254 | 254 | ||
255 | if SPARC64 | 255 | if SPARC64 |
256 | source "drivers/cpufreq/Kconfig" | 256 | source "drivers/cpufreq/Kconfig" |
257 | |||
258 | config US3_FREQ | ||
259 | tristate "UltraSPARC-III CPU Frequency driver" | ||
260 | depends on CPU_FREQ | ||
261 | select CPU_FREQ_TABLE | ||
262 | help | ||
263 | This adds the CPUFreq driver for UltraSPARC-III processors. | ||
264 | |||
265 | For details, take a look at <file:Documentation/cpu-freq>. | ||
266 | |||
267 | If in doubt, say N. | ||
268 | |||
269 | config US2E_FREQ | ||
270 | tristate "UltraSPARC-IIe CPU Frequency driver" | ||
271 | depends on CPU_FREQ | ||
272 | select CPU_FREQ_TABLE | ||
273 | help | ||
274 | This adds the CPUFreq driver for UltraSPARC-IIe processors. | ||
275 | |||
276 | For details, take a look at <file:Documentation/cpu-freq>. | ||
277 | |||
278 | If in doubt, say N. | ||
279 | |||
280 | endif | 257 | endif |
281 | 258 | ||
282 | config US3_MC | 259 | config US3_MC |
diff --git a/arch/sparc/kernel/Makefile b/arch/sparc/kernel/Makefile index 6cf591b7e1c6..5276fd4e9d03 100644 --- a/arch/sparc/kernel/Makefile +++ b/arch/sparc/kernel/Makefile | |||
@@ -102,9 +102,6 @@ obj-$(CONFIG_PCI_MSI) += pci_msi.o | |||
102 | 102 | ||
103 | obj-$(CONFIG_COMPAT) += sys32.o sys_sparc32.o signal32.o | 103 | obj-$(CONFIG_COMPAT) += sys32.o sys_sparc32.o signal32.o |
104 | 104 | ||
105 | # sparc64 cpufreq | ||
106 | obj-$(CONFIG_US3_FREQ) += us3_cpufreq.o | ||
107 | obj-$(CONFIG_US2E_FREQ) += us2e_cpufreq.o | ||
108 | obj-$(CONFIG_US3_MC) += chmc.o | 105 | obj-$(CONFIG_US3_MC) += chmc.o |
109 | 106 | ||
110 | obj-$(CONFIG_KPROBES) += kprobes.o | 107 | obj-$(CONFIG_KPROBES) += kprobes.o |
diff --git a/arch/unicore32/kernel/Makefile b/arch/unicore32/kernel/Makefile index fa497e0efe5a..607a72f2ae35 100644 --- a/arch/unicore32/kernel/Makefile +++ b/arch/unicore32/kernel/Makefile | |||
@@ -9,7 +9,6 @@ obj-y += setup.o signal.o sys.o stacktrace.o traps.o | |||
9 | obj-$(CONFIG_MODULES) += ksyms.o module.o | 9 | obj-$(CONFIG_MODULES) += ksyms.o module.o |
10 | obj-$(CONFIG_EARLY_PRINTK) += early_printk.o | 10 | obj-$(CONFIG_EARLY_PRINTK) += early_printk.o |
11 | 11 | ||
12 | obj-$(CONFIG_CPU_FREQ) += cpu-ucv2.o | ||
13 | obj-$(CONFIG_UNICORE_FPU_F64) += fpu-ucf64.o | 12 | obj-$(CONFIG_UNICORE_FPU_F64) += fpu-ucf64.o |
14 | 13 | ||
15 | # obj-y for architecture PKUnity v3 | 14 | # obj-y for architecture PKUnity v3 |
diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h index 93fe929d1cee..9e22520a97ee 100644 --- a/arch/x86/include/asm/cpufeature.h +++ b/arch/x86/include/asm/cpufeature.h | |||
@@ -182,6 +182,7 @@ | |||
182 | #define X86_FEATURE_PTS (7*32+ 6) /* Intel Package Thermal Status */ | 182 | #define X86_FEATURE_PTS (7*32+ 6) /* Intel Package Thermal Status */ |
183 | #define X86_FEATURE_DTHERM (7*32+ 7) /* Digital Thermal Sensor */ | 183 | #define X86_FEATURE_DTHERM (7*32+ 7) /* Digital Thermal Sensor */ |
184 | #define X86_FEATURE_HW_PSTATE (7*32+ 8) /* AMD HW-PState */ | 184 | #define X86_FEATURE_HW_PSTATE (7*32+ 8) /* AMD HW-PState */ |
185 | #define X86_FEATURE_PROC_FEEDBACK (7*32+ 9) /* AMD ProcFeedbackInterface */ | ||
185 | 186 | ||
186 | /* Virtualization flags: Linux defined, word 8 */ | 187 | /* Virtualization flags: Linux defined, word 8 */ |
187 | #define X86_FEATURE_TPR_SHADOW (8*32+ 0) /* Intel TPR Shadow */ | 188 | #define X86_FEATURE_TPR_SHADOW (8*32+ 0) /* Intel TPR Shadow */ |
diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c index ee8e9abc859f..d92b5dad15dd 100644 --- a/arch/x86/kernel/cpu/scattered.c +++ b/arch/x86/kernel/cpu/scattered.c | |||
@@ -39,8 +39,9 @@ void __cpuinit init_scattered_cpuid_features(struct cpuinfo_x86 *c) | |||
39 | { X86_FEATURE_APERFMPERF, CR_ECX, 0, 0x00000006, 0 }, | 39 | { X86_FEATURE_APERFMPERF, CR_ECX, 0, 0x00000006, 0 }, |
40 | { X86_FEATURE_EPB, CR_ECX, 3, 0x00000006, 0 }, | 40 | { X86_FEATURE_EPB, CR_ECX, 3, 0x00000006, 0 }, |
41 | { X86_FEATURE_XSAVEOPT, CR_EAX, 0, 0x0000000d, 1 }, | 41 | { X86_FEATURE_XSAVEOPT, CR_EAX, 0, 0x0000000d, 1 }, |
42 | { X86_FEATURE_CPB, CR_EDX, 9, 0x80000007, 0 }, | ||
43 | { X86_FEATURE_HW_PSTATE, CR_EDX, 7, 0x80000007, 0 }, | 42 | { X86_FEATURE_HW_PSTATE, CR_EDX, 7, 0x80000007, 0 }, |
43 | { X86_FEATURE_CPB, CR_EDX, 9, 0x80000007, 0 }, | ||
44 | { X86_FEATURE_PROC_FEEDBACK, CR_EDX,11, 0x80000007, 0 }, | ||
44 | { X86_FEATURE_NPT, CR_EDX, 0, 0x8000000a, 0 }, | 45 | { X86_FEATURE_NPT, CR_EDX, 0, 0x8000000a, 0 }, |
45 | { X86_FEATURE_LBRV, CR_EDX, 1, 0x8000000a, 0 }, | 46 | { X86_FEATURE_LBRV, CR_EDX, 1, 0x8000000a, 0 }, |
46 | { X86_FEATURE_SVML, CR_EDX, 2, 0x8000000a, 0 }, | 47 | { X86_FEATURE_SVML, CR_EDX, 2, 0x8000000a, 0 }, |
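Editor's note: the scattered-feature table entries read { flag, CPUID register, bit, leaf, sub-leaf }, so the new X86_FEATURE_PROC_FEEDBACK flag mirrors bit 11 of EDX from CPUID leaf 0x80000007. A rough sketch of the equivalent raw check follows; in-kernel code would normally just test static_cpu_has(X86_FEATURE_PROC_FEEDBACK), as the driver added later in this series does.

    #include <linux/types.h>
    #include <asm/processor.h>

    /* Sketch: the raw CPUID check that X86_FEATURE_PROC_FEEDBACK summarizes. */
    static bool has_proc_feedback(void)
    {
            unsigned int eax, ebx, ecx, edx;

            if (boot_cpu_data.extended_cpuid_level < 0x80000007)
                    return false;

            cpuid(0x80000007, &eax, &ebx, &ecx, &edx);
            return edx & (1 << 11);
    }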
diff --git a/drivers/cpufreq/Kconfig b/drivers/cpufreq/Kconfig index cbcb21e32771..a1488f58f6ca 100644 --- a/drivers/cpufreq/Kconfig +++ b/drivers/cpufreq/Kconfig | |||
@@ -205,10 +205,99 @@ depends on ARM | |||
205 | source "drivers/cpufreq/Kconfig.arm" | 205 | source "drivers/cpufreq/Kconfig.arm" |
206 | endmenu | 206 | endmenu |
207 | 207 | ||
208 | menu "AVR32 CPU frequency scaling drivers" | ||
209 | depends on AVR32 | ||
210 | |||
211 | config AVR32_AT32AP_CPUFREQ | ||
212 | bool "CPU frequency driver for AT32AP" | ||
213 | depends on PLATFORM_AT32AP | ||
214 | default n | ||
215 | help | ||
216 | This enables the CPU frequency driver for AT32AP processors. | ||
217 | If in doubt, say N. | ||
218 | |||
219 | endmenu | ||
220 | |||
221 | menu "CPUFreq processor drivers" | ||
222 | depends on IA64 | ||
223 | |||
224 | config IA64_ACPI_CPUFREQ | ||
225 | tristate "ACPI Processor P-States driver" | ||
226 | select CPU_FREQ_TABLE | ||
227 | depends on ACPI_PROCESSOR | ||
228 | help | ||
229 | This driver adds a CPUFreq driver which utilizes the ACPI | ||
230 | Processor Performance States. | ||
231 | |||
232 | For details, take a look at <file:Documentation/cpu-freq/>. | ||
233 | |||
234 | If in doubt, say N. | ||
235 | |||
236 | endmenu | ||
237 | |||
238 | menu "MIPS CPUFreq processor drivers" | ||
239 | depends on MIPS | ||
240 | |||
241 | config LOONGSON2_CPUFREQ | ||
242 | tristate "Loongson2 CPUFreq Driver" | ||
243 | select CPU_FREQ_TABLE | ||
244 | help | ||
245 | This option adds a CPUFreq driver for Loongson processors which | ||
246 | support software-configurable CPU frequency. | ||
247 | |||
248 | Loongson2F and its successors support this feature. | ||
249 | |||
250 | For details, take a look at <file:Documentation/cpu-freq/>. | ||
251 | |||
252 | If in doubt, say N. | ||
253 | |||
254 | endmenu | ||
255 | |||
208 | menu "PowerPC CPU frequency scaling drivers" | 256 | menu "PowerPC CPU frequency scaling drivers" |
209 | depends on PPC32 || PPC64 | 257 | depends on PPC32 || PPC64 |
210 | source "drivers/cpufreq/Kconfig.powerpc" | 258 | source "drivers/cpufreq/Kconfig.powerpc" |
211 | endmenu | 259 | endmenu |
212 | 260 | ||
261 | menu "SPARC CPU frequency scaling drivers" | ||
262 | depends on SPARC64 | ||
263 | config SPARC_US3_CPUFREQ | ||
264 | tristate "UltraSPARC-III CPU Frequency driver" | ||
265 | select CPU_FREQ_TABLE | ||
266 | help | ||
267 | This adds the CPUFreq driver for UltraSPARC-III processors. | ||
268 | |||
269 | For details, take a look at <file:Documentation/cpu-freq>. | ||
270 | |||
271 | If in doubt, say N. | ||
272 | |||
273 | config SPARC_US2E_CPUFREQ | ||
274 | tristate "UltraSPARC-IIe CPU Frequency driver" | ||
275 | select CPU_FREQ_TABLE | ||
276 | help | ||
277 | This adds the CPUFreq driver for UltraSPARC-IIe processors. | ||
278 | |||
279 | For details, take a look at <file:Documentation/cpu-freq>. | ||
280 | |||
281 | If in doubt, say N. | ||
282 | endmenu | ||
283 | |||
284 | menu "SH CPU Frequency scaling" | ||
285 | depends on SUPERH | ||
286 | config SH_CPU_FREQ | ||
287 | tristate "SuperH CPU Frequency driver" | ||
288 | select CPU_FREQ_TABLE | ||
289 | help | ||
290 | This adds the cpufreq driver for SuperH. Any CPU that supports | ||
291 | clock rate rounding through the clock framework can use this | ||
292 | driver. While it will make the kernel slightly larger, this is | ||
293 | harmless for CPUs that don't support rate rounding. The driver | ||
294 | will also generate a notice in the boot log before disabling | ||
295 | itself if the CPU in question is not capable of rate rounding. | ||
296 | |||
297 | For details, take a look at <file:Documentation/cpu-freq>. | ||
298 | |||
299 | If unsure, say N. | ||
300 | endmenu | ||
301 | |||
213 | endif | 302 | endif |
214 | endmenu | 303 | endmenu |
diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm index 030ddf6dd3f1..f3af18b9acc5 100644 --- a/drivers/cpufreq/Kconfig.arm +++ b/drivers/cpufreq/Kconfig.arm | |||
@@ -2,6 +2,93 @@ | |||
2 | # ARM CPU Frequency scaling drivers | 2 | # ARM CPU Frequency scaling drivers |
3 | # | 3 | # |
4 | 4 | ||
5 | config ARM_BIG_LITTLE_CPUFREQ | ||
6 | tristate | ||
7 | depends on ARM_CPU_TOPOLOGY | ||
8 | |||
9 | config ARM_DT_BL_CPUFREQ | ||
10 | tristate "Generic ARM big LITTLE CPUfreq driver probed via DT" | ||
11 | select ARM_BIG_LITTLE_CPUFREQ | ||
12 | depends on OF && HAVE_CLK | ||
13 | help | ||
14 | This enables the Generic CPUfreq driver for ARM big.LITTLE platforms. | ||
15 | It gets the frequency tables from DT. | ||
16 | |||
17 | config ARM_EXYNOS_CPUFREQ | ||
18 | bool "SAMSUNG EXYNOS SoCs" | ||
19 | depends on ARCH_EXYNOS | ||
20 | default y | ||
21 | help | ||
22 | This adds the CPUFreq driver common part for Samsung | ||
23 | EXYNOS SoCs. | ||
24 | |||
25 | If in doubt, say N. | ||
26 | |||
27 | config ARM_EXYNOS4210_CPUFREQ | ||
28 | def_bool CPU_EXYNOS4210 | ||
29 | help | ||
30 | This adds the CPUFreq driver for Samsung EXYNOS4210 | ||
31 | SoC (S5PV310 or S5PC210). | ||
32 | |||
33 | config ARM_EXYNOS4X12_CPUFREQ | ||
34 | def_bool (SOC_EXYNOS4212 || SOC_EXYNOS4412) | ||
35 | help | ||
36 | This adds the CPUFreq driver for Samsung EXYNOS4X12 | ||
37 | SoC (EXYNOS4212 or EXYNOS4412). | ||
38 | |||
39 | config ARM_EXYNOS5250_CPUFREQ | ||
40 | def_bool SOC_EXYNOS5250 | ||
41 | help | ||
42 | This adds the CPUFreq driver for Samsung EXYNOS5250 | ||
43 | SoC. | ||
44 | |||
45 | config ARM_EXYNOS5440_CPUFREQ | ||
46 | def_bool SOC_EXYNOS5440 | ||
47 | depends on HAVE_CLK && PM_OPP && OF | ||
48 | help | ||
49 | This adds the CPUFreq driver for Samsung EXYNOS5440 | ||
50 | SoC. The exynos5440 clock controller differs from those of | ||
51 | earlier Exynos SoCs, so this driver does not use the common | ||
52 | Exynos cpufreq framework. | ||
53 | |||
54 | config ARM_HIGHBANK_CPUFREQ | ||
55 | tristate "Calxeda Highbank-based" | ||
56 | depends on ARCH_HIGHBANK | ||
57 | select CPU_FREQ_TABLE | ||
58 | select GENERIC_CPUFREQ_CPU0 | ||
59 | select PM_OPP | ||
60 | select REGULATOR | ||
61 | |||
62 | default m | ||
63 | help | ||
64 | This adds the CPUFreq driver for Calxeda Highbank SoC | ||
65 | based boards. | ||
66 | |||
67 | If in doubt, say N. | ||
68 | |||
69 | config ARM_IMX6Q_CPUFREQ | ||
70 | tristate "Freescale i.MX6Q cpufreq support" | ||
71 | depends on SOC_IMX6Q | ||
72 | depends on REGULATOR_ANATOP | ||
73 | help | ||
74 | This adds cpufreq driver support for Freescale i.MX6Q SOC. | ||
75 | |||
76 | If in doubt, say N. | ||
77 | |||
78 | config ARM_INTEGRATOR | ||
79 | tristate "CPUfreq driver for ARM Integrator CPUs" | ||
80 | depends on ARCH_INTEGRATOR | ||
81 | default y | ||
82 | help | ||
83 | This enables the CPUfreq driver for ARM Integrator CPUs. | ||
84 | If in doubt, say Y. | ||
85 | |||
86 | config ARM_KIRKWOOD_CPUFREQ | ||
87 | def_bool ARCH_KIRKWOOD && OF | ||
88 | help | ||
89 | This adds the CPUFreq driver for Marvell Kirkwood | ||
90 | SoCs. | ||
91 | |||
5 | config ARM_OMAP2PLUS_CPUFREQ | 92 | config ARM_OMAP2PLUS_CPUFREQ |
6 | bool "TI OMAP2+" | 93 | bool "TI OMAP2+" |
7 | depends on ARCH_OMAP2PLUS | 94 | depends on ARCH_OMAP2PLUS |
@@ -42,6 +129,7 @@ config ARM_S3C64XX_CPUFREQ | |||
42 | config ARM_S5PV210_CPUFREQ | 129 | config ARM_S5PV210_CPUFREQ |
43 | bool "Samsung S5PV210 and S5PC110" | 130 | bool "Samsung S5PV210 and S5PC110" |
44 | depends on CPU_S5PV210 | 131 | depends on CPU_S5PV210 |
132 | select CPU_FREQ_TABLE | ||
45 | default y | 133 | default y |
46 | help | 134 | help |
47 | This adds the CPUFreq driver for Samsung S5PV210 and | 135 | This adds the CPUFreq driver for Samsung S5PV210 and |
@@ -49,48 +137,11 @@ config ARM_S5PV210_CPUFREQ | |||
49 | 137 | ||
50 | If in doubt, say N. | 138 | If in doubt, say N. |
51 | 139 | ||
52 | config ARM_EXYNOS_CPUFREQ | 140 | config ARM_SA1100_CPUFREQ |
53 | bool "SAMSUNG EXYNOS SoCs" | 141 | bool |
54 | depends on ARCH_EXYNOS | ||
55 | default y | ||
56 | help | ||
57 | This adds the CPUFreq driver common part for Samsung | ||
58 | EXYNOS SoCs. | ||
59 | |||
60 | If in doubt, say N. | ||
61 | 142 | ||
62 | config ARM_EXYNOS4210_CPUFREQ | 143 | config ARM_SA1110_CPUFREQ |
63 | def_bool CPU_EXYNOS4210 | 144 | bool |
64 | help | ||
65 | This adds the CPUFreq driver for Samsung EXYNOS4210 | ||
66 | SoC (S5PV310 or S5PC210). | ||
67 | |||
68 | config ARM_EXYNOS4X12_CPUFREQ | ||
69 | def_bool (SOC_EXYNOS4212 || SOC_EXYNOS4412) | ||
70 | help | ||
71 | This adds the CPUFreq driver for Samsung EXYNOS4X12 | ||
72 | SoC (EXYNOS4212 or EXYNOS4412). | ||
73 | |||
74 | config ARM_EXYNOS5250_CPUFREQ | ||
75 | def_bool SOC_EXYNOS5250 | ||
76 | help | ||
77 | This adds the CPUFreq driver for Samsung EXYNOS5250 | ||
78 | SoC. | ||
79 | |||
80 | config ARM_KIRKWOOD_CPUFREQ | ||
81 | def_bool ARCH_KIRKWOOD && OF | ||
82 | help | ||
83 | This adds the CPUFreq driver for Marvell Kirkwood | ||
84 | SoCs. | ||
85 | |||
86 | config ARM_IMX6Q_CPUFREQ | ||
87 | tristate "Freescale i.MX6Q cpufreq support" | ||
88 | depends on SOC_IMX6Q | ||
89 | depends on REGULATOR_ANATOP | ||
90 | help | ||
91 | This adds cpufreq driver support for Freescale i.MX6Q SOC. | ||
92 | |||
93 | If in doubt, say N. | ||
94 | 145 | ||
95 | config ARM_SPEAR_CPUFREQ | 146 | config ARM_SPEAR_CPUFREQ |
96 | bool "SPEAr CPUFreq support" | 147 | bool "SPEAr CPUFreq support" |
@@ -98,18 +149,3 @@ config ARM_SPEAR_CPUFREQ | |||
98 | default y | 149 | default y |
99 | help | 150 | help |
100 | This adds the CPUFreq driver support for SPEAr SOCs. | 151 | This adds the CPUFreq driver support for SPEAr SOCs. |
101 | |||
102 | config ARM_HIGHBANK_CPUFREQ | ||
103 | tristate "Calxeda Highbank-based" | ||
104 | depends on ARCH_HIGHBANK | ||
105 | select CPU_FREQ_TABLE | ||
106 | select GENERIC_CPUFREQ_CPU0 | ||
107 | select PM_OPP | ||
108 | select REGULATOR | ||
109 | |||
110 | default m | ||
111 | help | ||
112 | This adds the CPUFreq driver for Calxeda Highbank SoC | ||
113 | based boards. | ||
114 | |||
115 | If in doubt, say N. | ||
diff --git a/drivers/cpufreq/Kconfig.powerpc b/drivers/cpufreq/Kconfig.powerpc index e76992f79683..9c926ca0d718 100644 --- a/drivers/cpufreq/Kconfig.powerpc +++ b/drivers/cpufreq/Kconfig.powerpc | |||
@@ -1,3 +1,21 @@ | |||
1 | config CPU_FREQ_CBE | ||
2 | tristate "CBE frequency scaling" | ||
3 | depends on CBE_RAS && PPC_CELL | ||
4 | default m | ||
5 | help | ||
6 | This adds the cpufreq driver for Cell BE processors. | ||
7 | For details, take a look at <file:Documentation/cpu-freq/>. | ||
8 | If you don't have such processor, say N | ||
9 | |||
10 | config CPU_FREQ_CBE_PMI | ||
11 | bool "CBE frequency scaling using PMI interface" | ||
12 | depends on CPU_FREQ_CBE | ||
13 | default n | ||
14 | help | ||
15 | Select this, if you want to use the PMI interface to switch | ||
16 | frequencies. Using PMI, the processor will not only be able to run at | ||
17 | lower speed, but also at lower core voltage. | ||
18 | |||
1 | config CPU_FREQ_MAPLE | 19 | config CPU_FREQ_MAPLE |
2 | bool "Support for Maple 970FX Evaluation Board" | 20 | bool "Support for Maple 970FX Evaluation Board" |
3 | depends on PPC_MAPLE | 21 | depends on PPC_MAPLE |
diff --git a/drivers/cpufreq/Kconfig.x86 b/drivers/cpufreq/Kconfig.x86 index d7dc0ed6adb0..2b8a8c374548 100644 --- a/drivers/cpufreq/Kconfig.x86 +++ b/drivers/cpufreq/Kconfig.x86 | |||
@@ -129,6 +129,23 @@ config X86_POWERNOW_K8 | |||
129 | 129 | ||
130 | For details, take a look at <file:Documentation/cpu-freq/>. | 130 | For details, take a look at <file:Documentation/cpu-freq/>. |
131 | 131 | ||
132 | config X86_AMD_FREQ_SENSITIVITY | ||
133 | tristate "AMD frequency sensitivity feedback powersave bias" | ||
134 | depends on CPU_FREQ_GOV_ONDEMAND && X86_ACPI_CPUFREQ && CPU_SUP_AMD | ||
135 | help | ||
136 | This adds an AMD-specific powersave bias function to the ondemand | ||
137 | governor, which allows it to make more power-conscious frequency | ||
138 | change decisions based on feedback from hardware (available on AMD | ||
139 | Family 16h and above). | ||
140 | |||
141 | Hardware feedback tells software how "sensitive" to frequency changes | ||
142 | the CPUs' workloads are. CPU-bound workloads will be more sensitive | ||
143 | -- they will perform better as frequency increases. Memory/IO-bound | ||
144 | workloads will be less sensitive -- they will not necessarily perform | ||
145 | better as frequency increases. | ||
146 | |||
147 | If in doubt, say N. | ||
148 | |||
132 | config X86_GX_SUSPMOD | 149 | config X86_GX_SUSPMOD |
133 | tristate "Cyrix MediaGX/NatSemi Geode Suspend Modulation" | 150 | tristate "Cyrix MediaGX/NatSemi Geode Suspend Modulation" |
134 | depends on X86_32 && PCI | 151 | depends on X86_32 && PCI |
diff --git a/drivers/cpufreq/Makefile b/drivers/cpufreq/Makefile index 863fd1865d45..315b9231feb1 100644 --- a/drivers/cpufreq/Makefile +++ b/drivers/cpufreq/Makefile | |||
@@ -41,23 +41,54 @@ obj-$(CONFIG_X86_SPEEDSTEP_CENTRINO) += speedstep-centrino.o | |||
41 | obj-$(CONFIG_X86_P4_CLOCKMOD) += p4-clockmod.o | 41 | obj-$(CONFIG_X86_P4_CLOCKMOD) += p4-clockmod.o |
42 | obj-$(CONFIG_X86_CPUFREQ_NFORCE2) += cpufreq-nforce2.o | 42 | obj-$(CONFIG_X86_CPUFREQ_NFORCE2) += cpufreq-nforce2.o |
43 | obj-$(CONFIG_X86_INTEL_PSTATE) += intel_pstate.o | 43 | obj-$(CONFIG_X86_INTEL_PSTATE) += intel_pstate.o |
44 | obj-$(CONFIG_X86_AMD_FREQ_SENSITIVITY) += amd_freq_sensitivity.o | ||
44 | 45 | ||
45 | ################################################################################## | 46 | ################################################################################## |
46 | # ARM SoC drivers | 47 | # ARM SoC drivers |
48 | obj-$(CONFIG_ARM_BIG_LITTLE_CPUFREQ) += arm_big_little.o | ||
49 | # big LITTLE per platform glues. Keep DT_BL_CPUFREQ as the last entry in all big | ||
50 | # LITTLE drivers, so that it is probed last. | ||
51 | obj-$(CONFIG_ARM_DT_BL_CPUFREQ) += arm_big_little_dt.o | ||
52 | |||
53 | obj-$(CONFIG_ARCH_DAVINCI_DA850) += davinci-cpufreq.o | ||
47 | obj-$(CONFIG_UX500_SOC_DB8500) += dbx500-cpufreq.o | 54 | obj-$(CONFIG_UX500_SOC_DB8500) += dbx500-cpufreq.o |
48 | obj-$(CONFIG_ARM_S3C2416_CPUFREQ) += s3c2416-cpufreq.o | ||
49 | obj-$(CONFIG_ARM_S3C64XX_CPUFREQ) += s3c64xx-cpufreq.o | ||
50 | obj-$(CONFIG_ARM_S5PV210_CPUFREQ) += s5pv210-cpufreq.o | ||
51 | obj-$(CONFIG_ARM_EXYNOS_CPUFREQ) += exynos-cpufreq.o | 55 | obj-$(CONFIG_ARM_EXYNOS_CPUFREQ) += exynos-cpufreq.o |
52 | obj-$(CONFIG_ARM_EXYNOS4210_CPUFREQ) += exynos4210-cpufreq.o | 56 | obj-$(CONFIG_ARM_EXYNOS4210_CPUFREQ) += exynos4210-cpufreq.o |
53 | obj-$(CONFIG_ARM_EXYNOS4X12_CPUFREQ) += exynos4x12-cpufreq.o | 57 | obj-$(CONFIG_ARM_EXYNOS4X12_CPUFREQ) += exynos4x12-cpufreq.o |
54 | obj-$(CONFIG_ARM_EXYNOS5250_CPUFREQ) += exynos5250-cpufreq.o | 58 | obj-$(CONFIG_ARM_EXYNOS5250_CPUFREQ) += exynos5250-cpufreq.o |
59 | obj-$(CONFIG_ARM_EXYNOS5440_CPUFREQ) += exynos5440-cpufreq.o | ||
60 | obj-$(CONFIG_ARM_HIGHBANK_CPUFREQ) += highbank-cpufreq.o | ||
61 | obj-$(CONFIG_ARM_IMX6Q_CPUFREQ) += imx6q-cpufreq.o | ||
62 | obj-$(CONFIG_ARM_INTEGRATOR) += integrator-cpufreq.o | ||
55 | obj-$(CONFIG_ARM_KIRKWOOD_CPUFREQ) += kirkwood-cpufreq.o | 63 | obj-$(CONFIG_ARM_KIRKWOOD_CPUFREQ) += kirkwood-cpufreq.o |
56 | obj-$(CONFIG_ARM_OMAP2PLUS_CPUFREQ) += omap-cpufreq.o | 64 | obj-$(CONFIG_ARM_OMAP2PLUS_CPUFREQ) += omap-cpufreq.o |
65 | obj-$(CONFIG_PXA25x) += pxa2xx-cpufreq.o | ||
66 | obj-$(CONFIG_PXA27x) += pxa2xx-cpufreq.o | ||
67 | obj-$(CONFIG_PXA3xx) += pxa3xx-cpufreq.o | ||
68 | obj-$(CONFIG_ARM_S3C2416_CPUFREQ) += s3c2416-cpufreq.o | ||
69 | obj-$(CONFIG_ARM_S3C64XX_CPUFREQ) += s3c64xx-cpufreq.o | ||
70 | obj-$(CONFIG_ARM_S5PV210_CPUFREQ) += s5pv210-cpufreq.o | ||
71 | obj-$(CONFIG_ARM_SA1100_CPUFREQ) += sa1100-cpufreq.o | ||
72 | obj-$(CONFIG_ARM_SA1110_CPUFREQ) += sa1110-cpufreq.o | ||
57 | obj-$(CONFIG_ARM_SPEAR_CPUFREQ) += spear-cpufreq.o | 73 | obj-$(CONFIG_ARM_SPEAR_CPUFREQ) += spear-cpufreq.o |
58 | obj-$(CONFIG_ARM_HIGHBANK_CPUFREQ) += highbank-cpufreq.o | 74 | obj-$(CONFIG_ARCH_TEGRA) += tegra-cpufreq.o |
59 | obj-$(CONFIG_ARM_IMX6Q_CPUFREQ) += imx6q-cpufreq.o | ||
60 | 75 | ||
61 | ################################################################################## | 76 | ################################################################################## |
62 | # PowerPC platform drivers | 77 | # PowerPC platform drivers |
78 | obj-$(CONFIG_CPU_FREQ_CBE) += ppc-cbe-cpufreq.o | ||
79 | ppc-cbe-cpufreq-y += ppc_cbe_cpufreq_pervasive.o ppc_cbe_cpufreq.o | ||
80 | obj-$(CONFIG_CPU_FREQ_CBE_PMI) += ppc_cbe_cpufreq_pmi.o | ||
63 | obj-$(CONFIG_CPU_FREQ_MAPLE) += maple-cpufreq.o | 81 | obj-$(CONFIG_CPU_FREQ_MAPLE) += maple-cpufreq.o |
82 | |||
83 | ################################################################################## | ||
84 | # Other platform drivers | ||
85 | obj-$(CONFIG_AVR32_AT32AP_CPUFREQ) += at32ap-cpufreq.o | ||
86 | obj-$(CONFIG_BLACKFIN) += blackfin-cpufreq.o | ||
87 | obj-$(CONFIG_CRIS_MACH_ARTPEC3) += cris-artpec3-cpufreq.o | ||
88 | obj-$(CONFIG_ETRAXFS) += cris-etraxfs-cpufreq.o | ||
89 | obj-$(CONFIG_IA64_ACPI_CPUFREQ) += ia64-acpi-cpufreq.o | ||
90 | obj-$(CONFIG_LOONGSON2_CPUFREQ) += loongson2_cpufreq.o | ||
91 | obj-$(CONFIG_SH_CPU_FREQ) += sh-cpufreq.o | ||
92 | obj-$(CONFIG_SPARC_US2E_CPUFREQ) += sparc-us2e-cpufreq.o | ||
93 | obj-$(CONFIG_SPARC_US3_CPUFREQ) += sparc-us3-cpufreq.o | ||
94 | obj-$(CONFIG_UNICORE32) += unicore2-cpufreq.o | ||
diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c index 57a8774f0b4e..11b8b4b54ceb 100644 --- a/drivers/cpufreq/acpi-cpufreq.c +++ b/drivers/cpufreq/acpi-cpufreq.c | |||
@@ -423,7 +423,6 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy, | |||
423 | struct drv_cmd cmd; | 423 | struct drv_cmd cmd; |
424 | unsigned int next_state = 0; /* Index into freq_table */ | 424 | unsigned int next_state = 0; /* Index into freq_table */ |
425 | unsigned int next_perf_state = 0; /* Index into perf table */ | 425 | unsigned int next_perf_state = 0; /* Index into perf table */ |
426 | unsigned int i; | ||
427 | int result = 0; | 426 | int result = 0; |
428 | 427 | ||
429 | pr_debug("acpi_cpufreq_target %d (%d)\n", target_freq, policy->cpu); | 428 | pr_debug("acpi_cpufreq_target %d (%d)\n", target_freq, policy->cpu); |
@@ -486,10 +485,7 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy, | |||
486 | 485 | ||
487 | freqs.old = perf->states[perf->state].core_frequency * 1000; | 486 | freqs.old = perf->states[perf->state].core_frequency * 1000; |
488 | freqs.new = data->freq_table[next_state].frequency; | 487 | freqs.new = data->freq_table[next_state].frequency; |
489 | for_each_cpu(i, policy->cpus) { | 488 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
490 | freqs.cpu = i; | ||
491 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | ||
492 | } | ||
493 | 489 | ||
494 | drv_write(&cmd); | 490 | drv_write(&cmd); |
495 | 491 | ||
@@ -502,10 +498,7 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy, | |||
502 | } | 498 | } |
503 | } | 499 | } |
504 | 500 | ||
505 | for_each_cpu(i, policy->cpus) { | 501 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
506 | freqs.cpu = i; | ||
507 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | ||
508 | } | ||
509 | perf->state = next_perf_state; | 502 | perf->state = next_perf_state; |
510 | 503 | ||
511 | out: | 504 | out: |
diff --git a/drivers/cpufreq/amd_freq_sensitivity.c b/drivers/cpufreq/amd_freq_sensitivity.c new file mode 100644 index 000000000000..f6b79ab0070b --- /dev/null +++ b/drivers/cpufreq/amd_freq_sensitivity.c | |||
@@ -0,0 +1,148 @@ | |||
1 | /* | ||
2 | * amd_freq_sensitivity.c: AMD frequency sensitivity feedback powersave bias | ||
3 | * for the ondemand governor. | ||
4 | * | ||
5 | * Copyright (C) 2013 Advanced Micro Devices, Inc. | ||
6 | * | ||
7 | * Author: Jacob Shin <jacob.shin@amd.com> | ||
8 | * | ||
9 | * This program is free software; you can redistribute it and/or modify | ||
10 | * it under the terms of the GNU General Public License version 2 as | ||
11 | * published by the Free Software Foundation. | ||
12 | */ | ||
13 | |||
14 | #include <linux/kernel.h> | ||
15 | #include <linux/module.h> | ||
16 | #include <linux/types.h> | ||
17 | #include <linux/percpu-defs.h> | ||
18 | #include <linux/init.h> | ||
19 | #include <linux/mod_devicetable.h> | ||
20 | |||
21 | #include <asm/msr.h> | ||
22 | #include <asm/cpufeature.h> | ||
23 | |||
24 | #include "cpufreq_governor.h" | ||
25 | |||
26 | #define MSR_AMD64_FREQ_SENSITIVITY_ACTUAL 0xc0010080 | ||
27 | #define MSR_AMD64_FREQ_SENSITIVITY_REFERENCE 0xc0010081 | ||
28 | #define CLASS_CODE_SHIFT 56 | ||
29 | #define POWERSAVE_BIAS_MAX 1000 | ||
30 | #define POWERSAVE_BIAS_DEF 400 | ||
31 | |||
32 | struct cpu_data_t { | ||
33 | u64 actual; | ||
34 | u64 reference; | ||
35 | unsigned int freq_prev; | ||
36 | }; | ||
37 | |||
38 | static DEFINE_PER_CPU(struct cpu_data_t, cpu_data); | ||
39 | |||
40 | static unsigned int amd_powersave_bias_target(struct cpufreq_policy *policy, | ||
41 | unsigned int freq_next, | ||
42 | unsigned int relation) | ||
43 | { | ||
44 | int sensitivity; | ||
45 | long d_actual, d_reference; | ||
46 | struct msr actual, reference; | ||
47 | struct cpu_data_t *data = &per_cpu(cpu_data, policy->cpu); | ||
48 | struct dbs_data *od_data = policy->governor_data; | ||
49 | struct od_dbs_tuners *od_tuners = od_data->tuners; | ||
50 | struct od_cpu_dbs_info_s *od_info = | ||
51 | od_data->cdata->get_cpu_dbs_info_s(policy->cpu); | ||
52 | |||
53 | if (!od_info->freq_table) | ||
54 | return freq_next; | ||
55 | |||
56 | rdmsr_on_cpu(policy->cpu, MSR_AMD64_FREQ_SENSITIVITY_ACTUAL, | ||
57 | &actual.l, &actual.h); | ||
58 | rdmsr_on_cpu(policy->cpu, MSR_AMD64_FREQ_SENSITIVITY_REFERENCE, | ||
59 | &reference.l, &reference.h); | ||
60 | actual.h &= 0x00ffffff; | ||
61 | reference.h &= 0x00ffffff; | ||
62 | |||
63 | /* counter wrapped around, so stay on current frequency */ | ||
64 | if (actual.q < data->actual || reference.q < data->reference) { | ||
65 | freq_next = policy->cur; | ||
66 | goto out; | ||
67 | } | ||
68 | |||
69 | d_actual = actual.q - data->actual; | ||
70 | d_reference = reference.q - data->reference; | ||
71 | |||
72 | /* divide by 0, so stay on current frequency as well */ | ||
73 | if (d_reference == 0) { | ||
74 | freq_next = policy->cur; | ||
75 | goto out; | ||
76 | } | ||
77 | |||
78 | sensitivity = POWERSAVE_BIAS_MAX - | ||
79 | (POWERSAVE_BIAS_MAX * (d_reference - d_actual) / d_reference); | ||
80 | |||
81 | sensitivity = clamp(sensitivity, 0, POWERSAVE_BIAS_MAX); | ||
82 | |||
83 | /* this workload is not CPU bound, so choose a lower freq */ | ||
84 | if (sensitivity < od_tuners->powersave_bias) { | ||
85 | if (data->freq_prev == policy->cur) | ||
86 | freq_next = policy->cur; | ||
87 | |||
88 | if (freq_next > policy->cur) | ||
89 | freq_next = policy->cur; | ||
90 | else if (freq_next < policy->cur) | ||
91 | freq_next = policy->min; | ||
92 | else { | ||
93 | unsigned int index; | ||
94 | |||
95 | cpufreq_frequency_table_target(policy, | ||
96 | od_info->freq_table, policy->cur - 1, | ||
97 | CPUFREQ_RELATION_H, &index); | ||
98 | freq_next = od_info->freq_table[index].frequency; | ||
99 | } | ||
100 | |||
101 | data->freq_prev = freq_next; | ||
102 | } else | ||
103 | data->freq_prev = 0; | ||
104 | |||
105 | out: | ||
106 | data->actual = actual.q; | ||
107 | data->reference = reference.q; | ||
108 | return freq_next; | ||
109 | } | ||
110 | |||
111 | static int __init amd_freq_sensitivity_init(void) | ||
112 | { | ||
113 | u64 val; | ||
114 | |||
115 | if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD) | ||
116 | return -ENODEV; | ||
117 | |||
118 | if (!static_cpu_has(X86_FEATURE_PROC_FEEDBACK)) | ||
119 | return -ENODEV; | ||
120 | |||
121 | if (rdmsrl_safe(MSR_AMD64_FREQ_SENSITIVITY_ACTUAL, &val)) | ||
122 | return -ENODEV; | ||
123 | |||
124 | if (!(val >> CLASS_CODE_SHIFT)) | ||
125 | return -ENODEV; | ||
126 | |||
127 | od_register_powersave_bias_handler(amd_powersave_bias_target, | ||
128 | POWERSAVE_BIAS_DEF); | ||
129 | return 0; | ||
130 | } | ||
131 | late_initcall(amd_freq_sensitivity_init); | ||
132 | |||
133 | static void __exit amd_freq_sensitivity_exit(void) | ||
134 | { | ||
135 | od_unregister_powersave_bias_handler(); | ||
136 | } | ||
137 | module_exit(amd_freq_sensitivity_exit); | ||
138 | |||
139 | static const struct x86_cpu_id amd_freq_sensitivity_ids[] = { | ||
140 | X86_FEATURE_MATCH(X86_FEATURE_PROC_FEEDBACK), | ||
141 | {} | ||
142 | }; | ||
143 | MODULE_DEVICE_TABLE(x86cpu, amd_freq_sensitivity_ids); | ||
144 | |||
145 | MODULE_AUTHOR("Jacob Shin <jacob.shin@amd.com>"); | ||
146 | MODULE_DESCRIPTION("AMD frequency sensitivity feedback powersave bias for " | ||
147 | "the ondemand governor."); | ||
148 | MODULE_LICENSE("GPL"); | ||
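Editor's note: to make the bias threshold concrete, here is a worked example of the arithmetic in amd_powersave_bias_target() above, using illustrative counter deltas (not measured data).

    /*
     * d_actual = 600, d_reference = 1000:
     *   sensitivity = 1000 - 1000 * (1000 - 600) / 1000 = 600
     * 600 >= the default powersave_bias of 400 (POWERSAVE_BIAS_DEF), so the
     * workload is treated as CPU bound and ondemand's proposed frequency is
     * left alone. With d_actual = 200 the same formula yields 200 < 400, and
     * the handler instead picks a lower frequency (policy->min or the next
     * lower entry in the ondemand frequency table).
     */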
diff --git a/drivers/cpufreq/arm_big_little.c b/drivers/cpufreq/arm_big_little.c new file mode 100644 index 000000000000..dbdf677d2f36 --- /dev/null +++ b/drivers/cpufreq/arm_big_little.c | |||
@@ -0,0 +1,278 @@ | |||
1 | /* | ||
2 | * ARM big.LITTLE Platforms CPUFreq support | ||
3 | * | ||
4 | * Copyright (C) 2013 ARM Ltd. | ||
5 | * Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com> | ||
6 | * | ||
7 | * Copyright (C) 2013 Linaro. | ||
8 | * Viresh Kumar <viresh.kumar@linaro.org> | ||
9 | * | ||
10 | * This program is free software; you can redistribute it and/or modify | ||
11 | * it under the terms of the GNU General Public License version 2 as | ||
12 | * published by the Free Software Foundation. | ||
13 | * | ||
14 | * This program is distributed "as is" WITHOUT ANY WARRANTY of any | ||
15 | * kind, whether express or implied; without even the implied warranty | ||
16 | * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
17 | * GNU General Public License for more details. | ||
18 | */ | ||
19 | |||
20 | #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt | ||
21 | |||
22 | #include <linux/clk.h> | ||
23 | #include <linux/cpu.h> | ||
24 | #include <linux/cpufreq.h> | ||
25 | #include <linux/cpumask.h> | ||
26 | #include <linux/export.h> | ||
27 | #include <linux/of_platform.h> | ||
28 | #include <linux/opp.h> | ||
29 | #include <linux/slab.h> | ||
30 | #include <linux/topology.h> | ||
31 | #include <linux/types.h> | ||
32 | |||
33 | #include "arm_big_little.h" | ||
34 | |||
35 | /* Currently we support only two clusters */ | ||
36 | #define MAX_CLUSTERS 2 | ||
37 | |||
38 | static struct cpufreq_arm_bL_ops *arm_bL_ops; | ||
39 | static struct clk *clk[MAX_CLUSTERS]; | ||
40 | static struct cpufreq_frequency_table *freq_table[MAX_CLUSTERS]; | ||
41 | static atomic_t cluster_usage[MAX_CLUSTERS] = {ATOMIC_INIT(0), ATOMIC_INIT(0)}; | ||
42 | |||
43 | static int cpu_to_cluster(int cpu) | ||
44 | { | ||
45 | return topology_physical_package_id(cpu); | ||
46 | } | ||
47 | |||
48 | static unsigned int bL_cpufreq_get(unsigned int cpu) | ||
49 | { | ||
50 | u32 cur_cluster = cpu_to_cluster(cpu); | ||
51 | |||
52 | return clk_get_rate(clk[cur_cluster]) / 1000; | ||
53 | } | ||
54 | |||
55 | /* Validate policy frequency range */ | ||
56 | static int bL_cpufreq_verify_policy(struct cpufreq_policy *policy) | ||
57 | { | ||
58 | u32 cur_cluster = cpu_to_cluster(policy->cpu); | ||
59 | |||
60 | return cpufreq_frequency_table_verify(policy, freq_table[cur_cluster]); | ||
61 | } | ||
62 | |||
63 | /* Set clock frequency */ | ||
64 | static int bL_cpufreq_set_target(struct cpufreq_policy *policy, | ||
65 | unsigned int target_freq, unsigned int relation) | ||
66 | { | ||
67 | struct cpufreq_freqs freqs; | ||
68 | u32 cpu = policy->cpu, freq_tab_idx, cur_cluster; | ||
69 | int ret = 0; | ||
70 | |||
71 | cur_cluster = cpu_to_cluster(policy->cpu); | ||
72 | |||
73 | freqs.old = bL_cpufreq_get(policy->cpu); | ||
74 | |||
75 | /* Determine valid target frequency using freq_table */ | ||
76 | cpufreq_frequency_table_target(policy, freq_table[cur_cluster], | ||
77 | target_freq, relation, &freq_tab_idx); | ||
78 | freqs.new = freq_table[cur_cluster][freq_tab_idx].frequency; | ||
79 | |||
80 | pr_debug("%s: cpu: %d, cluster: %d, oldfreq: %d, target freq: %d, new freq: %d\n", | ||
81 | __func__, cpu, cur_cluster, freqs.old, target_freq, | ||
82 | freqs.new); | ||
83 | |||
84 | if (freqs.old == freqs.new) | ||
85 | return 0; | ||
86 | |||
87 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); | ||
88 | |||
89 | ret = clk_set_rate(clk[cur_cluster], freqs.new * 1000); | ||
90 | if (ret) { | ||
91 | pr_err("clk_set_rate failed: %d\n", ret); | ||
92 | return ret; | ||
93 | } | ||
94 | |||
95 | policy->cur = freqs.new; | ||
96 | |||
97 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); | ||
98 | |||
99 | return ret; | ||
100 | } | ||
101 | |||
102 | static void put_cluster_clk_and_freq_table(struct device *cpu_dev) | ||
103 | { | ||
104 | u32 cluster = cpu_to_cluster(cpu_dev->id); | ||
105 | |||
106 | if (!atomic_dec_return(&cluster_usage[cluster])) { | ||
107 | clk_put(clk[cluster]); | ||
108 | opp_free_cpufreq_table(cpu_dev, &freq_table[cluster]); | ||
109 | dev_dbg(cpu_dev, "%s: cluster: %d\n", __func__, cluster); | ||
110 | } | ||
111 | } | ||
112 | |||
113 | static int get_cluster_clk_and_freq_table(struct device *cpu_dev) | ||
114 | { | ||
115 | u32 cluster = cpu_to_cluster(cpu_dev->id); | ||
116 | char name[14] = "cpu-cluster."; | ||
117 | int ret; | ||
118 | |||
119 | if (atomic_inc_return(&cluster_usage[cluster]) != 1) | ||
120 | return 0; | ||
121 | |||
122 | ret = arm_bL_ops->init_opp_table(cpu_dev); | ||
123 | if (ret) { | ||
124 | dev_err(cpu_dev, "%s: init_opp_table failed, cpu: %d, err: %d\n", | ||
125 | __func__, cpu_dev->id, ret); | ||
126 | goto atomic_dec; | ||
127 | } | ||
128 | |||
129 | ret = opp_init_cpufreq_table(cpu_dev, &freq_table[cluster]); | ||
130 | if (ret) { | ||
131 | dev_err(cpu_dev, "%s: failed to init cpufreq table, cpu: %d, err: %d\n", | ||
132 | __func__, cpu_dev->id, ret); | ||
133 | goto atomic_dec; | ||
134 | } | ||
135 | |||
136 | name[12] = cluster + '0'; | ||
137 | clk[cluster] = clk_get_sys(name, NULL); | ||
138 | if (!IS_ERR(clk[cluster])) { | ||
139 | dev_dbg(cpu_dev, "%s: clk: %p & freq table: %p, cluster: %d\n", | ||
140 | __func__, clk[cluster], freq_table[cluster], | ||
141 | cluster); | ||
142 | return 0; | ||
143 | } | ||
144 | |||
145 | dev_err(cpu_dev, "%s: Failed to get clk for cpu: %d, cluster: %d\n", | ||
146 | __func__, cpu_dev->id, cluster); | ||
147 | ret = PTR_ERR(clk[cluster]); | ||
148 | opp_free_cpufreq_table(cpu_dev, &freq_table[cluster]); | ||
149 | |||
150 | atomic_dec: | ||
151 | atomic_dec(&cluster_usage[cluster]); | ||
152 | dev_err(cpu_dev, "%s: Failed to get data for cluster: %d\n", __func__, | ||
153 | cluster); | ||
154 | return ret; | ||
155 | } | ||
156 | |||
157 | /* Per-CPU initialization */ | ||
158 | static int bL_cpufreq_init(struct cpufreq_policy *policy) | ||
159 | { | ||
160 | u32 cur_cluster = cpu_to_cluster(policy->cpu); | ||
161 | struct device *cpu_dev; | ||
162 | int ret; | ||
163 | |||
164 | cpu_dev = get_cpu_device(policy->cpu); | ||
165 | if (!cpu_dev) { | ||
166 | pr_err("%s: failed to get cpu%d device\n", __func__, | ||
167 | policy->cpu); | ||
168 | return -ENODEV; | ||
169 | } | ||
170 | |||
171 | ret = get_cluster_clk_and_freq_table(cpu_dev); | ||
172 | if (ret) | ||
173 | return ret; | ||
174 | |||
175 | ret = cpufreq_frequency_table_cpuinfo(policy, freq_table[cur_cluster]); | ||
176 | if (ret) { | ||
177 | dev_err(cpu_dev, "CPU %d, cluster: %d invalid freq table\n", | ||
178 | policy->cpu, cur_cluster); | ||
179 | put_cluster_clk_and_freq_table(cpu_dev); | ||
180 | return ret; | ||
181 | } | ||
182 | |||
183 | cpufreq_frequency_table_get_attr(freq_table[cur_cluster], policy->cpu); | ||
184 | |||
185 | if (arm_bL_ops->get_transition_latency) | ||
186 | policy->cpuinfo.transition_latency = | ||
187 | arm_bL_ops->get_transition_latency(cpu_dev); | ||
188 | else | ||
189 | policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; | ||
190 | |||
191 | policy->cur = bL_cpufreq_get(policy->cpu); | ||
192 | |||
193 | cpumask_copy(policy->cpus, topology_core_cpumask(policy->cpu)); | ||
194 | |||
195 | dev_info(cpu_dev, "CPU %d initialized\n", policy->cpu); | ||
196 | return 0; | ||
197 | } | ||
198 | |||
199 | static int bL_cpufreq_exit(struct cpufreq_policy *policy) | ||
200 | { | ||
201 | struct device *cpu_dev; | ||
202 | |||
203 | cpu_dev = get_cpu_device(policy->cpu); | ||
204 | if (!cpu_dev) { | ||
205 | pr_err("%s: failed to get cpu%d device\n", __func__, | ||
206 | policy->cpu); | ||
207 | return -ENODEV; | ||
208 | } | ||
209 | |||
210 | put_cluster_clk_and_freq_table(cpu_dev); | ||
211 | dev_dbg(cpu_dev, "%s: Exited, cpu: %d\n", __func__, policy->cpu); | ||
212 | |||
213 | return 0; | ||
214 | } | ||
215 | |||
216 | /* Export freq_table to sysfs */ | ||
217 | static struct freq_attr *bL_cpufreq_attr[] = { | ||
218 | &cpufreq_freq_attr_scaling_available_freqs, | ||
219 | NULL, | ||
220 | }; | ||
221 | |||
222 | static struct cpufreq_driver bL_cpufreq_driver = { | ||
223 | .name = "arm-big-little", | ||
224 | .flags = CPUFREQ_STICKY, | ||
225 | .verify = bL_cpufreq_verify_policy, | ||
226 | .target = bL_cpufreq_set_target, | ||
227 | .get = bL_cpufreq_get, | ||
228 | .init = bL_cpufreq_init, | ||
229 | .exit = bL_cpufreq_exit, | ||
230 | .have_governor_per_policy = true, | ||
231 | .attr = bL_cpufreq_attr, | ||
232 | }; | ||
233 | |||
234 | int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops) | ||
235 | { | ||
236 | int ret; | ||
237 | |||
238 | if (arm_bL_ops) { | ||
239 | pr_debug("%s: Already registered: %s, exiting\n", __func__, | ||
240 | arm_bL_ops->name); | ||
241 | return -EBUSY; | ||
242 | } | ||
243 | |||
244 | if (!ops || !strlen(ops->name) || !ops->init_opp_table) { | ||
245 | pr_err("%s: Invalid arm_bL_ops, exiting\n", __func__); | ||
246 | return -ENODEV; | ||
247 | } | ||
248 | |||
249 | arm_bL_ops = ops; | ||
250 | |||
251 | ret = cpufreq_register_driver(&bL_cpufreq_driver); | ||
252 | if (ret) { | ||
253 | pr_info("%s: Failed registering platform driver: %s, err: %d\n", | ||
254 | __func__, ops->name, ret); | ||
255 | arm_bL_ops = NULL; | ||
256 | } else { | ||
257 | pr_info("%s: Registered platform driver: %s\n", __func__, | ||
258 | ops->name); | ||
259 | } | ||
260 | |||
261 | return ret; | ||
262 | } | ||
263 | EXPORT_SYMBOL_GPL(bL_cpufreq_register); | ||
264 | |||
265 | void bL_cpufreq_unregister(struct cpufreq_arm_bL_ops *ops) | ||
266 | { | ||
267 | if (arm_bL_ops != ops) { | ||
268 | pr_err("%s: Registered with: %s, can't unregister, exiting\n", | ||
269 | __func__, arm_bL_ops->name); | ||
270 | return; | ||
271 | } | ||
272 | |||
273 | cpufreq_unregister_driver(&bL_cpufreq_driver); | ||
274 | pr_info("%s: Un-registered platform driver: %s\n", __func__, | ||
275 | arm_bL_ops->name); | ||
276 | arm_bL_ops = NULL; | ||
277 | } | ||
278 | EXPORT_SYMBOL_GPL(bL_cpufreq_unregister); | ||
diff --git a/drivers/cpufreq/arm_big_little.h b/drivers/cpufreq/arm_big_little.h new file mode 100644 index 000000000000..70f18fc12d4a --- /dev/null +++ b/drivers/cpufreq/arm_big_little.h | |||
@@ -0,0 +1,40 @@ | |||
1 | /* | ||
2 | * ARM big.LITTLE platform's CPUFreq header file | ||
3 | * | ||
4 | * Copyright (C) 2013 ARM Ltd. | ||
5 | * Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com> | ||
6 | * | ||
7 | * Copyright (C) 2013 Linaro. | ||
8 | * Viresh Kumar <viresh.kumar@linaro.org> | ||
9 | * | ||
10 | * This program is free software; you can redistribute it and/or modify | ||
11 | * it under the terms of the GNU General Public License version 2 as | ||
12 | * published by the Free Software Foundation. | ||
13 | * | ||
14 | * This program is distributed "as is" WITHOUT ANY WARRANTY of any | ||
15 | * kind, whether express or implied; without even the implied warranty | ||
16 | * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
17 | * GNU General Public License for more details. | ||
18 | */ | ||
19 | #ifndef CPUFREQ_ARM_BIG_LITTLE_H | ||
20 | #define CPUFREQ_ARM_BIG_LITTLE_H | ||
21 | |||
22 | #include <linux/cpufreq.h> | ||
23 | #include <linux/device.h> | ||
24 | #include <linux/types.h> | ||
25 | |||
26 | struct cpufreq_arm_bL_ops { | ||
27 | char name[CPUFREQ_NAME_LEN]; | ||
28 | int (*get_transition_latency)(struct device *cpu_dev); | ||
29 | |||
30 | /* | ||
31 | * This must set opp table for cpu_dev in a similar way as done by | ||
32 | * of_init_opp_table(). | ||
33 | */ | ||
34 | int (*init_opp_table)(struct device *cpu_dev); | ||
35 | }; | ||
36 | |||
37 | int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops); | ||
38 | void bL_cpufreq_unregister(struct cpufreq_arm_bL_ops *ops); | ||
39 | |||
40 | #endif /* CPUFREQ_ARM_BIG_LITTLE_H */ | ||
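Editor's note: this header is the whole contract between arm_big_little.c and a platform glue. Besides the DT glue added next, a hypothetical non-DT glue only needs to populate an OPP table per CPU device and register itself. The sketch below is illustrative only; the my_bl_* names and the opp_add() values are invented, not part of this series.

    #include <linux/module.h>
    #include <linux/opp.h>
    #include "arm_big_little.h"

    /* Register a couple of OPPs for this CPU's cluster: frequency in Hz, uV. */
    static int my_bl_init_opp_table(struct device *cpu_dev)
    {
            int ret = opp_add(cpu_dev, 350000000, 900000);

            if (!ret)
                    ret = opp_add(cpu_dev, 1000000000, 1100000);
            return ret;
    }

    static struct cpufreq_arm_bL_ops my_bl_ops = {
            .name           = "my-bl",
            .init_opp_table = my_bl_init_opp_table,
            /* .get_transition_latency is optional; CPUFREQ_ETERNAL is used if absent. */
    };

    static int __init my_bl_cpufreq_init(void)
    {
            return bL_cpufreq_register(&my_bl_ops);
    }
    module_init(my_bl_cpufreq_init);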
diff --git a/drivers/cpufreq/arm_big_little_dt.c b/drivers/cpufreq/arm_big_little_dt.c new file mode 100644 index 000000000000..44be3115375c --- /dev/null +++ b/drivers/cpufreq/arm_big_little_dt.c | |||
@@ -0,0 +1,107 @@ | |||
1 | /* | ||
2 | * Generic big.LITTLE CPUFreq Interface driver | ||
3 | * | ||
4 | * It provides necessary ops to arm_big_little cpufreq driver and gets | ||
5 | * frequency information from Device Tree. The freq table in DT must be in kHz. | ||
6 | * | ||
7 | * Copyright (C) 2013 Linaro. | ||
8 | * Viresh Kumar <viresh.kumar@linaro.org> | ||
9 | * | ||
10 | * This program is free software; you can redistribute it and/or modify | ||
11 | * it under the terms of the GNU General Public License version 2 as | ||
12 | * published by the Free Software Foundation. | ||
13 | * | ||
14 | * This program is distributed "as is" WITHOUT ANY WARRANTY of any | ||
15 | * kind, whether express or implied; without even the implied warranty | ||
16 | * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
17 | * GNU General Public License for more details. | ||
18 | */ | ||
19 | |||
20 | #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt | ||
21 | |||
22 | #include <linux/cpufreq.h> | ||
23 | #include <linux/device.h> | ||
24 | #include <linux/export.h> | ||
25 | #include <linux/module.h> | ||
26 | #include <linux/of.h> | ||
27 | #include <linux/opp.h> | ||
28 | #include <linux/slab.h> | ||
29 | #include <linux/types.h> | ||
30 | #include "arm_big_little.h" | ||
31 | |||
32 | static int dt_init_opp_table(struct device *cpu_dev) | ||
33 | { | ||
34 | struct device_node *np, *parent; | ||
35 | int count = 0, ret; | ||
36 | |||
37 | parent = of_find_node_by_path("/cpus"); | ||
38 | if (!parent) { | ||
39 | pr_err("failed to find OF /cpus\n"); | ||
40 | return -ENOENT; | ||
41 | } | ||
42 | |||
43 | for_each_child_of_node(parent, np) { | ||
44 | if (count++ != cpu_dev->id) | ||
45 | continue; | ||
46 | if (!of_get_property(np, "operating-points", NULL)) { | ||
47 | ret = -ENODATA; | ||
48 | } else { | ||
49 | cpu_dev->of_node = np; | ||
50 | ret = of_init_opp_table(cpu_dev); | ||
51 | } | ||
52 | of_node_put(np); | ||
53 | of_node_put(parent); | ||
54 | |||
55 | return ret; | ||
56 | } | ||
57 | |||
58 | return -ENODEV; | ||
59 | } | ||
60 | |||
61 | static int dt_get_transition_latency(struct device *cpu_dev) | ||
62 | { | ||
63 | struct device_node *np, *parent; | ||
64 | u32 transition_latency = CPUFREQ_ETERNAL; | ||
65 | int count = 0; | ||
66 | |||
67 | parent = of_find_node_by_path("/cpus"); | ||
68 | if (!parent) { | ||
69 | pr_err("failed to find OF /cpus\n"); | ||
70 | return -ENOENT; | ||
71 | } | ||
72 | |||
73 | for_each_child_of_node(parent, np) { | ||
74 | if (count++ != cpu_dev->id) | ||
75 | continue; | ||
76 | |||
77 | of_property_read_u32(np, "clock-latency", &transition_latency); | ||
78 | of_node_put(np); | ||
79 | of_node_put(parent); | ||
80 | |||
81 | return 0; | ||
82 | } | ||
83 | |||
84 | return -ENODEV; | ||
85 | } | ||
86 | |||
87 | static struct cpufreq_arm_bL_ops dt_bL_ops = { | ||
88 | .name = "dt-bl", | ||
89 | .get_transition_latency = dt_get_transition_latency, | ||
90 | .init_opp_table = dt_init_opp_table, | ||
91 | }; | ||
92 | |||
93 | static int generic_bL_init(void) | ||
94 | { | ||
95 | return bL_cpufreq_register(&dt_bL_ops); | ||
96 | } | ||
97 | module_init(generic_bL_init); | ||
98 | |||
99 | static void generic_bL_exit(void) | ||
100 | { | ||
101 | return bL_cpufreq_unregister(&dt_bL_ops); | ||
102 | } | ||
103 | module_exit(generic_bL_exit); | ||
104 | |||
105 | MODULE_AUTHOR("Viresh Kumar <viresh.kumar@linaro.org>"); | ||
106 | MODULE_DESCRIPTION("Generic ARM big LITTLE cpufreq driver via DT"); | ||
107 | MODULE_LICENSE("GPL"); | ||
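Editor's note: as a usage sketch for the DT glue above, the layout it expects is the standard OPP binding on each cpu node under /cpus — an "operating-points" list of <frequency-kHz voltage-uV> pairs parsed by of_init_opp_table(), plus an optional "clock-latency" in nanoseconds. The values below are illustrative only.

    /*
     * Device-tree sketch for this glue: "operating-points" holds
     * <frequency-kHz voltage-uV> pairs, "clock-latency" is in nanoseconds.
     *
     *	cpus {
     *		cpu@0 {
     *			operating-points = <1000000 1100000
     *					     500000  900000>;
     *			clock-latency = <100000>;
     *		};
     *	};
     */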
diff --git a/arch/avr32/mach-at32ap/cpufreq.c b/drivers/cpufreq/at32ap-cpufreq.c index 18b765629a0c..654488723cb5 100644 --- a/arch/avr32/mach-at32ap/cpufreq.c +++ b/drivers/cpufreq/at32ap-cpufreq.c | |||
@@ -61,7 +61,6 @@ static int at32_set_target(struct cpufreq_policy *policy, | |||
61 | 61 | ||
62 | freqs.old = at32_get_speed(0); | 62 | freqs.old = at32_get_speed(0); |
63 | freqs.new = (freq + 500) / 1000; | 63 | freqs.new = (freq + 500) / 1000; |
64 | freqs.cpu = 0; | ||
65 | freqs.flags = 0; | 64 | freqs.flags = 0; |
66 | 65 | ||
67 | if (!ref_freq) { | 66 | if (!ref_freq) { |
@@ -69,7 +68,7 @@ static int at32_set_target(struct cpufreq_policy *policy, | |||
69 | loops_per_jiffy_ref = boot_cpu_data.loops_per_jiffy; | 68 | loops_per_jiffy_ref = boot_cpu_data.loops_per_jiffy; |
70 | } | 69 | } |
71 | 70 | ||
72 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 71 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
73 | if (freqs.old < freqs.new) | 72 | if (freqs.old < freqs.new) |
74 | boot_cpu_data.loops_per_jiffy = cpufreq_scale( | 73 | boot_cpu_data.loops_per_jiffy = cpufreq_scale( |
75 | loops_per_jiffy_ref, ref_freq, freqs.new); | 74 | loops_per_jiffy_ref, ref_freq, freqs.new); |
@@ -77,7 +76,7 @@ static int at32_set_target(struct cpufreq_policy *policy, | |||
77 | if (freqs.new < freqs.old) | 76 | if (freqs.new < freqs.old) |
78 | boot_cpu_data.loops_per_jiffy = cpufreq_scale( | 77 | boot_cpu_data.loops_per_jiffy = cpufreq_scale( |
79 | loops_per_jiffy_ref, ref_freq, freqs.new); | 78 | loops_per_jiffy_ref, ref_freq, freqs.new); |
80 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 79 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
81 | 80 | ||
82 | pr_debug("cpufreq: set frequency %lu Hz\n", freq); | 81 | pr_debug("cpufreq: set frequency %lu Hz\n", freq); |
83 | 82 | ||
diff --git a/arch/blackfin/mach-common/cpufreq.c b/drivers/cpufreq/blackfin-cpufreq.c index d88bd31319e6..995511e80bef 100644 --- a/arch/blackfin/mach-common/cpufreq.c +++ b/drivers/cpufreq/blackfin-cpufreq.c | |||
@@ -127,13 +127,13 @@ unsigned long cpu_set_cclk(int cpu, unsigned long new) | |||
127 | } | 127 | } |
128 | #endif | 128 | #endif |
129 | 129 | ||
130 | static int bfin_target(struct cpufreq_policy *poli, | 130 | static int bfin_target(struct cpufreq_policy *policy, |
131 | unsigned int target_freq, unsigned int relation) | 131 | unsigned int target_freq, unsigned int relation) |
132 | { | 132 | { |
133 | #ifndef CONFIG_BF60x | 133 | #ifndef CONFIG_BF60x |
134 | unsigned int plldiv; | 134 | unsigned int plldiv; |
135 | #endif | 135 | #endif |
136 | unsigned int index, cpu; | 136 | unsigned int index; |
137 | unsigned long cclk_hz; | 137 | unsigned long cclk_hz; |
138 | struct cpufreq_freqs freqs; | 138 | struct cpufreq_freqs freqs; |
139 | static unsigned long lpj_ref; | 139 | static unsigned long lpj_ref; |
@@ -144,59 +144,48 @@ static int bfin_target(struct cpufreq_policy *poli, | |||
144 | cycles_t cycles; | 144 | cycles_t cycles; |
145 | #endif | 145 | #endif |
146 | 146 | ||
147 | for_each_online_cpu(cpu) { | 147 | if (cpufreq_frequency_table_target(policy, bfin_freq_table, target_freq, |
148 | struct cpufreq_policy *policy = cpufreq_cpu_get(cpu); | 148 | relation, &index)) |
149 | return -EINVAL; | ||
149 | 150 | ||
150 | if (!policy) | 151 | cclk_hz = bfin_freq_table[index].frequency; |
151 | continue; | ||
152 | 152 | ||
153 | if (cpufreq_frequency_table_target(policy, bfin_freq_table, | 153 | freqs.old = bfin_getfreq_khz(0); |
154 | target_freq, relation, &index)) | 154 | freqs.new = cclk_hz; |
155 | return -EINVAL; | ||
156 | 155 | ||
157 | cclk_hz = bfin_freq_table[index].frequency; | 156 | pr_debug("cpufreq: changing cclk to %lu; target = %u, oldfreq = %u\n", |
157 | cclk_hz, target_freq, freqs.old); | ||
158 | 158 | ||
159 | freqs.old = bfin_getfreq_khz(0); | 159 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
160 | freqs.new = cclk_hz; | ||
161 | freqs.cpu = cpu; | ||
162 | |||
163 | pr_debug("cpufreq: changing cclk to %lu; target = %u, oldfreq = %u\n", | ||
164 | cclk_hz, target_freq, freqs.old); | ||
165 | |||
166 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | ||
167 | if (cpu == CPUFREQ_CPU) { | ||
168 | #ifndef CONFIG_BF60x | 160 | #ifndef CONFIG_BF60x |
169 | plldiv = (bfin_read_PLL_DIV() & SSEL) | | 161 | plldiv = (bfin_read_PLL_DIV() & SSEL) | dpm_state_table[index].csel; |
170 | dpm_state_table[index].csel; | 162 | bfin_write_PLL_DIV(plldiv); |
171 | bfin_write_PLL_DIV(plldiv); | ||
172 | #else | 163 | #else |
173 | ret = cpu_set_cclk(cpu, freqs.new * 1000); | 164 | ret = cpu_set_cclk(policy->cpu, freqs.new * 1000); |
174 | if (ret != 0) { | 165 | if (ret != 0) { |
175 | WARN_ONCE(ret, "cpufreq set freq failed %d\n", ret); | 166 | WARN_ONCE(ret, "cpufreq set freq failed %d\n", ret); |
176 | break; | 167 | return ret; |
177 | } | 168 | } |
178 | #endif | 169 | #endif |
179 | on_each_cpu(bfin_adjust_core_timer, &index, 1); | 170 | on_each_cpu(bfin_adjust_core_timer, &index, 1); |
180 | #if defined(CONFIG_CYCLES_CLOCKSOURCE) | 171 | #if defined(CONFIG_CYCLES_CLOCKSOURCE) |
181 | cycles = get_cycles(); | 172 | cycles = get_cycles(); |
182 | SSYNC(); | 173 | SSYNC(); |
183 | cycles += 10; /* ~10 cycles we lose after get_cycles() */ | 174 | cycles += 10; /* ~10 cycles we lose after get_cycles() */ |
184 | __bfin_cycles_off += | 175 | __bfin_cycles_off += (cycles << __bfin_cycles_mod) - (cycles << index); |
185 | (cycles << __bfin_cycles_mod) - (cycles << index); | 176 | __bfin_cycles_mod = index; |
186 | __bfin_cycles_mod = index; | ||
187 | #endif | 177 | #endif |
188 | if (!lpj_ref_freq) { | 178 | if (!lpj_ref_freq) { |
189 | lpj_ref = loops_per_jiffy; | 179 | lpj_ref = loops_per_jiffy; |
190 | lpj_ref_freq = freqs.old; | 180 | lpj_ref_freq = freqs.old; |
191 | } | ||
192 | if (freqs.new != freqs.old) { | ||
193 | loops_per_jiffy = cpufreq_scale(lpj_ref, | ||
194 | lpj_ref_freq, freqs.new); | ||
195 | } | ||
196 | } | ||
197 | /* TODO: just test case for cycles clock source, remove later */ | ||
198 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | ||
199 | } | 181 | } |
182 | if (freqs.new != freqs.old) { | ||
183 | loops_per_jiffy = cpufreq_scale(lpj_ref, | ||
184 | lpj_ref_freq, freqs.new); | ||
185 | } | ||
186 | |||
187 | /* TODO: just test case for cycles clock source, remove later */ | ||
188 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); | ||
200 | 189 | ||
201 | pr_debug("cpufreq: done\n"); | 190 | pr_debug("cpufreq: done\n"); |
202 | return ret; | 191 | return ret; |
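The blackfin conversion above shows the pattern this series applies to every driver: the for_each_online_cpu() loop and the manual freqs.cpu assignment go away, and cpufreq_notify_transition() now takes the policy so the core can notify all CPUs in policy->cpus itself. Below is a minimal sketch of a target() callback written against this 3.10-era API; my_freq_table, my_get_rate_khz() and my_set_rate() are hypothetical driver hooks, not part of the patch.

    /* Sketch only: the my_* names stand in for real driver plumbing. */
    static int my_target(struct cpufreq_policy *policy,
                         unsigned int target_freq, unsigned int relation)
    {
            struct cpufreq_freqs freqs;
            unsigned int index;
            int ret;

            /* Pick the table entry matching target_freq and relation. */
            if (cpufreq_frequency_table_target(policy, my_freq_table,
                                               target_freq, relation, &index))
                    return -EINVAL;

            freqs.old = my_get_rate_khz(policy->cpu);
            freqs.new = my_freq_table[index].frequency;

            /* One call covers every CPU in policy->cpus; freqs.cpu is
             * filled in by the core from now on. */
            cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);

            ret = my_set_rate(policy->cpu, freqs.new);
            if (ret)
                    freqs.new = freqs.old;  /* report the unchanged rate */

            cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
            return ret;
    }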
diff --git a/drivers/cpufreq/cpufreq-cpu0.c b/drivers/cpufreq/cpufreq-cpu0.c index 37d23a0f8c56..3ab8294eab04 100644 --- a/drivers/cpufreq/cpufreq-cpu0.c +++ b/drivers/cpufreq/cpufreq-cpu0.c | |||
@@ -44,8 +44,9 @@ static int cpu0_set_target(struct cpufreq_policy *policy, | |||
44 | { | 44 | { |
45 | struct cpufreq_freqs freqs; | 45 | struct cpufreq_freqs freqs; |
46 | struct opp *opp; | 46 | struct opp *opp; |
47 | unsigned long freq_Hz, volt = 0, volt_old = 0, tol = 0; | 47 | unsigned long volt = 0, volt_old = 0, tol = 0; |
48 | unsigned int index, cpu; | 48 | long freq_Hz; |
49 | unsigned int index; | ||
49 | int ret; | 50 | int ret; |
50 | 51 | ||
51 | ret = cpufreq_frequency_table_target(policy, freq_table, target_freq, | 52 | ret = cpufreq_frequency_table_target(policy, freq_table, target_freq, |
@@ -65,10 +66,7 @@ static int cpu0_set_target(struct cpufreq_policy *policy, | |||
65 | if (freqs.old == freqs.new) | 66 | if (freqs.old == freqs.new) |
66 | return 0; | 67 | return 0; |
67 | 68 | ||
68 | for_each_online_cpu(cpu) { | 69 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
69 | freqs.cpu = cpu; | ||
70 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | ||
71 | } | ||
72 | 70 | ||
73 | if (cpu_reg) { | 71 | if (cpu_reg) { |
74 | rcu_read_lock(); | 72 | rcu_read_lock(); |
@@ -76,7 +74,9 @@ static int cpu0_set_target(struct cpufreq_policy *policy, | |||
76 | if (IS_ERR(opp)) { | 74 | if (IS_ERR(opp)) { |
77 | rcu_read_unlock(); | 75 | rcu_read_unlock(); |
78 | pr_err("failed to find OPP for %ld\n", freq_Hz); | 76 | pr_err("failed to find OPP for %ld\n", freq_Hz); |
79 | return PTR_ERR(opp); | 77 | freqs.new = freqs.old; |
78 | ret = PTR_ERR(opp); | ||
79 | goto post_notify; | ||
80 | } | 80 | } |
81 | volt = opp_get_voltage(opp); | 81 | volt = opp_get_voltage(opp); |
82 | rcu_read_unlock(); | 82 | rcu_read_unlock(); |
@@ -94,7 +94,7 @@ static int cpu0_set_target(struct cpufreq_policy *policy, | |||
94 | if (ret) { | 94 | if (ret) { |
95 | pr_err("failed to scale voltage up: %d\n", ret); | 95 | pr_err("failed to scale voltage up: %d\n", ret); |
96 | freqs.new = freqs.old; | 96 | freqs.new = freqs.old; |
97 | return ret; | 97 | goto post_notify; |
98 | } | 98 | } |
99 | } | 99 | } |
100 | 100 | ||
@@ -103,7 +103,8 @@ static int cpu0_set_target(struct cpufreq_policy *policy, | |||
103 | pr_err("failed to set clock rate: %d\n", ret); | 103 | pr_err("failed to set clock rate: %d\n", ret); |
104 | if (cpu_reg) | 104 | if (cpu_reg) |
105 | regulator_set_voltage_tol(cpu_reg, volt_old, tol); | 105 | regulator_set_voltage_tol(cpu_reg, volt_old, tol); |
106 | return ret; | 106 | freqs.new = freqs.old; |
107 | goto post_notify; | ||
107 | } | 108 | } |
108 | 109 | ||
109 | /* scaling down? scale voltage after frequency */ | 110 | /* scaling down? scale voltage after frequency */ |
@@ -113,25 +114,19 @@ static int cpu0_set_target(struct cpufreq_policy *policy, | |||
113 | pr_err("failed to scale voltage down: %d\n", ret); | 114 | pr_err("failed to scale voltage down: %d\n", ret); |
114 | clk_set_rate(cpu_clk, freqs.old * 1000); | 115 | clk_set_rate(cpu_clk, freqs.old * 1000); |
115 | freqs.new = freqs.old; | 116 | freqs.new = freqs.old; |
116 | return ret; | ||
117 | } | 117 | } |
118 | } | 118 | } |
119 | 119 | ||
120 | for_each_online_cpu(cpu) { | 120 | post_notify: |
121 | freqs.cpu = cpu; | 121 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
122 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | ||
123 | } | ||
124 | 122 | ||
125 | return 0; | 123 | return ret; |
126 | } | 124 | } |
127 | 125 | ||
128 | static int cpu0_cpufreq_init(struct cpufreq_policy *policy) | 126 | static int cpu0_cpufreq_init(struct cpufreq_policy *policy) |
129 | { | 127 | { |
130 | int ret; | 128 | int ret; |
131 | 129 | ||
132 | if (policy->cpu != 0) | ||
133 | return -EINVAL; | ||
134 | |||
135 | ret = cpufreq_frequency_table_cpuinfo(policy, freq_table); | 130 | ret = cpufreq_frequency_table_cpuinfo(policy, freq_table); |
136 | if (ret) { | 131 | if (ret) { |
137 | pr_err("invalid frequency table: %d\n", ret); | 132 | pr_err("invalid frequency table: %d\n", ret); |
@@ -262,6 +257,7 @@ static int cpu0_cpufreq_probe(struct platform_device *pdev) | |||
262 | } | 257 | } |
263 | 258 | ||
264 | of_node_put(np); | 259 | of_node_put(np); |
260 | of_node_put(parent); | ||
265 | return 0; | 261 | return 0; |
266 | 262 | ||
267 | out_free_table: | 263 | out_free_table: |
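The cpu0 driver change above also illustrates the companion rule: once CPUFREQ_PRECHANGE has been sent, every exit path must reset freqs.new to freqs.old and still send CPUFREQ_POSTCHANGE, otherwise governors and the stats code are left believing a transition completed. A condensed sketch of that flow; cpu_clk and the surrounding driver state are assumed, not defined here.

    static int sketch_set_target(struct cpufreq_policy *policy,
                                 struct cpufreq_freqs *freqs)
    {
            int ret;

            cpufreq_notify_transition(policy, freqs, CPUFREQ_PRECHANGE);

            ret = clk_set_rate(cpu_clk, freqs->new * 1000);
            if (ret) {
                    pr_err("failed to set clock rate: %d\n", ret);
                    /* Undo the announced change, but still notify below. */
                    freqs->new = freqs->old;
            }

            /* Always paired with the PRECHANGE above, success or failure. */
            cpufreq_notify_transition(policy, freqs, CPUFREQ_POSTCHANGE);
            return ret;
    }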
diff --git a/drivers/cpufreq/cpufreq-nforce2.c b/drivers/cpufreq/cpufreq-nforce2.c index 13d311ee08b3..af1542d41440 100644 --- a/drivers/cpufreq/cpufreq-nforce2.c +++ b/drivers/cpufreq/cpufreq-nforce2.c | |||
@@ -263,7 +263,6 @@ static int nforce2_target(struct cpufreq_policy *policy, | |||
263 | 263 | ||
264 | freqs.old = nforce2_get(policy->cpu); | 264 | freqs.old = nforce2_get(policy->cpu); |
265 | freqs.new = target_fsb * fid * 100; | 265 | freqs.new = target_fsb * fid * 100; |
266 | freqs.cpu = 0; /* Only one CPU on nForce2 platforms */ | ||
267 | 266 | ||
268 | if (freqs.old == freqs.new) | 267 | if (freqs.old == freqs.new) |
269 | return 0; | 268 | return 0; |
@@ -271,7 +270,7 @@ static int nforce2_target(struct cpufreq_policy *policy, | |||
271 | pr_debug("Old CPU frequency %d kHz, new %d kHz\n", | 270 | pr_debug("Old CPU frequency %d kHz, new %d kHz\n", |
272 | freqs.old, freqs.new); | 271 | freqs.old, freqs.new); |
273 | 272 | ||
274 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 273 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
275 | 274 | ||
276 | /* Disable IRQs */ | 275 | /* Disable IRQs */ |
277 | /* local_irq_save(flags); */ | 276 | /* local_irq_save(flags); */ |
@@ -286,7 +285,7 @@ static int nforce2_target(struct cpufreq_policy *policy, | |||
286 | /* Enable IRQs */ | 285 | /* Enable IRQs */ |
287 | /* local_irq_restore(flags); */ | 286 | /* local_irq_restore(flags); */ |
288 | 287 | ||
289 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 288 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
290 | 289 | ||
291 | return 0; | 290 | return 0; |
292 | } | 291 | } |
@@ -360,12 +359,10 @@ static int nforce2_cpu_init(struct cpufreq_policy *policy) | |||
360 | min_fsb = NFORCE2_MIN_FSB; | 359 | min_fsb = NFORCE2_MIN_FSB; |
361 | 360 | ||
362 | /* cpuinfo and default policy values */ | 361 | /* cpuinfo and default policy values */ |
363 | policy->cpuinfo.min_freq = min_fsb * fid * 100; | 362 | policy->min = policy->cpuinfo.min_freq = min_fsb * fid * 100; |
364 | policy->cpuinfo.max_freq = max_fsb * fid * 100; | 363 | policy->max = policy->cpuinfo.max_freq = max_fsb * fid * 100; |
365 | policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; | 364 | policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; |
366 | policy->cur = nforce2_get(policy->cpu); | 365 | policy->cur = nforce2_get(policy->cpu); |
367 | policy->min = policy->cpuinfo.min_freq; | ||
368 | policy->max = policy->cpuinfo.max_freq; | ||
369 | 366 | ||
370 | return 0; | 367 | return 0; |
371 | } | 368 | } |
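The nforce2 hunk only folds the separate policy->min/max assignments into the cpuinfo lines; nothing changes functionally, an init() callback still has to seed both the hardware limits and the initial policy bounds. A stripped-down init() of the same shape, where min_khz, max_khz and read_current_khz() are placeholders:

    static int sketch_cpu_init(struct cpufreq_policy *policy)
    {
            /* Hardware limits and the starting policy bounds in one go. */
            policy->min = policy->cpuinfo.min_freq = min_khz;
            policy->max = policy->cpuinfo.max_freq = max_khz;
            policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
            policy->cur = read_current_khz(policy->cpu);  /* hypothetical */
            return 0;
    }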
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index b02824d092e7..a6f65954b0ab 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c | |||
@@ -39,13 +39,13 @@ | |||
39 | * level driver of CPUFreq support, and its spinlock. This lock | 39 | * level driver of CPUFreq support, and its spinlock. This lock |
40 | * also protects the cpufreq_cpu_data array. | 40 | * also protects the cpufreq_cpu_data array. |
41 | */ | 41 | */ |
42 | static struct cpufreq_driver *cpufreq_driver; | 42 | static struct cpufreq_driver __rcu *cpufreq_driver; |
43 | static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data); | 43 | static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data); |
44 | #ifdef CONFIG_HOTPLUG_CPU | 44 | #ifdef CONFIG_HOTPLUG_CPU |
45 | /* This one keeps track of the previously set governor of a removed CPU */ | 45 | /* This one keeps track of the previously set governor of a removed CPU */ |
46 | static DEFINE_PER_CPU(char[CPUFREQ_NAME_LEN], cpufreq_cpu_governor); | 46 | static DEFINE_PER_CPU(char[CPUFREQ_NAME_LEN], cpufreq_cpu_governor); |
47 | #endif | 47 | #endif |
48 | static DEFINE_SPINLOCK(cpufreq_driver_lock); | 48 | static DEFINE_RWLOCK(cpufreq_driver_lock); |
49 | 49 | ||
50 | /* | 50 | /* |
51 | * cpu_policy_rwsem is a per CPU reader-writer semaphore designed to cure | 51 | * cpu_policy_rwsem is a per CPU reader-writer semaphore designed to cure |
@@ -128,23 +128,36 @@ void disable_cpufreq(void) | |||
128 | static LIST_HEAD(cpufreq_governor_list); | 128 | static LIST_HEAD(cpufreq_governor_list); |
129 | static DEFINE_MUTEX(cpufreq_governor_mutex); | 129 | static DEFINE_MUTEX(cpufreq_governor_mutex); |
130 | 130 | ||
131 | bool have_governor_per_policy(void) | ||
132 | { | ||
133 | bool have_governor_per_policy; | ||
134 | rcu_read_lock(); | ||
135 | have_governor_per_policy = | ||
136 | rcu_dereference(cpufreq_driver)->have_governor_per_policy; | ||
137 | rcu_read_unlock(); | ||
138 | return have_governor_per_policy; | ||
139 | } | ||
140 | |||
131 | static struct cpufreq_policy *__cpufreq_cpu_get(unsigned int cpu, bool sysfs) | 141 | static struct cpufreq_policy *__cpufreq_cpu_get(unsigned int cpu, bool sysfs) |
132 | { | 142 | { |
133 | struct cpufreq_policy *data; | 143 | struct cpufreq_policy *data; |
144 | struct cpufreq_driver *driver; | ||
134 | unsigned long flags; | 145 | unsigned long flags; |
135 | 146 | ||
136 | if (cpu >= nr_cpu_ids) | 147 | if (cpu >= nr_cpu_ids) |
137 | goto err_out; | 148 | goto err_out; |
138 | 149 | ||
139 | /* get the cpufreq driver */ | 150 | /* get the cpufreq driver */ |
140 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 151 | rcu_read_lock(); |
152 | driver = rcu_dereference(cpufreq_driver); | ||
141 | 153 | ||
142 | if (!cpufreq_driver) | 154 | if (!driver) |
143 | goto err_out_unlock; | 155 | goto err_out_unlock; |
144 | 156 | ||
145 | if (!try_module_get(cpufreq_driver->owner)) | 157 | if (!try_module_get(driver->owner)) |
146 | goto err_out_unlock; | 158 | goto err_out_unlock; |
147 | 159 | ||
160 | read_lock_irqsave(&cpufreq_driver_lock, flags); | ||
148 | 161 | ||
149 | /* get the CPU */ | 162 | /* get the CPU */ |
150 | data = per_cpu(cpufreq_cpu_data, cpu); | 163 | data = per_cpu(cpufreq_cpu_data, cpu); |
@@ -155,13 +168,15 @@ static struct cpufreq_policy *__cpufreq_cpu_get(unsigned int cpu, bool sysfs) | |||
155 | if (!sysfs && !kobject_get(&data->kobj)) | 168 | if (!sysfs && !kobject_get(&data->kobj)) |
156 | goto err_out_put_module; | 169 | goto err_out_put_module; |
157 | 170 | ||
158 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 171 | read_unlock_irqrestore(&cpufreq_driver_lock, flags); |
172 | rcu_read_unlock(); | ||
159 | return data; | 173 | return data; |
160 | 174 | ||
161 | err_out_put_module: | 175 | err_out_put_module: |
162 | module_put(cpufreq_driver->owner); | 176 | module_put(driver->owner); |
177 | read_unlock_irqrestore(&cpufreq_driver_lock, flags); | ||
163 | err_out_unlock: | 178 | err_out_unlock: |
164 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 179 | rcu_read_unlock(); |
165 | err_out: | 180 | err_out: |
166 | return NULL; | 181 | return NULL; |
167 | } | 182 | } |
@@ -184,7 +199,9 @@ static void __cpufreq_cpu_put(struct cpufreq_policy *data, bool sysfs) | |||
184 | { | 199 | { |
185 | if (!sysfs) | 200 | if (!sysfs) |
186 | kobject_put(&data->kobj); | 201 | kobject_put(&data->kobj); |
187 | module_put(cpufreq_driver->owner); | 202 | rcu_read_lock(); |
203 | module_put(rcu_dereference(cpufreq_driver)->owner); | ||
204 | rcu_read_unlock(); | ||
188 | } | 205 | } |
189 | 206 | ||
190 | void cpufreq_cpu_put(struct cpufreq_policy *data) | 207 | void cpufreq_cpu_put(struct cpufreq_policy *data) |
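From here on, cpufreq.c treats cpufreq_driver as an __rcu pointer: every reader brackets the access with rcu_read_lock()/rcu_read_unlock(), dereferences it with rcu_dereference(), and copies out whatever it needs (a flag, a callback pointer, the owner module) before leaving the read-side section. A compact illustration of that snapshot-then-call idiom, using only the standard RCU primitives and the cpufreq_driver pointer introduced above:

    static unsigned int sketch_quick_get(unsigned int cpu)
    {
            unsigned int (*get_freq)(unsigned int cpu);
            struct cpufreq_driver *drv;

            rcu_read_lock();
            drv = rcu_dereference(cpufreq_driver);
            if (!drv || !drv->get) {
                    rcu_read_unlock();
                    return 0;
            }
            get_freq = drv->get;        /* snapshot the callback ...    */
            rcu_read_unlock();

            return get_freq(cpu);       /* ... then call it unlocked    */
    }

Calling the snapshot after rcu_read_unlock() is only safe because the unregister path below waits in synchronize_rcu() and because the module reference taken in __cpufreq_cpu_get() pins the driver code; this commit relies on both.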
@@ -244,32 +261,20 @@ static inline void adjust_jiffies(unsigned long val, struct cpufreq_freqs *ci) | |||
244 | #endif | 261 | #endif |
245 | 262 | ||
246 | 263 | ||
247 | /** | 264 | void __cpufreq_notify_transition(struct cpufreq_policy *policy, |
248 | * cpufreq_notify_transition - call notifier chain and adjust_jiffies | 265 | struct cpufreq_freqs *freqs, unsigned int state) |
249 | * on frequency transition. | ||
250 | * | ||
251 | * This function calls the transition notifiers and the "adjust_jiffies" | ||
252 | * function. It is called twice on all CPU frequency changes that have | ||
253 | * external effects. | ||
254 | */ | ||
255 | void cpufreq_notify_transition(struct cpufreq_freqs *freqs, unsigned int state) | ||
256 | { | 266 | { |
257 | struct cpufreq_policy *policy; | ||
258 | unsigned long flags; | ||
259 | |||
260 | BUG_ON(irqs_disabled()); | 267 | BUG_ON(irqs_disabled()); |
261 | 268 | ||
262 | if (cpufreq_disabled()) | 269 | if (cpufreq_disabled()) |
263 | return; | 270 | return; |
264 | 271 | ||
265 | freqs->flags = cpufreq_driver->flags; | 272 | rcu_read_lock(); |
273 | freqs->flags = rcu_dereference(cpufreq_driver)->flags; | ||
274 | rcu_read_unlock(); | ||
266 | pr_debug("notification %u of frequency transition to %u kHz\n", | 275 | pr_debug("notification %u of frequency transition to %u kHz\n", |
267 | state, freqs->new); | 276 | state, freqs->new); |
268 | 277 | ||
269 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | ||
270 | policy = per_cpu(cpufreq_cpu_data, freqs->cpu); | ||
271 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | ||
272 | |||
273 | switch (state) { | 278 | switch (state) { |
274 | 279 | ||
275 | case CPUFREQ_PRECHANGE: | 280 | case CPUFREQ_PRECHANGE: |
@@ -277,7 +282,7 @@ void cpufreq_notify_transition(struct cpufreq_freqs *freqs, unsigned int state) | |||
277 | * which is not equal to what the cpufreq core thinks is | 282 | * which is not equal to what the cpufreq core thinks is |
278 | * "old frequency". | 283 | * "old frequency". |
279 | */ | 284 | */ |
280 | if (!(cpufreq_driver->flags & CPUFREQ_CONST_LOOPS)) { | 285 | if (!(freqs->flags & CPUFREQ_CONST_LOOPS)) { |
281 | if ((policy) && (policy->cpu == freqs->cpu) && | 286 | if ((policy) && (policy->cpu == freqs->cpu) && |
282 | (policy->cur) && (policy->cur != freqs->old)) { | 287 | (policy->cur) && (policy->cur != freqs->old)) { |
283 | pr_debug("Warning: CPU frequency is" | 288 | pr_debug("Warning: CPU frequency is" |
@@ -303,6 +308,20 @@ void cpufreq_notify_transition(struct cpufreq_freqs *freqs, unsigned int state) | |||
303 | break; | 308 | break; |
304 | } | 309 | } |
305 | } | 310 | } |
311 | /** | ||
312 | * cpufreq_notify_transition - call notifier chain and adjust_jiffies | ||
313 | * on frequency transition. | ||
314 | * | ||
315 | * This function calls the transition notifiers and the "adjust_jiffies" | ||
316 | * function. It is called twice on all CPU frequency changes that have | ||
317 | * external effects. | ||
318 | */ | ||
319 | void cpufreq_notify_transition(struct cpufreq_policy *policy, | ||
320 | struct cpufreq_freqs *freqs, unsigned int state) | ||
321 | { | ||
322 | for_each_cpu(freqs->cpu, policy->cpus) | ||
323 | __cpufreq_notify_transition(policy, freqs, state); | ||
324 | } | ||
306 | EXPORT_SYMBOL_GPL(cpufreq_notify_transition); | 325 | EXPORT_SYMBOL_GPL(cpufreq_notify_transition); |
307 | 326 | ||
308 | 327 | ||
@@ -329,11 +348,21 @@ static int cpufreq_parse_governor(char *str_governor, unsigned int *policy, | |||
329 | struct cpufreq_governor **governor) | 348 | struct cpufreq_governor **governor) |
330 | { | 349 | { |
331 | int err = -EINVAL; | 350 | int err = -EINVAL; |
332 | 351 | struct cpufreq_driver *driver; | |
333 | if (!cpufreq_driver) | 352 | bool has_setpolicy; |
353 | bool has_target; | ||
354 | |||
355 | rcu_read_lock(); | ||
356 | driver = rcu_dereference(cpufreq_driver); | ||
357 | if (!driver) { | ||
358 | rcu_read_unlock(); | ||
334 | goto out; | 359 | goto out; |
360 | } | ||
361 | has_setpolicy = driver->setpolicy ? true : false; | ||
362 | has_target = driver->target ? true : false; | ||
363 | rcu_read_unlock(); | ||
335 | 364 | ||
336 | if (cpufreq_driver->setpolicy) { | 365 | if (has_setpolicy) { |
337 | if (!strnicmp(str_governor, "performance", CPUFREQ_NAME_LEN)) { | 366 | if (!strnicmp(str_governor, "performance", CPUFREQ_NAME_LEN)) { |
338 | *policy = CPUFREQ_POLICY_PERFORMANCE; | 367 | *policy = CPUFREQ_POLICY_PERFORMANCE; |
339 | err = 0; | 368 | err = 0; |
@@ -342,7 +371,7 @@ static int cpufreq_parse_governor(char *str_governor, unsigned int *policy, | |||
342 | *policy = CPUFREQ_POLICY_POWERSAVE; | 371 | *policy = CPUFREQ_POLICY_POWERSAVE; |
343 | err = 0; | 372 | err = 0; |
344 | } | 373 | } |
345 | } else if (cpufreq_driver->target) { | 374 | } else if (has_target) { |
346 | struct cpufreq_governor *t; | 375 | struct cpufreq_governor *t; |
347 | 376 | ||
348 | mutex_lock(&cpufreq_governor_mutex); | 377 | mutex_lock(&cpufreq_governor_mutex); |
@@ -493,7 +522,12 @@ static ssize_t store_scaling_governor(struct cpufreq_policy *policy, | |||
493 | */ | 522 | */ |
494 | static ssize_t show_scaling_driver(struct cpufreq_policy *policy, char *buf) | 523 | static ssize_t show_scaling_driver(struct cpufreq_policy *policy, char *buf) |
495 | { | 524 | { |
496 | return scnprintf(buf, CPUFREQ_NAME_PLEN, "%s\n", cpufreq_driver->name); | 525 | ssize_t size; |
526 | rcu_read_lock(); | ||
527 | size = scnprintf(buf, CPUFREQ_NAME_PLEN, "%s\n", | ||
528 | rcu_dereference(cpufreq_driver)->name); | ||
529 | rcu_read_unlock(); | ||
530 | return size; | ||
497 | } | 531 | } |
498 | 532 | ||
499 | /** | 533 | /** |
@@ -505,10 +539,13 @@ static ssize_t show_scaling_available_governors(struct cpufreq_policy *policy, | |||
505 | ssize_t i = 0; | 539 | ssize_t i = 0; |
506 | struct cpufreq_governor *t; | 540 | struct cpufreq_governor *t; |
507 | 541 | ||
508 | if (!cpufreq_driver->target) { | 542 | rcu_read_lock(); |
543 | if (!rcu_dereference(cpufreq_driver)->target) { | ||
544 | rcu_read_unlock(); | ||
509 | i += sprintf(buf, "performance powersave"); | 545 | i += sprintf(buf, "performance powersave"); |
510 | goto out; | 546 | goto out; |
511 | } | 547 | } |
548 | rcu_read_unlock(); | ||
512 | 549 | ||
513 | list_for_each_entry(t, &cpufreq_governor_list, governor_list) { | 550 | list_for_each_entry(t, &cpufreq_governor_list, governor_list) { |
514 | if (i >= (ssize_t) ((PAGE_SIZE / sizeof(char)) | 551 | if (i >= (ssize_t) ((PAGE_SIZE / sizeof(char)) |
@@ -586,9 +623,15 @@ static ssize_t show_scaling_setspeed(struct cpufreq_policy *policy, char *buf) | |||
586 | static ssize_t show_bios_limit(struct cpufreq_policy *policy, char *buf) | 623 | static ssize_t show_bios_limit(struct cpufreq_policy *policy, char *buf) |
587 | { | 624 | { |
588 | unsigned int limit; | 625 | unsigned int limit; |
626 | int (*bios_limit)(int cpu, unsigned int *limit); | ||
589 | int ret; | 627 | int ret; |
590 | if (cpufreq_driver->bios_limit) { | 628 | |
591 | ret = cpufreq_driver->bios_limit(policy->cpu, &limit); | 629 | rcu_read_lock(); |
630 | bios_limit = rcu_dereference(cpufreq_driver)->bios_limit; | ||
631 | rcu_read_unlock(); | ||
632 | |||
633 | if (bios_limit) { | ||
634 | ret = bios_limit(policy->cpu, &limit); | ||
592 | if (!ret) | 635 | if (!ret) |
593 | return sprintf(buf, "%u\n", limit); | 636 | return sprintf(buf, "%u\n", limit); |
594 | } | 637 | } |
@@ -731,6 +774,7 @@ static int cpufreq_add_dev_interface(unsigned int cpu, | |||
731 | { | 774 | { |
732 | struct cpufreq_policy new_policy; | 775 | struct cpufreq_policy new_policy; |
733 | struct freq_attr **drv_attr; | 776 | struct freq_attr **drv_attr; |
777 | struct cpufreq_driver *driver; | ||
734 | unsigned long flags; | 778 | unsigned long flags; |
735 | int ret = 0; | 779 | int ret = 0; |
736 | unsigned int j; | 780 | unsigned int j; |
@@ -742,35 +786,38 @@ static int cpufreq_add_dev_interface(unsigned int cpu, | |||
742 | return ret; | 786 | return ret; |
743 | 787 | ||
744 | /* set up files for this cpu device */ | 788 | /* set up files for this cpu device */ |
745 | drv_attr = cpufreq_driver->attr; | 789 | rcu_read_lock(); |
790 | driver = rcu_dereference(cpufreq_driver); | ||
791 | drv_attr = driver->attr; | ||
746 | while ((drv_attr) && (*drv_attr)) { | 792 | while ((drv_attr) && (*drv_attr)) { |
747 | ret = sysfs_create_file(&policy->kobj, &((*drv_attr)->attr)); | 793 | ret = sysfs_create_file(&policy->kobj, &((*drv_attr)->attr)); |
748 | if (ret) | 794 | if (ret) |
749 | goto err_out_kobj_put; | 795 | goto err_out_unlock; |
750 | drv_attr++; | 796 | drv_attr++; |
751 | } | 797 | } |
752 | if (cpufreq_driver->get) { | 798 | if (driver->get) { |
753 | ret = sysfs_create_file(&policy->kobj, &cpuinfo_cur_freq.attr); | 799 | ret = sysfs_create_file(&policy->kobj, &cpuinfo_cur_freq.attr); |
754 | if (ret) | 800 | if (ret) |
755 | goto err_out_kobj_put; | 801 | goto err_out_unlock; |
756 | } | 802 | } |
757 | if (cpufreq_driver->target) { | 803 | if (driver->target) { |
758 | ret = sysfs_create_file(&policy->kobj, &scaling_cur_freq.attr); | 804 | ret = sysfs_create_file(&policy->kobj, &scaling_cur_freq.attr); |
759 | if (ret) | 805 | if (ret) |
760 | goto err_out_kobj_put; | 806 | goto err_out_unlock; |
761 | } | 807 | } |
762 | if (cpufreq_driver->bios_limit) { | 808 | if (driver->bios_limit) { |
763 | ret = sysfs_create_file(&policy->kobj, &bios_limit.attr); | 809 | ret = sysfs_create_file(&policy->kobj, &bios_limit.attr); |
764 | if (ret) | 810 | if (ret) |
765 | goto err_out_kobj_put; | 811 | goto err_out_unlock; |
766 | } | 812 | } |
813 | rcu_read_unlock(); | ||
767 | 814 | ||
768 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 815 | write_lock_irqsave(&cpufreq_driver_lock, flags); |
769 | for_each_cpu(j, policy->cpus) { | 816 | for_each_cpu(j, policy->cpus) { |
770 | per_cpu(cpufreq_cpu_data, j) = policy; | 817 | per_cpu(cpufreq_cpu_data, j) = policy; |
771 | per_cpu(cpufreq_policy_cpu, j) = policy->cpu; | 818 | per_cpu(cpufreq_policy_cpu, j) = policy->cpu; |
772 | } | 819 | } |
773 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 820 | write_unlock_irqrestore(&cpufreq_driver_lock, flags); |
774 | 821 | ||
775 | ret = cpufreq_add_dev_symlink(cpu, policy); | 822 | ret = cpufreq_add_dev_symlink(cpu, policy); |
776 | if (ret) | 823 | if (ret) |
@@ -786,12 +833,20 @@ static int cpufreq_add_dev_interface(unsigned int cpu, | |||
786 | policy->user_policy.governor = policy->governor; | 833 | policy->user_policy.governor = policy->governor; |
787 | 834 | ||
788 | if (ret) { | 835 | if (ret) { |
836 | int (*exit)(struct cpufreq_policy *policy); | ||
837 | |||
789 | pr_debug("setting policy failed\n"); | 838 | pr_debug("setting policy failed\n"); |
790 | if (cpufreq_driver->exit) | 839 | rcu_read_lock(); |
791 | cpufreq_driver->exit(policy); | 840 | exit = rcu_dereference(cpufreq_driver)->exit; |
841 | rcu_read_unlock(); | ||
842 | if (exit) | ||
843 | exit(policy); | ||
844 | |||
792 | } | 845 | } |
793 | return ret; | 846 | return ret; |
794 | 847 | ||
848 | err_out_unlock: | ||
849 | rcu_read_unlock(); | ||
795 | err_out_kobj_put: | 850 | err_out_kobj_put: |
796 | kobject_put(&policy->kobj); | 851 | kobject_put(&policy->kobj); |
797 | wait_for_completion(&policy->kobj_unregister); | 852 | wait_for_completion(&policy->kobj_unregister); |
@@ -803,27 +858,34 @@ static int cpufreq_add_policy_cpu(unsigned int cpu, unsigned int sibling, | |||
803 | struct device *dev) | 858 | struct device *dev) |
804 | { | 859 | { |
805 | struct cpufreq_policy *policy; | 860 | struct cpufreq_policy *policy; |
806 | int ret = 0; | 861 | int ret = 0, has_target = 0; |
807 | unsigned long flags; | 862 | unsigned long flags; |
808 | 863 | ||
809 | policy = cpufreq_cpu_get(sibling); | 864 | policy = cpufreq_cpu_get(sibling); |
810 | WARN_ON(!policy); | 865 | WARN_ON(!policy); |
811 | 866 | ||
812 | __cpufreq_governor(policy, CPUFREQ_GOV_STOP); | 867 | rcu_read_lock(); |
868 | has_target = !!rcu_dereference(cpufreq_driver)->target; | ||
869 | rcu_read_unlock(); | ||
870 | |||
871 | if (has_target) | ||
872 | __cpufreq_governor(policy, CPUFREQ_GOV_STOP); | ||
813 | 873 | ||
814 | lock_policy_rwsem_write(sibling); | 874 | lock_policy_rwsem_write(sibling); |
815 | 875 | ||
816 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 876 | write_lock_irqsave(&cpufreq_driver_lock, flags); |
817 | 877 | ||
818 | cpumask_set_cpu(cpu, policy->cpus); | 878 | cpumask_set_cpu(cpu, policy->cpus); |
819 | per_cpu(cpufreq_policy_cpu, cpu) = policy->cpu; | 879 | per_cpu(cpufreq_policy_cpu, cpu) = policy->cpu; |
820 | per_cpu(cpufreq_cpu_data, cpu) = policy; | 880 | per_cpu(cpufreq_cpu_data, cpu) = policy; |
821 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 881 | write_unlock_irqrestore(&cpufreq_driver_lock, flags); |
822 | 882 | ||
823 | unlock_policy_rwsem_write(sibling); | 883 | unlock_policy_rwsem_write(sibling); |
824 | 884 | ||
825 | __cpufreq_governor(policy, CPUFREQ_GOV_START); | 885 | if (has_target) { |
826 | __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS); | 886 | __cpufreq_governor(policy, CPUFREQ_GOV_START); |
887 | __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS); | ||
888 | } | ||
827 | 889 | ||
828 | ret = sysfs_create_link(&dev->kobj, &policy->kobj, "cpufreq"); | 890 | ret = sysfs_create_link(&dev->kobj, &policy->kobj, "cpufreq"); |
829 | if (ret) { | 891 | if (ret) { |
@@ -849,6 +911,8 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) | |||
849 | unsigned int j, cpu = dev->id; | 911 | unsigned int j, cpu = dev->id; |
850 | int ret = -ENOMEM; | 912 | int ret = -ENOMEM; |
851 | struct cpufreq_policy *policy; | 913 | struct cpufreq_policy *policy; |
914 | struct cpufreq_driver *driver; | ||
915 | int (*init)(struct cpufreq_policy *policy); | ||
852 | unsigned long flags; | 916 | unsigned long flags; |
853 | #ifdef CONFIG_HOTPLUG_CPU | 917 | #ifdef CONFIG_HOTPLUG_CPU |
854 | struct cpufreq_governor *gov; | 918 | struct cpufreq_governor *gov; |
@@ -871,22 +935,27 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) | |||
871 | 935 | ||
872 | #ifdef CONFIG_HOTPLUG_CPU | 936 | #ifdef CONFIG_HOTPLUG_CPU |
873 | /* Check if this cpu was hot-unplugged earlier and has siblings */ | 937 | /* Check if this cpu was hot-unplugged earlier and has siblings */ |
874 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 938 | read_lock_irqsave(&cpufreq_driver_lock, flags); |
875 | for_each_online_cpu(sibling) { | 939 | for_each_online_cpu(sibling) { |
876 | struct cpufreq_policy *cp = per_cpu(cpufreq_cpu_data, sibling); | 940 | struct cpufreq_policy *cp = per_cpu(cpufreq_cpu_data, sibling); |
877 | if (cp && cpumask_test_cpu(cpu, cp->related_cpus)) { | 941 | if (cp && cpumask_test_cpu(cpu, cp->related_cpus)) { |
878 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 942 | read_unlock_irqrestore(&cpufreq_driver_lock, flags); |
879 | return cpufreq_add_policy_cpu(cpu, sibling, dev); | 943 | return cpufreq_add_policy_cpu(cpu, sibling, dev); |
880 | } | 944 | } |
881 | } | 945 | } |
882 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 946 | read_unlock_irqrestore(&cpufreq_driver_lock, flags); |
883 | #endif | 947 | #endif |
884 | #endif | 948 | #endif |
885 | 949 | ||
886 | if (!try_module_get(cpufreq_driver->owner)) { | 950 | rcu_read_lock(); |
951 | driver = rcu_dereference(cpufreq_driver); | ||
952 | if (!try_module_get(driver->owner)) { | ||
953 | rcu_read_unlock(); | ||
887 | ret = -EINVAL; | 954 | ret = -EINVAL; |
888 | goto module_out; | 955 | goto module_out; |
889 | } | 956 | } |
957 | init = driver->init; | ||
958 | rcu_read_unlock(); | ||
890 | 959 | ||
891 | policy = kzalloc(sizeof(struct cpufreq_policy), GFP_KERNEL); | 960 | policy = kzalloc(sizeof(struct cpufreq_policy), GFP_KERNEL); |
892 | if (!policy) | 961 | if (!policy) |
@@ -911,7 +980,7 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) | |||
911 | /* call driver. From then on the cpufreq must be able | 980 | /* call driver. From then on the cpufreq must be able |
912 | * to accept all calls to ->verify and ->setpolicy for this CPU | 981 | * to accept all calls to ->verify and ->setpolicy for this CPU |
913 | */ | 982 | */ |
914 | ret = cpufreq_driver->init(policy); | 983 | ret = init(policy); |
915 | if (ret) { | 984 | if (ret) { |
916 | pr_debug("initialization failed\n"); | 985 | pr_debug("initialization failed\n"); |
917 | goto err_set_policy_cpu; | 986 | goto err_set_policy_cpu; |
@@ -946,16 +1015,18 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) | |||
946 | goto err_out_unregister; | 1015 | goto err_out_unregister; |
947 | 1016 | ||
948 | kobject_uevent(&policy->kobj, KOBJ_ADD); | 1017 | kobject_uevent(&policy->kobj, KOBJ_ADD); |
949 | module_put(cpufreq_driver->owner); | 1018 | rcu_read_lock(); |
1019 | module_put(rcu_dereference(cpufreq_driver)->owner); | ||
1020 | rcu_read_unlock(); | ||
950 | pr_debug("initialization complete\n"); | 1021 | pr_debug("initialization complete\n"); |
951 | 1022 | ||
952 | return 0; | 1023 | return 0; |
953 | 1024 | ||
954 | err_out_unregister: | 1025 | err_out_unregister: |
955 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 1026 | write_lock_irqsave(&cpufreq_driver_lock, flags); |
956 | for_each_cpu(j, policy->cpus) | 1027 | for_each_cpu(j, policy->cpus) |
957 | per_cpu(cpufreq_cpu_data, j) = NULL; | 1028 | per_cpu(cpufreq_cpu_data, j) = NULL; |
958 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 1029 | write_unlock_irqrestore(&cpufreq_driver_lock, flags); |
959 | 1030 | ||
960 | kobject_put(&policy->kobj); | 1031 | kobject_put(&policy->kobj); |
961 | wait_for_completion(&policy->kobj_unregister); | 1032 | wait_for_completion(&policy->kobj_unregister); |
@@ -968,7 +1039,9 @@ err_free_cpumask: | |||
968 | err_free_policy: | 1039 | err_free_policy: |
969 | kfree(policy); | 1040 | kfree(policy); |
970 | nomem_out: | 1041 | nomem_out: |
971 | module_put(cpufreq_driver->owner); | 1042 | rcu_read_lock(); |
1043 | module_put(rcu_dereference(cpufreq_driver)->owner); | ||
1044 | rcu_read_unlock(); | ||
972 | module_out: | 1045 | module_out: |
973 | return ret; | 1046 | return ret; |
974 | } | 1047 | } |
@@ -1002,36 +1075,46 @@ static int __cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif | |||
1002 | unsigned int cpu = dev->id, ret, cpus; | 1075 | unsigned int cpu = dev->id, ret, cpus; |
1003 | unsigned long flags; | 1076 | unsigned long flags; |
1004 | struct cpufreq_policy *data; | 1077 | struct cpufreq_policy *data; |
1078 | struct cpufreq_driver *driver; | ||
1005 | struct kobject *kobj; | 1079 | struct kobject *kobj; |
1006 | struct completion *cmp; | 1080 | struct completion *cmp; |
1007 | struct device *cpu_dev; | 1081 | struct device *cpu_dev; |
1082 | bool has_target; | ||
1083 | int (*exit)(struct cpufreq_policy *policy); | ||
1008 | 1084 | ||
1009 | pr_debug("%s: unregistering CPU %u\n", __func__, cpu); | 1085 | pr_debug("%s: unregistering CPU %u\n", __func__, cpu); |
1010 | 1086 | ||
1011 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 1087 | write_lock_irqsave(&cpufreq_driver_lock, flags); |
1012 | 1088 | ||
1013 | data = per_cpu(cpufreq_cpu_data, cpu); | 1089 | data = per_cpu(cpufreq_cpu_data, cpu); |
1014 | per_cpu(cpufreq_cpu_data, cpu) = NULL; | 1090 | per_cpu(cpufreq_cpu_data, cpu) = NULL; |
1015 | 1091 | ||
1016 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 1092 | write_unlock_irqrestore(&cpufreq_driver_lock, flags); |
1017 | 1093 | ||
1018 | if (!data) { | 1094 | if (!data) { |
1019 | pr_debug("%s: No cpu_data found\n", __func__); | 1095 | pr_debug("%s: No cpu_data found\n", __func__); |
1020 | return -EINVAL; | 1096 | return -EINVAL; |
1021 | } | 1097 | } |
1022 | 1098 | ||
1023 | if (cpufreq_driver->target) | 1099 | rcu_read_lock(); |
1100 | driver = rcu_dereference(cpufreq_driver); | ||
1101 | has_target = driver->target ? true : false; | ||
1102 | exit = driver->exit; | ||
1103 | if (has_target) | ||
1024 | __cpufreq_governor(data, CPUFREQ_GOV_STOP); | 1104 | __cpufreq_governor(data, CPUFREQ_GOV_STOP); |
1025 | 1105 | ||
1026 | #ifdef CONFIG_HOTPLUG_CPU | 1106 | #ifdef CONFIG_HOTPLUG_CPU |
1027 | if (!cpufreq_driver->setpolicy) | 1107 | if (!driver->setpolicy) |
1028 | strncpy(per_cpu(cpufreq_cpu_governor, cpu), | 1108 | strncpy(per_cpu(cpufreq_cpu_governor, cpu), |
1029 | data->governor->name, CPUFREQ_NAME_LEN); | 1109 | data->governor->name, CPUFREQ_NAME_LEN); |
1030 | #endif | 1110 | #endif |
1111 | rcu_read_unlock(); | ||
1031 | 1112 | ||
1032 | WARN_ON(lock_policy_rwsem_write(cpu)); | 1113 | WARN_ON(lock_policy_rwsem_write(cpu)); |
1033 | cpus = cpumask_weight(data->cpus); | 1114 | cpus = cpumask_weight(data->cpus); |
1034 | cpumask_clear_cpu(cpu, data->cpus); | 1115 | |
1116 | if (cpus > 1) | ||
1117 | cpumask_clear_cpu(cpu, data->cpus); | ||
1035 | unlock_policy_rwsem_write(cpu); | 1118 | unlock_policy_rwsem_write(cpu); |
1036 | 1119 | ||
1037 | if (cpu != data->cpu) { | 1120 | if (cpu != data->cpu) { |
@@ -1047,9 +1130,9 @@ static int __cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif | |||
1047 | WARN_ON(lock_policy_rwsem_write(cpu)); | 1130 | WARN_ON(lock_policy_rwsem_write(cpu)); |
1048 | cpumask_set_cpu(cpu, data->cpus); | 1131 | cpumask_set_cpu(cpu, data->cpus); |
1049 | 1132 | ||
1050 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 1133 | write_lock_irqsave(&cpufreq_driver_lock, flags); |
1051 | per_cpu(cpufreq_cpu_data, cpu) = data; | 1134 | per_cpu(cpufreq_cpu_data, cpu) = data; |
1052 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 1135 | write_unlock_irqrestore(&cpufreq_driver_lock, flags); |
1053 | 1136 | ||
1054 | unlock_policy_rwsem_write(cpu); | 1137 | unlock_policy_rwsem_write(cpu); |
1055 | 1138 | ||
@@ -1070,6 +1153,9 @@ static int __cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif | |||
1070 | 1153 | ||
1071 | /* If cpu is last user of policy, free policy */ | 1154 | /* If cpu is last user of policy, free policy */ |
1072 | if (cpus == 1) { | 1155 | if (cpus == 1) { |
1156 | if (has_target) | ||
1157 | __cpufreq_governor(data, CPUFREQ_GOV_POLICY_EXIT); | ||
1158 | |||
1073 | lock_policy_rwsem_read(cpu); | 1159 | lock_policy_rwsem_read(cpu); |
1074 | kobj = &data->kobj; | 1160 | kobj = &data->kobj; |
1075 | cmp = &data->kobj_unregister; | 1161 | cmp = &data->kobj_unregister; |
@@ -1084,13 +1170,13 @@ static int __cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif | |||
1084 | wait_for_completion(cmp); | 1170 | wait_for_completion(cmp); |
1085 | pr_debug("wait complete\n"); | 1171 | pr_debug("wait complete\n"); |
1086 | 1172 | ||
1087 | if (cpufreq_driver->exit) | 1173 | if (exit) |
1088 | cpufreq_driver->exit(data); | 1174 | exit(data); |
1089 | 1175 | ||
1090 | free_cpumask_var(data->related_cpus); | 1176 | free_cpumask_var(data->related_cpus); |
1091 | free_cpumask_var(data->cpus); | 1177 | free_cpumask_var(data->cpus); |
1092 | kfree(data); | 1178 | kfree(data); |
1093 | } else if (cpufreq_driver->target) { | 1179 | } else if (has_target) { |
1094 | __cpufreq_governor(data, CPUFREQ_GOV_START); | 1180 | __cpufreq_governor(data, CPUFREQ_GOV_START); |
1095 | __cpufreq_governor(data, CPUFREQ_GOV_LIMITS); | 1181 | __cpufreq_governor(data, CPUFREQ_GOV_LIMITS); |
1096 | } | 1182 | } |
@@ -1134,16 +1220,23 @@ static void handle_update(struct work_struct *work) | |||
1134 | static void cpufreq_out_of_sync(unsigned int cpu, unsigned int old_freq, | 1220 | static void cpufreq_out_of_sync(unsigned int cpu, unsigned int old_freq, |
1135 | unsigned int new_freq) | 1221 | unsigned int new_freq) |
1136 | { | 1222 | { |
1223 | struct cpufreq_policy *policy; | ||
1137 | struct cpufreq_freqs freqs; | 1224 | struct cpufreq_freqs freqs; |
1225 | unsigned long flags; | ||
1226 | |||
1138 | 1227 | ||
1139 | pr_debug("Warning: CPU frequency out of sync: cpufreq and timing " | 1228 | pr_debug("Warning: CPU frequency out of sync: cpufreq and timing " |
1140 | "core thinks of %u, is %u kHz.\n", old_freq, new_freq); | 1229 | "core thinks of %u, is %u kHz.\n", old_freq, new_freq); |
1141 | 1230 | ||
1142 | freqs.cpu = cpu; | ||
1143 | freqs.old = old_freq; | 1231 | freqs.old = old_freq; |
1144 | freqs.new = new_freq; | 1232 | freqs.new = new_freq; |
1145 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 1233 | |
1146 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 1234 | read_lock_irqsave(&cpufreq_driver_lock, flags); |
1235 | policy = per_cpu(cpufreq_cpu_data, cpu); | ||
1236 | read_unlock_irqrestore(&cpufreq_driver_lock, flags); | ||
1237 | |||
1238 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); | ||
1239 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); | ||
1147 | } | 1240 | } |
1148 | 1241 | ||
1149 | 1242 | ||
@@ -1157,10 +1250,18 @@ static void cpufreq_out_of_sync(unsigned int cpu, unsigned int old_freq, | |||
1157 | unsigned int cpufreq_quick_get(unsigned int cpu) | 1250 | unsigned int cpufreq_quick_get(unsigned int cpu) |
1158 | { | 1251 | { |
1159 | struct cpufreq_policy *policy; | 1252 | struct cpufreq_policy *policy; |
1253 | struct cpufreq_driver *driver; | ||
1254 | unsigned int (*get)(unsigned int cpu); | ||
1160 | unsigned int ret_freq = 0; | 1255 | unsigned int ret_freq = 0; |
1161 | 1256 | ||
1162 | if (cpufreq_driver && cpufreq_driver->setpolicy && cpufreq_driver->get) | 1257 | rcu_read_lock(); |
1163 | return cpufreq_driver->get(cpu); | 1258 | driver = rcu_dereference(cpufreq_driver); |
1259 | if (driver && driver->setpolicy && driver->get) { | ||
1260 | get = driver->get; | ||
1261 | rcu_read_unlock(); | ||
1262 | return get(cpu); | ||
1263 | } | ||
1264 | rcu_read_unlock(); | ||
1164 | 1265 | ||
1165 | policy = cpufreq_cpu_get(cpu); | 1266 | policy = cpufreq_cpu_get(cpu); |
1166 | if (policy) { | 1267 | if (policy) { |
@@ -1196,15 +1297,26 @@ EXPORT_SYMBOL(cpufreq_quick_get_max); | |||
1196 | static unsigned int __cpufreq_get(unsigned int cpu) | 1297 | static unsigned int __cpufreq_get(unsigned int cpu) |
1197 | { | 1298 | { |
1198 | struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu); | 1299 | struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu); |
1300 | struct cpufreq_driver *driver; | ||
1301 | unsigned int (*get)(unsigned int cpu); | ||
1199 | unsigned int ret_freq = 0; | 1302 | unsigned int ret_freq = 0; |
1303 | u8 flags; | ||
1304 | |||
1200 | 1305 | ||
1201 | if (!cpufreq_driver->get) | 1306 | rcu_read_lock(); |
1307 | driver = rcu_dereference(cpufreq_driver); | ||
1308 | if (!driver->get) { | ||
1309 | rcu_read_unlock(); | ||
1202 | return ret_freq; | 1310 | return ret_freq; |
1311 | } | ||
1312 | flags = driver->flags; | ||
1313 | get = driver->get; | ||
1314 | rcu_read_unlock(); | ||
1203 | 1315 | ||
1204 | ret_freq = cpufreq_driver->get(cpu); | 1316 | ret_freq = get(cpu); |
1205 | 1317 | ||
1206 | if (ret_freq && policy->cur && | 1318 | if (ret_freq && policy->cur && |
1207 | !(cpufreq_driver->flags & CPUFREQ_CONST_LOOPS)) { | 1319 | !(flags & CPUFREQ_CONST_LOOPS)) { |
1208 | /* verify no discrepancy between actual and | 1320 | /* verify no discrepancy between actual and |
1209 | saved value exists */ | 1321 | saved value exists */ |
1210 | if (unlikely(ret_freq != policy->cur)) { | 1322 | if (unlikely(ret_freq != policy->cur)) { |
@@ -1260,6 +1372,7 @@ static struct subsys_interface cpufreq_interface = { | |||
1260 | */ | 1372 | */ |
1261 | static int cpufreq_bp_suspend(void) | 1373 | static int cpufreq_bp_suspend(void) |
1262 | { | 1374 | { |
1375 | int (*suspend)(struct cpufreq_policy *policy); | ||
1263 | int ret = 0; | 1376 | int ret = 0; |
1264 | 1377 | ||
1265 | int cpu = smp_processor_id(); | 1378 | int cpu = smp_processor_id(); |
@@ -1272,8 +1385,11 @@ static int cpufreq_bp_suspend(void) | |||
1272 | if (!cpu_policy) | 1385 | if (!cpu_policy) |
1273 | return 0; | 1386 | return 0; |
1274 | 1387 | ||
1275 | if (cpufreq_driver->suspend) { | 1388 | rcu_read_lock(); |
1276 | ret = cpufreq_driver->suspend(cpu_policy); | 1389 | suspend = rcu_dereference(cpufreq_driver)->suspend; |
1390 | rcu_read_unlock(); | ||
1391 | if (suspend) { | ||
1392 | ret = suspend(cpu_policy); | ||
1277 | if (ret) | 1393 | if (ret) |
1278 | printk(KERN_ERR "cpufreq: suspend failed in ->suspend " | 1394 | printk(KERN_ERR "cpufreq: suspend failed in ->suspend " |
1279 | "step on CPU %u\n", cpu_policy->cpu); | 1395 | "step on CPU %u\n", cpu_policy->cpu); |
@@ -1299,6 +1415,7 @@ static int cpufreq_bp_suspend(void) | |||
1299 | static void cpufreq_bp_resume(void) | 1415 | static void cpufreq_bp_resume(void) |
1300 | { | 1416 | { |
1301 | int ret = 0; | 1417 | int ret = 0; |
1418 | int (*resume)(struct cpufreq_policy *policy); | ||
1302 | 1419 | ||
1303 | int cpu = smp_processor_id(); | 1420 | int cpu = smp_processor_id(); |
1304 | struct cpufreq_policy *cpu_policy; | 1421 | struct cpufreq_policy *cpu_policy; |
@@ -1310,8 +1427,12 @@ static void cpufreq_bp_resume(void) | |||
1310 | if (!cpu_policy) | 1427 | if (!cpu_policy) |
1311 | return; | 1428 | return; |
1312 | 1429 | ||
1313 | if (cpufreq_driver->resume) { | 1430 | rcu_read_lock(); |
1314 | ret = cpufreq_driver->resume(cpu_policy); | 1431 | resume = rcu_dereference(cpufreq_driver)->resume; |
1432 | rcu_read_unlock(); | ||
1433 | |||
1434 | if (resume) { | ||
1435 | ret = resume(cpu_policy); | ||
1315 | if (ret) { | 1436 | if (ret) { |
1316 | printk(KERN_ERR "cpufreq: resume failed in ->resume " | 1437 | printk(KERN_ERR "cpufreq: resume failed in ->resume " |
1317 | "step on CPU %u\n", cpu_policy->cpu); | 1438 | "step on CPU %u\n", cpu_policy->cpu); |
@@ -1338,10 +1459,14 @@ static struct syscore_ops cpufreq_syscore_ops = { | |||
1338 | */ | 1459 | */ |
1339 | const char *cpufreq_get_current_driver(void) | 1460 | const char *cpufreq_get_current_driver(void) |
1340 | { | 1461 | { |
1341 | if (cpufreq_driver) | 1462 | struct cpufreq_driver *driver; |
1342 | return cpufreq_driver->name; | 1463 | const char *name = NULL; |
1343 | 1464 | rcu_read_lock(); | |
1344 | return NULL; | 1465 | driver = rcu_dereference(cpufreq_driver); |
1466 | if (driver) | ||
1467 | name = driver->name; | ||
1468 | rcu_read_unlock(); | ||
1469 | return name; | ||
1345 | } | 1470 | } |
1346 | EXPORT_SYMBOL_GPL(cpufreq_get_current_driver); | 1471 | EXPORT_SYMBOL_GPL(cpufreq_get_current_driver); |
1347 | 1472 | ||
@@ -1435,6 +1560,9 @@ int __cpufreq_driver_target(struct cpufreq_policy *policy, | |||
1435 | { | 1560 | { |
1436 | int retval = -EINVAL; | 1561 | int retval = -EINVAL; |
1437 | unsigned int old_target_freq = target_freq; | 1562 | unsigned int old_target_freq = target_freq; |
1563 | int (*target)(struct cpufreq_policy *policy, | ||
1564 | unsigned int target_freq, | ||
1565 | unsigned int relation); | ||
1438 | 1566 | ||
1439 | if (cpufreq_disabled()) | 1567 | if (cpufreq_disabled()) |
1440 | return -ENODEV; | 1568 | return -ENODEV; |
@@ -1451,8 +1579,11 @@ int __cpufreq_driver_target(struct cpufreq_policy *policy, | |||
1451 | if (target_freq == policy->cur) | 1579 | if (target_freq == policy->cur) |
1452 | return 0; | 1580 | return 0; |
1453 | 1581 | ||
1454 | if (cpufreq_driver->target) | 1582 | rcu_read_lock(); |
1455 | retval = cpufreq_driver->target(policy, target_freq, relation); | 1583 | target = rcu_dereference(cpufreq_driver)->target; |
1584 | rcu_read_unlock(); | ||
1585 | if (target) | ||
1586 | retval = target(policy, target_freq, relation); | ||
1456 | 1587 | ||
1457 | return retval; | 1588 | return retval; |
1458 | } | 1589 | } |
@@ -1485,18 +1616,24 @@ EXPORT_SYMBOL_GPL(cpufreq_driver_target); | |||
1485 | int __cpufreq_driver_getavg(struct cpufreq_policy *policy, unsigned int cpu) | 1616 | int __cpufreq_driver_getavg(struct cpufreq_policy *policy, unsigned int cpu) |
1486 | { | 1617 | { |
1487 | int ret = 0; | 1618 | int ret = 0; |
1619 | unsigned int (*getavg)(struct cpufreq_policy *policy, | ||
1620 | unsigned int cpu); | ||
1488 | 1621 | ||
1489 | if (cpufreq_disabled()) | 1622 | if (cpufreq_disabled()) |
1490 | return ret; | 1623 | return ret; |
1491 | 1624 | ||
1492 | if (!cpufreq_driver->getavg) | 1625 | rcu_read_lock(); |
1626 | getavg = rcu_dereference(cpufreq_driver)->getavg; | ||
1627 | rcu_read_unlock(); | ||
1628 | |||
1629 | if (!getavg) | ||
1493 | return 0; | 1630 | return 0; |
1494 | 1631 | ||
1495 | policy = cpufreq_cpu_get(policy->cpu); | 1632 | policy = cpufreq_cpu_get(policy->cpu); |
1496 | if (!policy) | 1633 | if (!policy) |
1497 | return -EINVAL; | 1634 | return -EINVAL; |
1498 | 1635 | ||
1499 | ret = cpufreq_driver->getavg(policy, cpu); | 1636 | ret = getavg(policy, cpu); |
1500 | 1637 | ||
1501 | cpufreq_cpu_put(policy); | 1638 | cpufreq_cpu_put(policy); |
1502 | return ret; | 1639 | return ret; |
@@ -1544,10 +1681,12 @@ static int __cpufreq_governor(struct cpufreq_policy *policy, | |||
1544 | policy->cpu, event); | 1681 | policy->cpu, event); |
1545 | ret = policy->governor->governor(policy, event); | 1682 | ret = policy->governor->governor(policy, event); |
1546 | 1683 | ||
1547 | if (event == CPUFREQ_GOV_START) | 1684 | if (!ret) { |
1548 | policy->governor->initialized++; | 1685 | if (event == CPUFREQ_GOV_POLICY_INIT) |
1549 | else if (event == CPUFREQ_GOV_STOP) | 1686 | policy->governor->initialized++; |
1550 | policy->governor->initialized--; | 1687 | else if (event == CPUFREQ_GOV_POLICY_EXIT) |
1688 | policy->governor->initialized--; | ||
1689 | } | ||
1551 | 1690 | ||
1552 | /* we keep one module reference alive for | 1691 | /* we keep one module reference alive for |
1553 | each CPU governed by this CPU */ | 1692 | each CPU governed by this CPU */ |
@@ -1651,7 +1790,10 @@ EXPORT_SYMBOL(cpufreq_get_policy); | |||
1651 | static int __cpufreq_set_policy(struct cpufreq_policy *data, | 1790 | static int __cpufreq_set_policy(struct cpufreq_policy *data, |
1652 | struct cpufreq_policy *policy) | 1791 | struct cpufreq_policy *policy) |
1653 | { | 1792 | { |
1654 | int ret = 0; | 1793 | int ret = 0, failed = 1; |
1794 | struct cpufreq_driver *driver; | ||
1795 | int (*verify)(struct cpufreq_policy *policy); | ||
1796 | int (*setpolicy)(struct cpufreq_policy *policy); | ||
1655 | 1797 | ||
1656 | pr_debug("setting new policy for CPU %u: %u - %u kHz\n", policy->cpu, | 1798 | pr_debug("setting new policy for CPU %u: %u - %u kHz\n", policy->cpu, |
1657 | policy->min, policy->max); | 1799 | policy->min, policy->max); |
@@ -1665,7 +1807,13 @@ static int __cpufreq_set_policy(struct cpufreq_policy *data, | |||
1665 | } | 1807 | } |
1666 | 1808 | ||
1667 | /* verify the cpu speed can be set within this limit */ | 1809 | /* verify the cpu speed can be set within this limit */ |
1668 | ret = cpufreq_driver->verify(policy); | 1810 | rcu_read_lock(); |
1811 | driver = rcu_dereference(cpufreq_driver); | ||
1812 | verify = driver->verify; | ||
1813 | setpolicy = driver->setpolicy; | ||
1814 | rcu_read_unlock(); | ||
1815 | |||
1816 | ret = verify(policy); | ||
1669 | if (ret) | 1817 | if (ret) |
1670 | goto error_out; | 1818 | goto error_out; |
1671 | 1819 | ||
@@ -1679,7 +1827,7 @@ static int __cpufreq_set_policy(struct cpufreq_policy *data, | |||
1679 | 1827 | ||
1680 | /* verify the cpu speed can be set within this limit, | 1828 | /* verify the cpu speed can be set within this limit, |
1681 | which might be different to the first one */ | 1829 | which might be different to the first one */ |
1682 | ret = cpufreq_driver->verify(policy); | 1830 | ret = verify(policy); |
1683 | if (ret) | 1831 | if (ret) |
1684 | goto error_out; | 1832 | goto error_out; |
1685 | 1833 | ||
@@ -1693,10 +1841,10 @@ static int __cpufreq_set_policy(struct cpufreq_policy *data, | |||
1693 | pr_debug("new min and max freqs are %u - %u kHz\n", | 1841 | pr_debug("new min and max freqs are %u - %u kHz\n", |
1694 | data->min, data->max); | 1842 | data->min, data->max); |
1695 | 1843 | ||
1696 | if (cpufreq_driver->setpolicy) { | 1844 | if (setpolicy) { |
1697 | data->policy = policy->policy; | 1845 | data->policy = policy->policy; |
1698 | pr_debug("setting range\n"); | 1846 | pr_debug("setting range\n"); |
1699 | ret = cpufreq_driver->setpolicy(policy); | 1847 | ret = setpolicy(policy); |
1700 | } else { | 1848 | } else { |
1701 | if (policy->governor != data->governor) { | 1849 | if (policy->governor != data->governor) { |
1702 | /* save old, working values */ | 1850 | /* save old, working values */ |
@@ -1705,18 +1853,31 @@ static int __cpufreq_set_policy(struct cpufreq_policy *data, | |||
1705 | pr_debug("governor switch\n"); | 1853 | pr_debug("governor switch\n"); |
1706 | 1854 | ||
1707 | /* end old governor */ | 1855 | /* end old governor */ |
1708 | if (data->governor) | 1856 | if (data->governor) { |
1709 | __cpufreq_governor(data, CPUFREQ_GOV_STOP); | 1857 | __cpufreq_governor(data, CPUFREQ_GOV_STOP); |
1858 | __cpufreq_governor(data, | ||
1859 | CPUFREQ_GOV_POLICY_EXIT); | ||
1860 | } | ||
1710 | 1861 | ||
1711 | /* start new governor */ | 1862 | /* start new governor */ |
1712 | data->governor = policy->governor; | 1863 | data->governor = policy->governor; |
1713 | if (__cpufreq_governor(data, CPUFREQ_GOV_START)) { | 1864 | if (!__cpufreq_governor(data, CPUFREQ_GOV_POLICY_INIT)) { |
1865 | if (!__cpufreq_governor(data, CPUFREQ_GOV_START)) | ||
1866 | failed = 0; | ||
1867 | else | ||
1868 | __cpufreq_governor(data, | ||
1869 | CPUFREQ_GOV_POLICY_EXIT); | ||
1870 | } | ||
1871 | |||
1872 | if (failed) { | ||
1714 | /* new governor failed, so re-start old one */ | 1873 | /* new governor failed, so re-start old one */ |
1715 | pr_debug("starting governor %s failed\n", | 1874 | pr_debug("starting governor %s failed\n", |
1716 | data->governor->name); | 1875 | data->governor->name); |
1717 | if (old_gov) { | 1876 | if (old_gov) { |
1718 | data->governor = old_gov; | 1877 | data->governor = old_gov; |
1719 | __cpufreq_governor(data, | 1878 | __cpufreq_governor(data, |
1879 | CPUFREQ_GOV_POLICY_INIT); | ||
1880 | __cpufreq_governor(data, | ||
1720 | CPUFREQ_GOV_START); | 1881 | CPUFREQ_GOV_START); |
1721 | } | 1882 | } |
1722 | ret = -EINVAL; | 1883 | ret = -EINVAL; |
@@ -1743,6 +1904,11 @@ int cpufreq_update_policy(unsigned int cpu) | |||
1743 | { | 1904 | { |
1744 | struct cpufreq_policy *data = cpufreq_cpu_get(cpu); | 1905 | struct cpufreq_policy *data = cpufreq_cpu_get(cpu); |
1745 | struct cpufreq_policy policy; | 1906 | struct cpufreq_policy policy; |
1907 | struct cpufreq_driver *driver; | ||
1908 | unsigned int (*get)(unsigned int cpu); | ||
1909 | int (*target)(struct cpufreq_policy *policy, | ||
1910 | unsigned int target_freq, | ||
1911 | unsigned int relation); | ||
1746 | int ret; | 1912 | int ret; |
1747 | 1913 | ||
1748 | if (!data) { | 1914 | if (!data) { |
@@ -1764,13 +1930,18 @@ int cpufreq_update_policy(unsigned int cpu) | |||
1764 | 1930 | ||
1765 | /* BIOS might change freq behind our back | 1931 | /* BIOS might change freq behind our back |
1766 | -> ask driver for current freq and notify governors about a change */ | 1932 | -> ask driver for current freq and notify governors about a change */ |
1767 | if (cpufreq_driver->get) { | 1933 | rcu_read_lock(); |
1768 | policy.cur = cpufreq_driver->get(cpu); | 1934 | driver = rcu_access_pointer(cpufreq_driver); |
1935 | get = driver->get; | ||
1936 | target = driver->target; | ||
1937 | rcu_read_unlock(); | ||
1938 | if (get) { | ||
1939 | policy.cur = get(cpu); | ||
1769 | if (!data->cur) { | 1940 | if (!data->cur) { |
1770 | pr_debug("Driver did not initialize current freq"); | 1941 | pr_debug("Driver did not initialize current freq"); |
1771 | data->cur = policy.cur; | 1942 | data->cur = policy.cur; |
1772 | } else { | 1943 | } else { |
1773 | if (data->cur != policy.cur && cpufreq_driver->target) | 1944 | if (data->cur != policy.cur && target) |
1774 | cpufreq_out_of_sync(cpu, data->cur, | 1945 | cpufreq_out_of_sync(cpu, data->cur, |
1775 | policy.cur); | 1946 | policy.cur); |
1776 | } | 1947 | } |
@@ -1848,19 +2019,20 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data) | |||
1848 | if (driver_data->setpolicy) | 2019 | if (driver_data->setpolicy) |
1849 | driver_data->flags |= CPUFREQ_CONST_LOOPS; | 2020 | driver_data->flags |= CPUFREQ_CONST_LOOPS; |
1850 | 2021 | ||
1851 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 2022 | write_lock_irqsave(&cpufreq_driver_lock, flags); |
1852 | if (cpufreq_driver) { | 2023 | if (rcu_access_pointer(cpufreq_driver)) { |
1853 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 2024 | write_unlock_irqrestore(&cpufreq_driver_lock, flags); |
1854 | return -EBUSY; | 2025 | return -EBUSY; |
1855 | } | 2026 | } |
1856 | cpufreq_driver = driver_data; | 2027 | rcu_assign_pointer(cpufreq_driver, driver_data); |
1857 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 2028 | write_unlock_irqrestore(&cpufreq_driver_lock, flags); |
2029 | synchronize_rcu(); | ||
1858 | 2030 | ||
1859 | ret = subsys_interface_register(&cpufreq_interface); | 2031 | ret = subsys_interface_register(&cpufreq_interface); |
1860 | if (ret) | 2032 | if (ret) |
1861 | goto err_null_driver; | 2033 | goto err_null_driver; |
1862 | 2034 | ||
1863 | if (!(cpufreq_driver->flags & CPUFREQ_STICKY)) { | 2035 | if (!(driver_data->flags & CPUFREQ_STICKY)) { |
1864 | int i; | 2036 | int i; |
1865 | ret = -ENODEV; | 2037 | ret = -ENODEV; |
1866 | 2038 | ||
@@ -1886,9 +2058,10 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data) | |||
1886 | err_if_unreg: | 2058 | err_if_unreg: |
1887 | subsys_interface_unregister(&cpufreq_interface); | 2059 | subsys_interface_unregister(&cpufreq_interface); |
1888 | err_null_driver: | 2060 | err_null_driver: |
1889 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 2061 | write_lock_irqsave(&cpufreq_driver_lock, flags); |
1890 | cpufreq_driver = NULL; | 2062 | rcu_assign_pointer(cpufreq_driver, NULL); |
1891 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 2063 | write_unlock_irqrestore(&cpufreq_driver_lock, flags); |
2064 | synchronize_rcu(); | ||
1892 | return ret; | 2065 | return ret; |
1893 | } | 2066 | } |
1894 | EXPORT_SYMBOL_GPL(cpufreq_register_driver); | 2067 | EXPORT_SYMBOL_GPL(cpufreq_register_driver); |
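Registration is the write side of the same RCU conversion: the global pointer is published with rcu_assign_pointer() under what is now a rwlock, and both the error path and the unregister path clear it the same way and then wait in synchronize_rcu() so no reader can still be dereferencing the old value. A self-contained sketch of the publish/retire pattern; everything except the RCU and rwlock primitives is illustrative.

    #include <linux/spinlock.h>
    #include <linux/rcupdate.h>

    struct my_driver { const char *name; };    /* illustrative payload */

    static DEFINE_RWLOCK(drv_lock);
    static struct my_driver __rcu *cur_drv;

    int publish_driver(struct my_driver *drv)
    {
            unsigned long flags;

            write_lock_irqsave(&drv_lock, flags);
            if (rcu_access_pointer(cur_drv)) {      /* already registered */
                    write_unlock_irqrestore(&drv_lock, flags);
                    return -EBUSY;
            }
            rcu_assign_pointer(cur_drv, drv);       /* publish */
            write_unlock_irqrestore(&drv_lock, flags);
            synchronize_rcu();      /* let readers of the old value finish */
            return 0;
    }

    void retire_driver(void)
    {
            unsigned long flags;

            write_lock_irqsave(&drv_lock, flags);
            rcu_assign_pointer(cur_drv, NULL);
            write_unlock_irqrestore(&drv_lock, flags);
            synchronize_rcu();      /* wait out every in-flight reader */
    }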
@@ -1905,18 +2078,25 @@ EXPORT_SYMBOL_GPL(cpufreq_register_driver); | |||
1905 | int cpufreq_unregister_driver(struct cpufreq_driver *driver) | 2078 | int cpufreq_unregister_driver(struct cpufreq_driver *driver) |
1906 | { | 2079 | { |
1907 | unsigned long flags; | 2080 | unsigned long flags; |
2081 | struct cpufreq_driver *old_driver; | ||
1908 | 2082 | ||
1909 | if (!cpufreq_driver || (driver != cpufreq_driver)) | 2083 | rcu_read_lock(); |
2084 | old_driver = rcu_access_pointer(cpufreq_driver); | ||
2085 | if (!old_driver || (driver != old_driver)) { | ||
2086 | rcu_read_unlock(); | ||
1910 | return -EINVAL; | 2087 | return -EINVAL; |
2088 | } | ||
2089 | rcu_read_unlock(); | ||
1911 | 2090 | ||
1912 | pr_debug("unregistering driver %s\n", driver->name); | 2091 | pr_debug("unregistering driver %s\n", driver->name); |
1913 | 2092 | ||
1914 | subsys_interface_unregister(&cpufreq_interface); | 2093 | subsys_interface_unregister(&cpufreq_interface); |
1915 | unregister_hotcpu_notifier(&cpufreq_cpu_notifier); | 2094 | unregister_hotcpu_notifier(&cpufreq_cpu_notifier); |
1916 | 2095 | ||
1917 | spin_lock_irqsave(&cpufreq_driver_lock, flags); | 2096 | write_lock_irqsave(&cpufreq_driver_lock, flags); |
1918 | cpufreq_driver = NULL; | 2097 | rcu_assign_pointer(cpufreq_driver, NULL); |
1919 | spin_unlock_irqrestore(&cpufreq_driver_lock, flags); | 2098 | write_unlock_irqrestore(&cpufreq_driver_lock, flags); |
2099 | synchronize_rcu(); | ||
1920 | 2100 | ||
1921 | return 0; | 2101 | return 0; |
1922 | } | 2102 | } |
diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c index 4fd0006b1291..0ceb2eff5a7e 100644 --- a/drivers/cpufreq/cpufreq_conservative.c +++ b/drivers/cpufreq/cpufreq_conservative.c | |||
@@ -20,6 +20,7 @@ | |||
20 | #include <linux/mutex.h> | 20 | #include <linux/mutex.h> |
21 | #include <linux/notifier.h> | 21 | #include <linux/notifier.h> |
22 | #include <linux/percpu-defs.h> | 22 | #include <linux/percpu-defs.h> |
23 | #include <linux/slab.h> | ||
23 | #include <linux/sysfs.h> | 24 | #include <linux/sysfs.h> |
24 | #include <linux/types.h> | 25 | #include <linux/types.h> |
25 | 26 | ||
@@ -28,25 +29,29 @@ | |||
28 | /* Conservative governor macros */ | 29 | /* Conservative governor macros */ |
29 | #define DEF_FREQUENCY_UP_THRESHOLD (80) | 30 | #define DEF_FREQUENCY_UP_THRESHOLD (80) |
30 | #define DEF_FREQUENCY_DOWN_THRESHOLD (20) | 31 | #define DEF_FREQUENCY_DOWN_THRESHOLD (20) |
32 | #define DEF_FREQUENCY_STEP (5) | ||
31 | #define DEF_SAMPLING_DOWN_FACTOR (1) | 33 | #define DEF_SAMPLING_DOWN_FACTOR (1) |
32 | #define MAX_SAMPLING_DOWN_FACTOR (10) | 34 | #define MAX_SAMPLING_DOWN_FACTOR (10) |
33 | 35 | ||
34 | static struct dbs_data cs_dbs_data; | ||
35 | static DEFINE_PER_CPU(struct cs_cpu_dbs_info_s, cs_cpu_dbs_info); | 36 | static DEFINE_PER_CPU(struct cs_cpu_dbs_info_s, cs_cpu_dbs_info); |
36 | 37 | ||
37 | static struct cs_dbs_tuners cs_tuners = { | 38 | static inline unsigned int get_freq_target(struct cs_dbs_tuners *cs_tuners, |
38 | .up_threshold = DEF_FREQUENCY_UP_THRESHOLD, | 39 | struct cpufreq_policy *policy) |
39 | .down_threshold = DEF_FREQUENCY_DOWN_THRESHOLD, | 40 | { |
40 | .sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR, | 41 | unsigned int freq_target = (cs_tuners->freq_step * policy->max) / 100; |
41 | .ignore_nice = 0, | 42 | |
42 | .freq_step = 5, | 43 | /* max freq cannot be less than 100. But who knows... */ |
43 | }; | 44 | if (unlikely(freq_target == 0)) |
45 | freq_target = DEF_FREQUENCY_STEP; | ||
46 | |||
47 | return freq_target; | ||
48 | } | ||
44 | 49 | ||
45 | /* | 50 | /* |
46 | * Every sampling_rate, we check, if current idle time is less than 20% | 51 | * Every sampling_rate, we check, if current idle time is less than 20% |
47 | * (default), then we try to increase frequency Every sampling_rate * | 52 | * (default), then we try to increase frequency. Every sampling_rate * |
48 | * sampling_down_factor, we check, if current idle time is more than 80%, then | 53 | * sampling_down_factor, we check, if current idle time is more than 80% |
49 | * we try to decrease frequency | 54 | * (default), then we try to decrease frequency |
50 | * | 55 | * |
51 | * Any frequency increase takes it to the maximum frequency. Frequency reduction | 56 | * Any frequency increase takes it to the maximum frequency. Frequency reduction |
52 | * happens at minimum steps of 5% (default) of maximum frequency | 57 | * happens at minimum steps of 5% (default) of maximum frequency |
@@ -55,30 +60,25 @@ static void cs_check_cpu(int cpu, unsigned int load) | |||
55 | { | 60 | { |
56 | struct cs_cpu_dbs_info_s *dbs_info = &per_cpu(cs_cpu_dbs_info, cpu); | 61 | struct cs_cpu_dbs_info_s *dbs_info = &per_cpu(cs_cpu_dbs_info, cpu); |
57 | struct cpufreq_policy *policy = dbs_info->cdbs.cur_policy; | 62 | struct cpufreq_policy *policy = dbs_info->cdbs.cur_policy; |
58 | unsigned int freq_target; | 63 | struct dbs_data *dbs_data = policy->governor_data; |
64 | struct cs_dbs_tuners *cs_tuners = dbs_data->tuners; | ||
59 | 65 | ||
60 | /* | 66 | /* |
61 | * break out if we 'cannot' reduce the speed as the user might | 67 | * break out if we 'cannot' reduce the speed as the user might |
62 | * want freq_step to be zero | 68 | * want freq_step to be zero |
63 | */ | 69 | */ |
64 | if (cs_tuners.freq_step == 0) | 70 | if (cs_tuners->freq_step == 0) |
65 | return; | 71 | return; |
66 | 72 | ||
67 | /* Check for frequency increase */ | 73 | /* Check for frequency increase */ |
68 | if (load > cs_tuners.up_threshold) { | 74 | if (load > cs_tuners->up_threshold) { |
69 | dbs_info->down_skip = 0; | 75 | dbs_info->down_skip = 0; |
70 | 76 | ||
71 | /* if we are already at full speed then break out early */ | 77 | /* if we are already at full speed then break out early */ |
72 | if (dbs_info->requested_freq == policy->max) | 78 | if (dbs_info->requested_freq == policy->max) |
73 | return; | 79 | return; |
74 | 80 | ||
75 | freq_target = (cs_tuners.freq_step * policy->max) / 100; | 81 | dbs_info->requested_freq += get_freq_target(cs_tuners, policy); |
76 | |||
77 | /* max freq cannot be less than 100. But who knows.... */ | ||
78 | if (unlikely(freq_target == 0)) | ||
79 | freq_target = 5; | ||
80 | |||
81 | dbs_info->requested_freq += freq_target; | ||
82 | if (dbs_info->requested_freq > policy->max) | 82 | if (dbs_info->requested_freq > policy->max) |
83 | dbs_info->requested_freq = policy->max; | 83 | dbs_info->requested_freq = policy->max; |
84 | 84 | ||
@@ -87,45 +87,48 @@ static void cs_check_cpu(int cpu, unsigned int load) | |||
87 | return; | 87 | return; |
88 | } | 88 | } |
89 | 89 | ||
90 | /* | 90 | /* if sampling_down_factor is active break out early */ |
91 | * The optimal frequency is the frequency that is the lowest that can | 91 | if (++dbs_info->down_skip < cs_tuners->sampling_down_factor) |
92 | * support the current CPU usage without triggering the up policy. To be | 92 | return; |
93 | * safe, we focus 10 points under the threshold. | 93 | dbs_info->down_skip = 0; |
94 | */ | ||
95 | if (load < (cs_tuners.down_threshold - 10)) { | ||
96 | freq_target = (cs_tuners.freq_step * policy->max) / 100; | ||
97 | |||
98 | dbs_info->requested_freq -= freq_target; | ||
99 | if (dbs_info->requested_freq < policy->min) | ||
100 | dbs_info->requested_freq = policy->min; | ||
101 | 94 | ||
95 | /* Check for frequency decrease */ | ||
96 | if (load < cs_tuners->down_threshold) { | ||
102 | /* | 97 | /* |
103 | * if we cannot reduce the frequency anymore, break out early | 98 | * if we cannot reduce the frequency anymore, break out early |
104 | */ | 99 | */ |
105 | if (policy->cur == policy->min) | 100 | if (policy->cur == policy->min) |
106 | return; | 101 | return; |
107 | 102 | ||
103 | dbs_info->requested_freq -= get_freq_target(cs_tuners, policy); | ||
104 | if (dbs_info->requested_freq < policy->min) | ||
105 | dbs_info->requested_freq = policy->min; | ||
106 | |||
108 | __cpufreq_driver_target(policy, dbs_info->requested_freq, | 107 | __cpufreq_driver_target(policy, dbs_info->requested_freq, |
109 | CPUFREQ_RELATION_H); | 108 | CPUFREQ_RELATION_L); |
110 | return; | 109 | return; |
111 | } | 110 | } |
112 | } | 111 | } |
113 | 112 | ||
114 | static void cs_dbs_timer(struct work_struct *work) | 113 | static void cs_dbs_timer(struct work_struct *work) |
115 | { | 114 | { |
116 | struct delayed_work *dw = to_delayed_work(work); | ||
117 | struct cs_cpu_dbs_info_s *dbs_info = container_of(work, | 115 | struct cs_cpu_dbs_info_s *dbs_info = container_of(work, |
118 | struct cs_cpu_dbs_info_s, cdbs.work.work); | 116 | struct cs_cpu_dbs_info_s, cdbs.work.work); |
119 | unsigned int cpu = dbs_info->cdbs.cur_policy->cpu; | 117 | unsigned int cpu = dbs_info->cdbs.cur_policy->cpu; |
120 | struct cs_cpu_dbs_info_s *core_dbs_info = &per_cpu(cs_cpu_dbs_info, | 118 | struct cs_cpu_dbs_info_s *core_dbs_info = &per_cpu(cs_cpu_dbs_info, |
121 | cpu); | 119 | cpu); |
122 | int delay = delay_for_sampling_rate(cs_tuners.sampling_rate); | 120 | struct dbs_data *dbs_data = dbs_info->cdbs.cur_policy->governor_data; |
121 | struct cs_dbs_tuners *cs_tuners = dbs_data->tuners; | ||
122 | int delay = delay_for_sampling_rate(cs_tuners->sampling_rate); | ||
123 | bool modify_all = true; | ||
123 | 124 | ||
124 | mutex_lock(&core_dbs_info->cdbs.timer_mutex); | 125 | mutex_lock(&core_dbs_info->cdbs.timer_mutex); |
125 | if (need_load_eval(&core_dbs_info->cdbs, cs_tuners.sampling_rate)) | 126 | if (!need_load_eval(&core_dbs_info->cdbs, cs_tuners->sampling_rate)) |
126 | dbs_check_cpu(&cs_dbs_data, cpu); | 127 | modify_all = false; |
128 | else | ||
129 | dbs_check_cpu(dbs_data, cpu); | ||
127 | 130 | ||
128 | schedule_delayed_work_on(smp_processor_id(), dw, delay); | 131 | gov_queue_work(dbs_data, dbs_info->cdbs.cur_policy, delay, modify_all); |
129 | mutex_unlock(&core_dbs_info->cdbs.timer_mutex); | 132 | mutex_unlock(&core_dbs_info->cdbs.timer_mutex); |
130 | } | 133 | } |
131 | 134 | ||
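Every reworked conservative callback now reaches its tunables through the policy instead of the old file-scope cs_tuners. The chain is written out below as one stand-alone helper for clarity; cs_tuners_for_cpu() is only an illustration and is not a function added by this commit.

static struct cs_dbs_tuners *cs_tuners_for_cpu(int cpu)
{
	struct cs_cpu_dbs_info_s *dbs_info = &per_cpu(cs_cpu_dbs_info, cpu);
	struct cpufreq_policy *policy = dbs_info->cdbs.cur_policy;
	struct dbs_data *dbs_data = policy->governor_data;

	return dbs_data->tuners;	/* per-policy conservative tunables */
}

The same three hops appear inline in cs_check_cpu() and cs_dbs_timer() above.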
@@ -154,16 +157,12 @@ static int dbs_cpufreq_notifier(struct notifier_block *nb, unsigned long val, | |||
154 | } | 157 | } |
155 | 158 | ||
156 | /************************** sysfs interface ************************/ | 159 | /************************** sysfs interface ************************/ |
157 | static ssize_t show_sampling_rate_min(struct kobject *kobj, | 160 | static struct common_dbs_data cs_dbs_cdata; |
158 | struct attribute *attr, char *buf) | ||
159 | { | ||
160 | return sprintf(buf, "%u\n", cs_dbs_data.min_sampling_rate); | ||
161 | } | ||
162 | 161 | ||
163 | static ssize_t store_sampling_down_factor(struct kobject *a, | 162 | static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data, |
164 | struct attribute *b, | 163 | const char *buf, size_t count) |
165 | const char *buf, size_t count) | ||
166 | { | 164 | { |
165 | struct cs_dbs_tuners *cs_tuners = dbs_data->tuners; | ||
167 | unsigned int input; | 166 | unsigned int input; |
168 | int ret; | 167 | int ret; |
169 | ret = sscanf(buf, "%u", &input); | 168 | ret = sscanf(buf, "%u", &input); |
@@ -171,13 +170,14 @@ static ssize_t store_sampling_down_factor(struct kobject *a, | |||
171 | if (ret != 1 || input > MAX_SAMPLING_DOWN_FACTOR || input < 1) | 170 | if (ret != 1 || input > MAX_SAMPLING_DOWN_FACTOR || input < 1) |
172 | return -EINVAL; | 171 | return -EINVAL; |
173 | 172 | ||
174 | cs_tuners.sampling_down_factor = input; | 173 | cs_tuners->sampling_down_factor = input; |
175 | return count; | 174 | return count; |
176 | } | 175 | } |
177 | 176 | ||
178 | static ssize_t store_sampling_rate(struct kobject *a, struct attribute *b, | 177 | static ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf, |
179 | const char *buf, size_t count) | 178 | size_t count) |
180 | { | 179 | { |
180 | struct cs_dbs_tuners *cs_tuners = dbs_data->tuners; | ||
181 | unsigned int input; | 181 | unsigned int input; |
182 | int ret; | 182 | int ret; |
183 | ret = sscanf(buf, "%u", &input); | 183 | ret = sscanf(buf, "%u", &input); |
@@ -185,43 +185,46 @@ static ssize_t store_sampling_rate(struct kobject *a, struct attribute *b, | |||
185 | if (ret != 1) | 185 | if (ret != 1) |
186 | return -EINVAL; | 186 | return -EINVAL; |
187 | 187 | ||
188 | cs_tuners.sampling_rate = max(input, cs_dbs_data.min_sampling_rate); | 188 | cs_tuners->sampling_rate = max(input, dbs_data->min_sampling_rate); |
189 | return count; | 189 | return count; |
190 | } | 190 | } |
191 | 191 | ||
192 | static ssize_t store_up_threshold(struct kobject *a, struct attribute *b, | 192 | static ssize_t store_up_threshold(struct dbs_data *dbs_data, const char *buf, |
193 | const char *buf, size_t count) | 193 | size_t count) |
194 | { | 194 | { |
195 | struct cs_dbs_tuners *cs_tuners = dbs_data->tuners; | ||
195 | unsigned int input; | 196 | unsigned int input; |
196 | int ret; | 197 | int ret; |
197 | ret = sscanf(buf, "%u", &input); | 198 | ret = sscanf(buf, "%u", &input); |
198 | 199 | ||
199 | if (ret != 1 || input > 100 || input <= cs_tuners.down_threshold) | 200 | if (ret != 1 || input > 100 || input <= cs_tuners->down_threshold) |
200 | return -EINVAL; | 201 | return -EINVAL; |
201 | 202 | ||
202 | cs_tuners.up_threshold = input; | 203 | cs_tuners->up_threshold = input; |
203 | return count; | 204 | return count; |
204 | } | 205 | } |
205 | 206 | ||
206 | static ssize_t store_down_threshold(struct kobject *a, struct attribute *b, | 207 | static ssize_t store_down_threshold(struct dbs_data *dbs_data, const char *buf, |
207 | const char *buf, size_t count) | 208 | size_t count) |
208 | { | 209 | { |
210 | struct cs_dbs_tuners *cs_tuners = dbs_data->tuners; | ||
209 | unsigned int input; | 211 | unsigned int input; |
210 | int ret; | 212 | int ret; |
211 | ret = sscanf(buf, "%u", &input); | 213 | ret = sscanf(buf, "%u", &input); |
212 | 214 | ||
213 | /* cannot be lower than 11 otherwise freq will not fall */ | 215 | /* cannot be lower than 11 otherwise freq will not fall */ |
214 | if (ret != 1 || input < 11 || input > 100 || | 216 | if (ret != 1 || input < 11 || input > 100 || |
215 | input >= cs_tuners.up_threshold) | 217 | input >= cs_tuners->up_threshold) |
216 | return -EINVAL; | 218 | return -EINVAL; |
217 | 219 | ||
218 | cs_tuners.down_threshold = input; | 220 | cs_tuners->down_threshold = input; |
219 | return count; | 221 | return count; |
220 | } | 222 | } |
221 | 223 | ||
222 | static ssize_t store_ignore_nice_load(struct kobject *a, struct attribute *b, | 224 | static ssize_t store_ignore_nice(struct dbs_data *dbs_data, const char *buf, |
223 | const char *buf, size_t count) | 225 | size_t count) |
224 | { | 226 | { |
227 | struct cs_dbs_tuners *cs_tuners = dbs_data->tuners; | ||
225 | unsigned int input, j; | 228 | unsigned int input, j; |
226 | int ret; | 229 | int ret; |
227 | 230 | ||
@@ -232,27 +235,28 @@ static ssize_t store_ignore_nice_load(struct kobject *a, struct attribute *b, | |||
232 | if (input > 1) | 235 | if (input > 1) |
233 | input = 1; | 236 | input = 1; |
234 | 237 | ||
235 | if (input == cs_tuners.ignore_nice) /* nothing to do */ | 238 | if (input == cs_tuners->ignore_nice) /* nothing to do */ |
236 | return count; | 239 | return count; |
237 | 240 | ||
238 | cs_tuners.ignore_nice = input; | 241 | cs_tuners->ignore_nice = input; |
239 | 242 | ||
240 | /* we need to re-evaluate prev_cpu_idle */ | 243 | /* we need to re-evaluate prev_cpu_idle */ |
241 | for_each_online_cpu(j) { | 244 | for_each_online_cpu(j) { |
242 | struct cs_cpu_dbs_info_s *dbs_info; | 245 | struct cs_cpu_dbs_info_s *dbs_info; |
243 | dbs_info = &per_cpu(cs_cpu_dbs_info, j); | 246 | dbs_info = &per_cpu(cs_cpu_dbs_info, j); |
244 | dbs_info->cdbs.prev_cpu_idle = get_cpu_idle_time(j, | 247 | dbs_info->cdbs.prev_cpu_idle = get_cpu_idle_time(j, |
245 | &dbs_info->cdbs.prev_cpu_wall); | 248 | &dbs_info->cdbs.prev_cpu_wall, 0); |
246 | if (cs_tuners.ignore_nice) | 249 | if (cs_tuners->ignore_nice) |
247 | dbs_info->cdbs.prev_cpu_nice = | 250 | dbs_info->cdbs.prev_cpu_nice = |
248 | kcpustat_cpu(j).cpustat[CPUTIME_NICE]; | 251 | kcpustat_cpu(j).cpustat[CPUTIME_NICE]; |
249 | } | 252 | } |
250 | return count; | 253 | return count; |
251 | } | 254 | } |
252 | 255 | ||
253 | static ssize_t store_freq_step(struct kobject *a, struct attribute *b, | 256 | static ssize_t store_freq_step(struct dbs_data *dbs_data, const char *buf, |
254 | const char *buf, size_t count) | 257 | size_t count) |
255 | { | 258 | { |
259 | struct cs_dbs_tuners *cs_tuners = dbs_data->tuners; | ||
256 | unsigned int input; | 260 | unsigned int input; |
257 | int ret; | 261 | int ret; |
258 | ret = sscanf(buf, "%u", &input); | 262 | ret = sscanf(buf, "%u", &input); |
@@ -267,43 +271,88 @@ static ssize_t store_freq_step(struct kobject *a, struct attribute *b, | |||
267 | * no need to test here if freq_step is zero as the user might actually | 271 | * no need to test here if freq_step is zero as the user might actually |
268 | * want this, they would be crazy though :) | 272 | * want this, they would be crazy though :) |
269 | */ | 273 | */ |
270 | cs_tuners.freq_step = input; | 274 | cs_tuners->freq_step = input; |
271 | return count; | 275 | return count; |
272 | } | 276 | } |
273 | 277 | ||
274 | show_one(cs, sampling_rate, sampling_rate); | 278 | show_store_one(cs, sampling_rate); |
275 | show_one(cs, sampling_down_factor, sampling_down_factor); | 279 | show_store_one(cs, sampling_down_factor); |
276 | show_one(cs, up_threshold, up_threshold); | 280 | show_store_one(cs, up_threshold); |
277 | show_one(cs, down_threshold, down_threshold); | 281 | show_store_one(cs, down_threshold); |
278 | show_one(cs, ignore_nice_load, ignore_nice); | 282 | show_store_one(cs, ignore_nice); |
279 | show_one(cs, freq_step, freq_step); | 283 | show_store_one(cs, freq_step); |
280 | 284 | declare_show_sampling_rate_min(cs); | |
281 | define_one_global_rw(sampling_rate); | 285 | |
282 | define_one_global_rw(sampling_down_factor); | 286 | gov_sys_pol_attr_rw(sampling_rate); |
283 | define_one_global_rw(up_threshold); | 287 | gov_sys_pol_attr_rw(sampling_down_factor); |
284 | define_one_global_rw(down_threshold); | 288 | gov_sys_pol_attr_rw(up_threshold); |
285 | define_one_global_rw(ignore_nice_load); | 289 | gov_sys_pol_attr_rw(down_threshold); |
286 | define_one_global_rw(freq_step); | 290 | gov_sys_pol_attr_rw(ignore_nice); |
287 | define_one_global_ro(sampling_rate_min); | 291 | gov_sys_pol_attr_rw(freq_step); |
288 | 292 | gov_sys_pol_attr_ro(sampling_rate_min); | |
289 | static struct attribute *dbs_attributes[] = { | 293 | |
290 | &sampling_rate_min.attr, | 294 | static struct attribute *dbs_attributes_gov_sys[] = { |
291 | &sampling_rate.attr, | 295 | &sampling_rate_min_gov_sys.attr, |
292 | &sampling_down_factor.attr, | 296 | &sampling_rate_gov_sys.attr, |
293 | &up_threshold.attr, | 297 | &sampling_down_factor_gov_sys.attr, |
294 | &down_threshold.attr, | 298 | &up_threshold_gov_sys.attr, |
295 | &ignore_nice_load.attr, | 299 | &down_threshold_gov_sys.attr, |
296 | &freq_step.attr, | 300 | &ignore_nice_gov_sys.attr, |
301 | &freq_step_gov_sys.attr, | ||
297 | NULL | 302 | NULL |
298 | }; | 303 | }; |
299 | 304 | ||
300 | static struct attribute_group cs_attr_group = { | 305 | static struct attribute_group cs_attr_group_gov_sys = { |
301 | .attrs = dbs_attributes, | 306 | .attrs = dbs_attributes_gov_sys, |
307 | .name = "conservative", | ||
308 | }; | ||
309 | |||
310 | static struct attribute *dbs_attributes_gov_pol[] = { | ||
311 | &sampling_rate_min_gov_pol.attr, | ||
312 | &sampling_rate_gov_pol.attr, | ||
313 | &sampling_down_factor_gov_pol.attr, | ||
314 | &up_threshold_gov_pol.attr, | ||
315 | &down_threshold_gov_pol.attr, | ||
316 | &ignore_nice_gov_pol.attr, | ||
317 | &freq_step_gov_pol.attr, | ||
318 | NULL | ||
319 | }; | ||
320 | |||
321 | static struct attribute_group cs_attr_group_gov_pol = { | ||
322 | .attrs = dbs_attributes_gov_pol, | ||
302 | .name = "conservative", | 323 | .name = "conservative", |
303 | }; | 324 | }; |
304 | 325 | ||
305 | /************************** sysfs end ************************/ | 326 | /************************** sysfs end ************************/ |
306 | 327 | ||
328 | static int cs_init(struct dbs_data *dbs_data) | ||
329 | { | ||
330 | struct cs_dbs_tuners *tuners; | ||
331 | |||
332 | tuners = kzalloc(sizeof(struct cs_dbs_tuners), GFP_KERNEL); | ||
333 | if (!tuners) { | ||
334 | pr_err("%s: kzalloc failed\n", __func__); | ||
335 | return -ENOMEM; | ||
336 | } | ||
337 | |||
338 | tuners->up_threshold = DEF_FREQUENCY_UP_THRESHOLD; | ||
339 | tuners->down_threshold = DEF_FREQUENCY_DOWN_THRESHOLD; | ||
340 | tuners->sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR; | ||
341 | tuners->ignore_nice = 0; | ||
342 | tuners->freq_step = DEF_FREQUENCY_STEP; | ||
343 | |||
344 | dbs_data->tuners = tuners; | ||
345 | dbs_data->min_sampling_rate = MIN_SAMPLING_RATE_RATIO * | ||
346 | jiffies_to_usecs(10); | ||
347 | mutex_init(&dbs_data->mutex); | ||
348 | return 0; | ||
349 | } | ||
350 | |||
351 | static void cs_exit(struct dbs_data *dbs_data) | ||
352 | { | ||
353 | kfree(dbs_data->tuners); | ||
354 | } | ||
355 | |||
307 | define_get_cpu_dbs_routines(cs_cpu_dbs_info); | 356 | define_get_cpu_dbs_routines(cs_cpu_dbs_info); |
308 | 357 | ||
309 | static struct notifier_block cs_cpufreq_notifier_block = { | 358 | static struct notifier_block cs_cpufreq_notifier_block = { |
@@ -314,21 +363,23 @@ static struct cs_ops cs_ops = { | |||
314 | .notifier_block = &cs_cpufreq_notifier_block, | 363 | .notifier_block = &cs_cpufreq_notifier_block, |
315 | }; | 364 | }; |
316 | 365 | ||
317 | static struct dbs_data cs_dbs_data = { | 366 | static struct common_dbs_data cs_dbs_cdata = { |
318 | .governor = GOV_CONSERVATIVE, | 367 | .governor = GOV_CONSERVATIVE, |
319 | .attr_group = &cs_attr_group, | 368 | .attr_group_gov_sys = &cs_attr_group_gov_sys, |
320 | .tuners = &cs_tuners, | 369 | .attr_group_gov_pol = &cs_attr_group_gov_pol, |
321 | .get_cpu_cdbs = get_cpu_cdbs, | 370 | .get_cpu_cdbs = get_cpu_cdbs, |
322 | .get_cpu_dbs_info_s = get_cpu_dbs_info_s, | 371 | .get_cpu_dbs_info_s = get_cpu_dbs_info_s, |
323 | .gov_dbs_timer = cs_dbs_timer, | 372 | .gov_dbs_timer = cs_dbs_timer, |
324 | .gov_check_cpu = cs_check_cpu, | 373 | .gov_check_cpu = cs_check_cpu, |
325 | .gov_ops = &cs_ops, | 374 | .gov_ops = &cs_ops, |
375 | .init = cs_init, | ||
376 | .exit = cs_exit, | ||
326 | }; | 377 | }; |
327 | 378 | ||
328 | static int cs_cpufreq_governor_dbs(struct cpufreq_policy *policy, | 379 | static int cs_cpufreq_governor_dbs(struct cpufreq_policy *policy, |
329 | unsigned int event) | 380 | unsigned int event) |
330 | { | 381 | { |
331 | return cpufreq_governor_dbs(&cs_dbs_data, policy, event); | 382 | return cpufreq_governor_dbs(policy, &cs_dbs_cdata, event); |
332 | } | 383 | } |
333 | 384 | ||
334 | #ifndef CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE | 385 | #ifndef CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE |
@@ -343,7 +394,6 @@ struct cpufreq_governor cpufreq_gov_conservative = { | |||
343 | 394 | ||
344 | static int __init cpufreq_gov_dbs_init(void) | 395 | static int __init cpufreq_gov_dbs_init(void) |
345 | { | 396 | { |
346 | mutex_init(&cs_dbs_data.mutex); | ||
347 | return cpufreq_register_governor(&cpufreq_gov_conservative); | 397 | return cpufreq_register_governor(&cpufreq_gov_conservative); |
348 | } | 398 | } |
349 | 399 | ||
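With the static cs_tuners instance gone, each governor instance allocates its own tunables in .init() and frees them in .exit(), as cs_init()/cs_exit() do above. A reduced sketch of that ownership model follows, using stand-in type names (my_tuners, my_gov_data); the real structures are the cs_dbs_tuners/dbs_data pair shown in the diff.

#include <linux/errno.h>
#include <linux/slab.h>

struct my_tuners {
	unsigned int up_threshold;
	unsigned int down_threshold;
	unsigned int freq_step;
};

struct my_gov_data {
	void *tuners;			/* owned by this governor instance */
};

static int my_gov_init(struct my_gov_data *gd)
{
	struct my_tuners *t = kzalloc(sizeof(*t), GFP_KERNEL);

	if (!t)
		return -ENOMEM;

	t->up_threshold = 80;		/* same defaults as the macros above */
	t->down_threshold = 20;
	t->freq_step = 5;
	gd->tuners = t;
	return 0;
}

static void my_gov_exit(struct my_gov_data *gd)
{
	kfree(gd->tuners);		/* pairs with the kzalloc() in init */
}

On drivers that set have_governor_per_policy(), this init/exit pair runs once per policy, which is what makes independent per-policy tunables possible.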
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c index 5a76086ff09b..443442df113b 100644 --- a/drivers/cpufreq/cpufreq_governor.c +++ b/drivers/cpufreq/cpufreq_governor.c | |||
@@ -22,12 +22,29 @@ | |||
22 | #include <linux/export.h> | 22 | #include <linux/export.h> |
23 | #include <linux/kernel_stat.h> | 23 | #include <linux/kernel_stat.h> |
24 | #include <linux/mutex.h> | 24 | #include <linux/mutex.h> |
25 | #include <linux/slab.h> | ||
25 | #include <linux/tick.h> | 26 | #include <linux/tick.h> |
26 | #include <linux/types.h> | 27 | #include <linux/types.h> |
27 | #include <linux/workqueue.h> | 28 | #include <linux/workqueue.h> |
28 | 29 | ||
29 | #include "cpufreq_governor.h" | 30 | #include "cpufreq_governor.h" |
30 | 31 | ||
32 | static struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy) | ||
33 | { | ||
34 | if (have_governor_per_policy()) | ||
35 | return &policy->kobj; | ||
36 | else | ||
37 | return cpufreq_global_kobject; | ||
38 | } | ||
39 | |||
40 | static struct attribute_group *get_sysfs_attr(struct dbs_data *dbs_data) | ||
41 | { | ||
42 | if (have_governor_per_policy()) | ||
43 | return dbs_data->cdata->attr_group_gov_pol; | ||
44 | else | ||
45 | return dbs_data->cdata->attr_group_gov_sys; | ||
46 | } | ||
47 | |||
31 | static inline u64 get_cpu_idle_time_jiffy(unsigned int cpu, u64 *wall) | 48 | static inline u64 get_cpu_idle_time_jiffy(unsigned int cpu, u64 *wall) |
32 | { | 49 | { |
33 | u64 idle_time; | 50 | u64 idle_time; |
@@ -50,13 +67,13 @@ static inline u64 get_cpu_idle_time_jiffy(unsigned int cpu, u64 *wall) | |||
50 | return cputime_to_usecs(idle_time); | 67 | return cputime_to_usecs(idle_time); |
51 | } | 68 | } |
52 | 69 | ||
53 | u64 get_cpu_idle_time(unsigned int cpu, u64 *wall) | 70 | u64 get_cpu_idle_time(unsigned int cpu, u64 *wall, int io_busy) |
54 | { | 71 | { |
55 | u64 idle_time = get_cpu_idle_time_us(cpu, NULL); | 72 | u64 idle_time = get_cpu_idle_time_us(cpu, io_busy ? wall : NULL); |
56 | 73 | ||
57 | if (idle_time == -1ULL) | 74 | if (idle_time == -1ULL) |
58 | return get_cpu_idle_time_jiffy(cpu, wall); | 75 | return get_cpu_idle_time_jiffy(cpu, wall); |
59 | else | 76 | else if (!io_busy) |
60 | idle_time += get_cpu_iowait_time_us(cpu, wall); | 77 | idle_time += get_cpu_iowait_time_us(cpu, wall); |
61 | 78 | ||
62 | return idle_time; | 79 | return idle_time; |
@@ -65,7 +82,7 @@ EXPORT_SYMBOL_GPL(get_cpu_idle_time); | |||
65 | 82 | ||
66 | void dbs_check_cpu(struct dbs_data *dbs_data, int cpu) | 83 | void dbs_check_cpu(struct dbs_data *dbs_data, int cpu) |
67 | { | 84 | { |
68 | struct cpu_dbs_common_info *cdbs = dbs_data->get_cpu_cdbs(cpu); | 85 | struct cpu_dbs_common_info *cdbs = dbs_data->cdata->get_cpu_cdbs(cpu); |
69 | struct od_dbs_tuners *od_tuners = dbs_data->tuners; | 86 | struct od_dbs_tuners *od_tuners = dbs_data->tuners; |
70 | struct cs_dbs_tuners *cs_tuners = dbs_data->tuners; | 87 | struct cs_dbs_tuners *cs_tuners = dbs_data->tuners; |
71 | struct cpufreq_policy *policy; | 88 | struct cpufreq_policy *policy; |
@@ -73,7 +90,7 @@ void dbs_check_cpu(struct dbs_data *dbs_data, int cpu) | |||
73 | unsigned int ignore_nice; | 90 | unsigned int ignore_nice; |
74 | unsigned int j; | 91 | unsigned int j; |
75 | 92 | ||
76 | if (dbs_data->governor == GOV_ONDEMAND) | 93 | if (dbs_data->cdata->governor == GOV_ONDEMAND) |
77 | ignore_nice = od_tuners->ignore_nice; | 94 | ignore_nice = od_tuners->ignore_nice; |
78 | else | 95 | else |
79 | ignore_nice = cs_tuners->ignore_nice; | 96 | ignore_nice = cs_tuners->ignore_nice; |
@@ -83,13 +100,22 @@ void dbs_check_cpu(struct dbs_data *dbs_data, int cpu) | |||
83 | /* Get Absolute Load (in terms of freq for ondemand gov) */ | 100 | /* Get Absolute Load (in terms of freq for ondemand gov) */ |
84 | for_each_cpu(j, policy->cpus) { | 101 | for_each_cpu(j, policy->cpus) { |
85 | struct cpu_dbs_common_info *j_cdbs; | 102 | struct cpu_dbs_common_info *j_cdbs; |
86 | u64 cur_wall_time, cur_idle_time, cur_iowait_time; | 103 | u64 cur_wall_time, cur_idle_time; |
87 | unsigned int idle_time, wall_time, iowait_time; | 104 | unsigned int idle_time, wall_time; |
88 | unsigned int load; | 105 | unsigned int load; |
106 | int io_busy = 0; | ||
89 | 107 | ||
90 | j_cdbs = dbs_data->get_cpu_cdbs(j); | 108 | j_cdbs = dbs_data->cdata->get_cpu_cdbs(j); |
91 | 109 | ||
92 | cur_idle_time = get_cpu_idle_time(j, &cur_wall_time); | 110 | /* |
111 | * For the purpose of ondemand, waiting for disk IO is | ||
112 | * an indication that you're performance critical, and | ||
113 | * not that the system is actually idle. So do not add | ||
114 | * the iowait time to the cpu idle time. | ||
115 | */ | ||
116 | if (dbs_data->cdata->governor == GOV_ONDEMAND) | ||
117 | io_busy = od_tuners->io_is_busy; | ||
118 | cur_idle_time = get_cpu_idle_time(j, &cur_wall_time, io_busy); | ||
93 | 119 | ||
94 | wall_time = (unsigned int) | 120 | wall_time = (unsigned int) |
95 | (cur_wall_time - j_cdbs->prev_cpu_wall); | 121 | (cur_wall_time - j_cdbs->prev_cpu_wall); |
@@ -117,35 +143,12 @@ void dbs_check_cpu(struct dbs_data *dbs_data, int cpu) | |||
117 | idle_time += jiffies_to_usecs(cur_nice_jiffies); | 143 | idle_time += jiffies_to_usecs(cur_nice_jiffies); |
118 | } | 144 | } |
119 | 145 | ||
120 | if (dbs_data->governor == GOV_ONDEMAND) { | ||
121 | struct od_cpu_dbs_info_s *od_j_dbs_info = | ||
122 | dbs_data->get_cpu_dbs_info_s(cpu); | ||
123 | |||
124 | cur_iowait_time = get_cpu_iowait_time_us(j, | ||
125 | &cur_wall_time); | ||
126 | if (cur_iowait_time == -1ULL) | ||
127 | cur_iowait_time = 0; | ||
128 | |||
129 | iowait_time = (unsigned int) (cur_iowait_time - | ||
130 | od_j_dbs_info->prev_cpu_iowait); | ||
131 | od_j_dbs_info->prev_cpu_iowait = cur_iowait_time; | ||
132 | |||
133 | /* | ||
134 | * For the purpose of ondemand, waiting for disk IO is | ||
135 | * an indication that you're performance critical, and | ||
136 | * not that the system is actually idle. So subtract the | ||
137 | * iowait time from the cpu idle time. | ||
138 | */ | ||
139 | if (od_tuners->io_is_busy && idle_time >= iowait_time) | ||
140 | idle_time -= iowait_time; | ||
141 | } | ||
142 | |||
143 | if (unlikely(!wall_time || wall_time < idle_time)) | 146 | if (unlikely(!wall_time || wall_time < idle_time)) |
144 | continue; | 147 | continue; |
145 | 148 | ||
146 | load = 100 * (wall_time - idle_time) / wall_time; | 149 | load = 100 * (wall_time - idle_time) / wall_time; |
147 | 150 | ||
148 | if (dbs_data->governor == GOV_ONDEMAND) { | 151 | if (dbs_data->cdata->governor == GOV_ONDEMAND) { |
149 | int freq_avg = __cpufreq_driver_getavg(policy, j); | 152 | int freq_avg = __cpufreq_driver_getavg(policy, j); |
150 | if (freq_avg <= 0) | 153 | if (freq_avg <= 0) |
151 | freq_avg = policy->cur; | 154 | freq_avg = policy->cur; |
@@ -157,24 +160,42 @@ void dbs_check_cpu(struct dbs_data *dbs_data, int cpu) | |||
157 | max_load = load; | 160 | max_load = load; |
158 | } | 161 | } |
159 | 162 | ||
160 | dbs_data->gov_check_cpu(cpu, max_load); | 163 | dbs_data->cdata->gov_check_cpu(cpu, max_load); |
161 | } | 164 | } |
162 | EXPORT_SYMBOL_GPL(dbs_check_cpu); | 165 | EXPORT_SYMBOL_GPL(dbs_check_cpu); |
163 | 166 | ||
164 | static inline void dbs_timer_init(struct dbs_data *dbs_data, int cpu, | 167 | static inline void __gov_queue_work(int cpu, struct dbs_data *dbs_data, |
165 | unsigned int sampling_rate) | 168 | unsigned int delay) |
166 | { | 169 | { |
167 | int delay = delay_for_sampling_rate(sampling_rate); | 170 | struct cpu_dbs_common_info *cdbs = dbs_data->cdata->get_cpu_cdbs(cpu); |
168 | struct cpu_dbs_common_info *cdbs = dbs_data->get_cpu_cdbs(cpu); | ||
169 | 171 | ||
170 | schedule_delayed_work_on(cpu, &cdbs->work, delay); | 172 | mod_delayed_work_on(cpu, system_wq, &cdbs->work, delay); |
171 | } | 173 | } |
172 | 174 | ||
173 | static inline void dbs_timer_exit(struct dbs_data *dbs_data, int cpu) | 175 | void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy, |
176 | unsigned int delay, bool all_cpus) | ||
174 | { | 177 | { |
175 | struct cpu_dbs_common_info *cdbs = dbs_data->get_cpu_cdbs(cpu); | 178 | int i; |
176 | 179 | ||
177 | cancel_delayed_work_sync(&cdbs->work); | 180 | if (!all_cpus) { |
181 | __gov_queue_work(smp_processor_id(), dbs_data, delay); | ||
182 | } else { | ||
183 | for_each_cpu(i, policy->cpus) | ||
184 | __gov_queue_work(i, dbs_data, delay); | ||
185 | } | ||
186 | } | ||
187 | EXPORT_SYMBOL_GPL(gov_queue_work); | ||
188 | |||
189 | static inline void gov_cancel_work(struct dbs_data *dbs_data, | ||
190 | struct cpufreq_policy *policy) | ||
191 | { | ||
192 | struct cpu_dbs_common_info *cdbs; | ||
193 | int i; | ||
194 | |||
195 | for_each_cpu(i, policy->cpus) { | ||
196 | cdbs = dbs_data->cdata->get_cpu_cdbs(i); | ||
197 | cancel_delayed_work_sync(&cdbs->work); | ||
198 | } | ||
178 | } | 199 | } |
179 | 200 | ||
180 | /* Will return if we need to evaluate cpu load again or not */ | 201 | /* Will return if we need to evaluate cpu load again or not */ |
@@ -196,31 +217,130 @@ bool need_load_eval(struct cpu_dbs_common_info *cdbs, | |||
196 | } | 217 | } |
197 | EXPORT_SYMBOL_GPL(need_load_eval); | 218 | EXPORT_SYMBOL_GPL(need_load_eval); |
198 | 219 | ||
199 | int cpufreq_governor_dbs(struct dbs_data *dbs_data, | 220 | static void set_sampling_rate(struct dbs_data *dbs_data, |
200 | struct cpufreq_policy *policy, unsigned int event) | 221 | unsigned int sampling_rate) |
201 | { | 222 | { |
223 | if (dbs_data->cdata->governor == GOV_CONSERVATIVE) { | ||
224 | struct cs_dbs_tuners *cs_tuners = dbs_data->tuners; | ||
225 | cs_tuners->sampling_rate = sampling_rate; | ||
226 | } else { | ||
227 | struct od_dbs_tuners *od_tuners = dbs_data->tuners; | ||
228 | od_tuners->sampling_rate = sampling_rate; | ||
229 | } | ||
230 | } | ||
231 | |||
232 | int cpufreq_governor_dbs(struct cpufreq_policy *policy, | ||
233 | struct common_dbs_data *cdata, unsigned int event) | ||
234 | { | ||
235 | struct dbs_data *dbs_data; | ||
202 | struct od_cpu_dbs_info_s *od_dbs_info = NULL; | 236 | struct od_cpu_dbs_info_s *od_dbs_info = NULL; |
203 | struct cs_cpu_dbs_info_s *cs_dbs_info = NULL; | 237 | struct cs_cpu_dbs_info_s *cs_dbs_info = NULL; |
204 | struct cs_ops *cs_ops = NULL; | ||
205 | struct od_ops *od_ops = NULL; | 238 | struct od_ops *od_ops = NULL; |
206 | struct od_dbs_tuners *od_tuners = dbs_data->tuners; | 239 | struct od_dbs_tuners *od_tuners = NULL; |
207 | struct cs_dbs_tuners *cs_tuners = dbs_data->tuners; | 240 | struct cs_dbs_tuners *cs_tuners = NULL; |
208 | struct cpu_dbs_common_info *cpu_cdbs; | 241 | struct cpu_dbs_common_info *cpu_cdbs; |
209 | unsigned int *sampling_rate, latency, ignore_nice, j, cpu = policy->cpu; | 242 | unsigned int sampling_rate, latency, ignore_nice, j, cpu = policy->cpu; |
243 | int io_busy = 0; | ||
210 | int rc; | 244 | int rc; |
211 | 245 | ||
212 | cpu_cdbs = dbs_data->get_cpu_cdbs(cpu); | 246 | if (have_governor_per_policy()) |
247 | dbs_data = policy->governor_data; | ||
248 | else | ||
249 | dbs_data = cdata->gdbs_data; | ||
250 | |||
251 | WARN_ON(!dbs_data && (event != CPUFREQ_GOV_POLICY_INIT)); | ||
252 | |||
253 | switch (event) { | ||
254 | case CPUFREQ_GOV_POLICY_INIT: | ||
255 | if (have_governor_per_policy()) { | ||
256 | WARN_ON(dbs_data); | ||
257 | } else if (dbs_data) { | ||
258 | policy->governor_data = dbs_data; | ||
259 | return 0; | ||
260 | } | ||
261 | |||
262 | dbs_data = kzalloc(sizeof(*dbs_data), GFP_KERNEL); | ||
263 | if (!dbs_data) { | ||
264 | pr_err("%s: POLICY_INIT: kzalloc failed\n", __func__); | ||
265 | return -ENOMEM; | ||
266 | } | ||
267 | |||
268 | dbs_data->cdata = cdata; | ||
269 | rc = cdata->init(dbs_data); | ||
270 | if (rc) { | ||
271 | pr_err("%s: POLICY_INIT: init() failed\n", __func__); | ||
272 | kfree(dbs_data); | ||
273 | return rc; | ||
274 | } | ||
275 | |||
276 | rc = sysfs_create_group(get_governor_parent_kobj(policy), | ||
277 | get_sysfs_attr(dbs_data)); | ||
278 | if (rc) { | ||
279 | cdata->exit(dbs_data); | ||
280 | kfree(dbs_data); | ||
281 | return rc; | ||
282 | } | ||
283 | |||
284 | policy->governor_data = dbs_data; | ||
213 | 285 | ||
214 | if (dbs_data->governor == GOV_CONSERVATIVE) { | 286 | /* policy latency is in nS. Convert it to uS first */ |
215 | cs_dbs_info = dbs_data->get_cpu_dbs_info_s(cpu); | 287 | latency = policy->cpuinfo.transition_latency / 1000; |
216 | sampling_rate = &cs_tuners->sampling_rate; | 288 | if (latency == 0) |
289 | latency = 1; | ||
290 | |||
291 | /* Bring kernel and HW constraints together */ | ||
292 | dbs_data->min_sampling_rate = max(dbs_data->min_sampling_rate, | ||
293 | MIN_LATENCY_MULTIPLIER * latency); | ||
294 | set_sampling_rate(dbs_data, max(dbs_data->min_sampling_rate, | ||
295 | latency * LATENCY_MULTIPLIER)); | ||
296 | |||
297 | if (dbs_data->cdata->governor == GOV_CONSERVATIVE) { | ||
298 | struct cs_ops *cs_ops = dbs_data->cdata->gov_ops; | ||
299 | |||
300 | cpufreq_register_notifier(cs_ops->notifier_block, | ||
301 | CPUFREQ_TRANSITION_NOTIFIER); | ||
302 | } | ||
303 | |||
304 | if (!have_governor_per_policy()) | ||
305 | cdata->gdbs_data = dbs_data; | ||
306 | |||
307 | return 0; | ||
308 | case CPUFREQ_GOV_POLICY_EXIT: | ||
309 | if ((policy->governor->initialized == 1) || | ||
310 | have_governor_per_policy()) { | ||
311 | sysfs_remove_group(get_governor_parent_kobj(policy), | ||
312 | get_sysfs_attr(dbs_data)); | ||
313 | |||
314 | if (dbs_data->cdata->governor == GOV_CONSERVATIVE) { | ||
315 | struct cs_ops *cs_ops = dbs_data->cdata->gov_ops; | ||
316 | |||
317 | cpufreq_unregister_notifier(cs_ops->notifier_block, | ||
318 | CPUFREQ_TRANSITION_NOTIFIER); | ||
319 | } | ||
320 | |||
321 | cdata->exit(dbs_data); | ||
322 | kfree(dbs_data); | ||
323 | cdata->gdbs_data = NULL; | ||
324 | } | ||
325 | |||
326 | policy->governor_data = NULL; | ||
327 | return 0; | ||
328 | } | ||
329 | |||
330 | cpu_cdbs = dbs_data->cdata->get_cpu_cdbs(cpu); | ||
331 | |||
332 | if (dbs_data->cdata->governor == GOV_CONSERVATIVE) { | ||
333 | cs_tuners = dbs_data->tuners; | ||
334 | cs_dbs_info = dbs_data->cdata->get_cpu_dbs_info_s(cpu); | ||
335 | sampling_rate = cs_tuners->sampling_rate; | ||
217 | ignore_nice = cs_tuners->ignore_nice; | 336 | ignore_nice = cs_tuners->ignore_nice; |
218 | cs_ops = dbs_data->gov_ops; | ||
219 | } else { | 337 | } else { |
220 | od_dbs_info = dbs_data->get_cpu_dbs_info_s(cpu); | 338 | od_tuners = dbs_data->tuners; |
221 | sampling_rate = &od_tuners->sampling_rate; | 339 | od_dbs_info = dbs_data->cdata->get_cpu_dbs_info_s(cpu); |
340 | sampling_rate = od_tuners->sampling_rate; | ||
222 | ignore_nice = od_tuners->ignore_nice; | 341 | ignore_nice = od_tuners->ignore_nice; |
223 | od_ops = dbs_data->gov_ops; | 342 | od_ops = dbs_data->cdata->gov_ops; |
343 | io_busy = od_tuners->io_is_busy; | ||
224 | } | 344 | } |
225 | 345 | ||
226 | switch (event) { | 346 | switch (event) { |
@@ -232,96 +352,53 @@ int cpufreq_governor_dbs(struct dbs_data *dbs_data, | |||
232 | 352 | ||
233 | for_each_cpu(j, policy->cpus) { | 353 | for_each_cpu(j, policy->cpus) { |
234 | struct cpu_dbs_common_info *j_cdbs = | 354 | struct cpu_dbs_common_info *j_cdbs = |
235 | dbs_data->get_cpu_cdbs(j); | 355 | dbs_data->cdata->get_cpu_cdbs(j); |
236 | 356 | ||
237 | j_cdbs->cpu = j; | 357 | j_cdbs->cpu = j; |
238 | j_cdbs->cur_policy = policy; | 358 | j_cdbs->cur_policy = policy; |
239 | j_cdbs->prev_cpu_idle = get_cpu_idle_time(j, | 359 | j_cdbs->prev_cpu_idle = get_cpu_idle_time(j, |
240 | &j_cdbs->prev_cpu_wall); | 360 | &j_cdbs->prev_cpu_wall, io_busy); |
241 | if (ignore_nice) | 361 | if (ignore_nice) |
242 | j_cdbs->prev_cpu_nice = | 362 | j_cdbs->prev_cpu_nice = |
243 | kcpustat_cpu(j).cpustat[CPUTIME_NICE]; | 363 | kcpustat_cpu(j).cpustat[CPUTIME_NICE]; |
244 | 364 | ||
245 | mutex_init(&j_cdbs->timer_mutex); | 365 | mutex_init(&j_cdbs->timer_mutex); |
246 | INIT_DEFERRABLE_WORK(&j_cdbs->work, | 366 | INIT_DEFERRABLE_WORK(&j_cdbs->work, |
247 | dbs_data->gov_dbs_timer); | 367 | dbs_data->cdata->gov_dbs_timer); |
248 | } | ||
249 | |||
250 | if (!policy->governor->initialized) { | ||
251 | rc = sysfs_create_group(cpufreq_global_kobject, | ||
252 | dbs_data->attr_group); | ||
253 | if (rc) { | ||
254 | mutex_unlock(&dbs_data->mutex); | ||
255 | return rc; | ||
256 | } | ||
257 | } | 368 | } |
258 | 369 | ||
259 | /* | 370 | /* |
260 | * conservative does not implement micro like ondemand | 371 | * conservative does not implement micro like ondemand |
261 | * governor, thus we are bound to jiffes/HZ | 372 | * governor, thus we are bound to jiffes/HZ |
262 | */ | 373 | */ |
263 | if (dbs_data->governor == GOV_CONSERVATIVE) { | 374 | if (dbs_data->cdata->governor == GOV_CONSERVATIVE) { |
264 | cs_dbs_info->down_skip = 0; | 375 | cs_dbs_info->down_skip = 0; |
265 | cs_dbs_info->enable = 1; | 376 | cs_dbs_info->enable = 1; |
266 | cs_dbs_info->requested_freq = policy->cur; | 377 | cs_dbs_info->requested_freq = policy->cur; |
267 | |||
268 | if (!policy->governor->initialized) { | ||
269 | cpufreq_register_notifier(cs_ops->notifier_block, | ||
270 | CPUFREQ_TRANSITION_NOTIFIER); | ||
271 | |||
272 | dbs_data->min_sampling_rate = | ||
273 | MIN_SAMPLING_RATE_RATIO * | ||
274 | jiffies_to_usecs(10); | ||
275 | } | ||
276 | } else { | 378 | } else { |
277 | od_dbs_info->rate_mult = 1; | 379 | od_dbs_info->rate_mult = 1; |
278 | od_dbs_info->sample_type = OD_NORMAL_SAMPLE; | 380 | od_dbs_info->sample_type = OD_NORMAL_SAMPLE; |
279 | od_ops->powersave_bias_init_cpu(cpu); | 381 | od_ops->powersave_bias_init_cpu(cpu); |
280 | |||
281 | if (!policy->governor->initialized) | ||
282 | od_tuners->io_is_busy = od_ops->io_busy(); | ||
283 | } | 382 | } |
284 | 383 | ||
285 | if (policy->governor->initialized) | ||
286 | goto unlock; | ||
287 | |||
288 | /* policy latency is in nS. Convert it to uS first */ | ||
289 | latency = policy->cpuinfo.transition_latency / 1000; | ||
290 | if (latency == 0) | ||
291 | latency = 1; | ||
292 | |||
293 | /* Bring kernel and HW constraints together */ | ||
294 | dbs_data->min_sampling_rate = max(dbs_data->min_sampling_rate, | ||
295 | MIN_LATENCY_MULTIPLIER * latency); | ||
296 | *sampling_rate = max(dbs_data->min_sampling_rate, latency * | ||
297 | LATENCY_MULTIPLIER); | ||
298 | unlock: | ||
299 | mutex_unlock(&dbs_data->mutex); | 384 | mutex_unlock(&dbs_data->mutex); |
300 | 385 | ||
301 | /* Initiate timer time stamp */ | 386 | /* Initiate timer time stamp */ |
302 | cpu_cdbs->time_stamp = ktime_get(); | 387 | cpu_cdbs->time_stamp = ktime_get(); |
303 | 388 | ||
304 | for_each_cpu(j, policy->cpus) | 389 | gov_queue_work(dbs_data, policy, |
305 | dbs_timer_init(dbs_data, j, *sampling_rate); | 390 | delay_for_sampling_rate(sampling_rate), true); |
306 | break; | 391 | break; |
307 | 392 | ||
308 | case CPUFREQ_GOV_STOP: | 393 | case CPUFREQ_GOV_STOP: |
309 | if (dbs_data->governor == GOV_CONSERVATIVE) | 394 | if (dbs_data->cdata->governor == GOV_CONSERVATIVE) |
310 | cs_dbs_info->enable = 0; | 395 | cs_dbs_info->enable = 0; |
311 | 396 | ||
312 | for_each_cpu(j, policy->cpus) | 397 | gov_cancel_work(dbs_data, policy); |
313 | dbs_timer_exit(dbs_data, j); | ||
314 | 398 | ||
315 | mutex_lock(&dbs_data->mutex); | 399 | mutex_lock(&dbs_data->mutex); |
316 | mutex_destroy(&cpu_cdbs->timer_mutex); | 400 | mutex_destroy(&cpu_cdbs->timer_mutex); |
317 | 401 | ||
318 | if (policy->governor->initialized == 1) { | ||
319 | sysfs_remove_group(cpufreq_global_kobject, | ||
320 | dbs_data->attr_group); | ||
321 | if (dbs_data->governor == GOV_CONSERVATIVE) | ||
322 | cpufreq_unregister_notifier(cs_ops->notifier_block, | ||
323 | CPUFREQ_TRANSITION_NOTIFIER); | ||
324 | } | ||
325 | mutex_unlock(&dbs_data->mutex); | 402 | mutex_unlock(&dbs_data->mutex); |
326 | 403 | ||
327 | break; | 404 | break; |
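One behavioral detail worth calling out from the dbs_check_cpu() rework: the io_busy flag (ondemand's io_is_busy) now decides inside get_cpu_idle_time() whether iowait counts as idle. A rough plain-C illustration of the resulting load math follows; sample_load() and its parameters are stand-ins, not helpers from this file.

static unsigned int sample_load(unsigned long long wall_delta,
				unsigned long long idle_delta,
				unsigned long long iowait_delta,
				int io_busy)
{
	if (!io_busy)
		idle_delta += iowait_delta;	/* iowait treated as idle time */

	if (!wall_delta || wall_delta < idle_delta)
		return 0;			/* skip a bogus sample */

	return (unsigned int)(100 * (wall_delta - idle_delta) / wall_delta);
}

With io_busy set, time a CPU spends waiting on disk I/O counts as load, so ondemand ramps up for I/O-bound work instead of treating it as idle, which is exactly the point made in the comment added to dbs_check_cpu().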
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h index cc4bd2f6838a..8ac33538d0bd 100644 --- a/drivers/cpufreq/cpufreq_governor.h +++ b/drivers/cpufreq/cpufreq_governor.h | |||
@@ -34,20 +34,81 @@ | |||
34 | */ | 34 | */ |
35 | #define MIN_SAMPLING_RATE_RATIO (2) | 35 | #define MIN_SAMPLING_RATE_RATIO (2) |
36 | #define LATENCY_MULTIPLIER (1000) | 36 | #define LATENCY_MULTIPLIER (1000) |
37 | #define MIN_LATENCY_MULTIPLIER (100) | 37 | #define MIN_LATENCY_MULTIPLIER (20) |
38 | #define TRANSITION_LATENCY_LIMIT (10 * 1000 * 1000) | 38 | #define TRANSITION_LATENCY_LIMIT (10 * 1000 * 1000) |
39 | 39 | ||
40 | /* Ondemand Sampling types */ | 40 | /* Ondemand Sampling types */ |
41 | enum {OD_NORMAL_SAMPLE, OD_SUB_SAMPLE}; | 41 | enum {OD_NORMAL_SAMPLE, OD_SUB_SAMPLE}; |
42 | 42 | ||
43 | /* Macro creating sysfs show routines */ | 43 | /* |
44 | #define show_one(_gov, file_name, object) \ | 44 | * Macro for creating governors sysfs routines |
45 | static ssize_t show_##file_name \ | 45 | * |
46 | * - gov_sys: One governor instance per whole system | ||
47 | * - gov_pol: One governor instance per policy | ||
48 | */ | ||
49 | |||
50 | /* Create attributes */ | ||
51 | #define gov_sys_attr_ro(_name) \ | ||
52 | static struct global_attr _name##_gov_sys = \ | ||
53 | __ATTR(_name, 0444, show_##_name##_gov_sys, NULL) | ||
54 | |||
55 | #define gov_sys_attr_rw(_name) \ | ||
56 | static struct global_attr _name##_gov_sys = \ | ||
57 | __ATTR(_name, 0644, show_##_name##_gov_sys, store_##_name##_gov_sys) | ||
58 | |||
59 | #define gov_pol_attr_ro(_name) \ | ||
60 | static struct freq_attr _name##_gov_pol = \ | ||
61 | __ATTR(_name, 0444, show_##_name##_gov_pol, NULL) | ||
62 | |||
63 | #define gov_pol_attr_rw(_name) \ | ||
64 | static struct freq_attr _name##_gov_pol = \ | ||
65 | __ATTR(_name, 0644, show_##_name##_gov_pol, store_##_name##_gov_pol) | ||
66 | |||
67 | #define gov_sys_pol_attr_rw(_name) \ | ||
68 | gov_sys_attr_rw(_name); \ | ||
69 | gov_pol_attr_rw(_name) | ||
70 | |||
71 | #define gov_sys_pol_attr_ro(_name) \ | ||
72 | gov_sys_attr_ro(_name); \ | ||
73 | gov_pol_attr_ro(_name) | ||
74 | |||
75 | /* Create show/store routines */ | ||
76 | #define show_one(_gov, file_name) \ | ||
77 | static ssize_t show_##file_name##_gov_sys \ | ||
46 | (struct kobject *kobj, struct attribute *attr, char *buf) \ | 78 | (struct kobject *kobj, struct attribute *attr, char *buf) \ |
47 | { \ | 79 | { \ |
48 | return sprintf(buf, "%u\n", _gov##_tuners.object); \ | 80 | struct _gov##_dbs_tuners *tuners = _gov##_dbs_cdata.gdbs_data->tuners; \ |
81 | return sprintf(buf, "%u\n", tuners->file_name); \ | ||
82 | } \ | ||
83 | \ | ||
84 | static ssize_t show_##file_name##_gov_pol \ | ||
85 | (struct cpufreq_policy *policy, char *buf) \ | ||
86 | { \ | ||
87 | struct dbs_data *dbs_data = policy->governor_data; \ | ||
88 | struct _gov##_dbs_tuners *tuners = dbs_data->tuners; \ | ||
89 | return sprintf(buf, "%u\n", tuners->file_name); \ | ||
90 | } | ||
91 | |||
92 | #define store_one(_gov, file_name) \ | ||
93 | static ssize_t store_##file_name##_gov_sys \ | ||
94 | (struct kobject *kobj, struct attribute *attr, const char *buf, size_t count) \ | ||
95 | { \ | ||
96 | struct dbs_data *dbs_data = _gov##_dbs_cdata.gdbs_data; \ | ||
97 | return store_##file_name(dbs_data, buf, count); \ | ||
98 | } \ | ||
99 | \ | ||
100 | static ssize_t store_##file_name##_gov_pol \ | ||
101 | (struct cpufreq_policy *policy, const char *buf, size_t count) \ | ||
102 | { \ | ||
103 | struct dbs_data *dbs_data = policy->governor_data; \ | ||
104 | return store_##file_name(dbs_data, buf, count); \ | ||
49 | } | 105 | } |
50 | 106 | ||
107 | #define show_store_one(_gov, file_name) \ | ||
108 | show_one(_gov, file_name); \ | ||
109 | store_one(_gov, file_name) | ||
110 | |||
111 | /* create helper routines */ | ||
51 | #define define_get_cpu_dbs_routines(_dbs_info) \ | 112 | #define define_get_cpu_dbs_routines(_dbs_info) \ |
52 | static struct cpu_dbs_common_info *get_cpu_cdbs(int cpu) \ | 113 | static struct cpu_dbs_common_info *get_cpu_cdbs(int cpu) \ |
53 | { \ | 114 | { \ |
@@ -87,7 +148,6 @@ struct cpu_dbs_common_info { | |||
87 | 148 | ||
88 | struct od_cpu_dbs_info_s { | 149 | struct od_cpu_dbs_info_s { |
89 | struct cpu_dbs_common_info cdbs; | 150 | struct cpu_dbs_common_info cdbs; |
90 | u64 prev_cpu_iowait; | ||
91 | struct cpufreq_frequency_table *freq_table; | 151 | struct cpufreq_frequency_table *freq_table; |
92 | unsigned int freq_lo; | 152 | unsigned int freq_lo; |
93 | unsigned int freq_lo_jiffies; | 153 | unsigned int freq_lo_jiffies; |
@@ -103,7 +163,7 @@ struct cs_cpu_dbs_info_s { | |||
103 | unsigned int enable:1; | 163 | unsigned int enable:1; |
104 | }; | 164 | }; |
105 | 165 | ||
106 | /* Governers sysfs tunables */ | 166 | /* Per policy Governers sysfs tunables */ |
107 | struct od_dbs_tuners { | 167 | struct od_dbs_tuners { |
108 | unsigned int ignore_nice; | 168 | unsigned int ignore_nice; |
109 | unsigned int sampling_rate; | 169 | unsigned int sampling_rate; |
@@ -123,31 +183,42 @@ struct cs_dbs_tuners { | |||
123 | unsigned int freq_step; | 183 | unsigned int freq_step; |
124 | }; | 184 | }; |
125 | 185 | ||
126 | /* Per Governer data */ | 186 | /* Common Governer data across policies */ |
127 | struct dbs_data { | 187 | struct dbs_data; |
188 | struct common_dbs_data { | ||
128 | /* Common across governors */ | 189 | /* Common across governors */ |
129 | #define GOV_ONDEMAND 0 | 190 | #define GOV_ONDEMAND 0 |
130 | #define GOV_CONSERVATIVE 1 | 191 | #define GOV_CONSERVATIVE 1 |
131 | int governor; | 192 | int governor; |
132 | unsigned int min_sampling_rate; | 193 | struct attribute_group *attr_group_gov_sys; /* one governor - system */ |
133 | struct attribute_group *attr_group; | 194 | struct attribute_group *attr_group_gov_pol; /* one governor - policy */ |
134 | void *tuners; | ||
135 | 195 | ||
136 | /* dbs_mutex protects dbs_enable in governor start/stop */ | 196 | /* Common data for platforms that don't set have_governor_per_policy */ |
137 | struct mutex mutex; | 197 | struct dbs_data *gdbs_data; |
138 | 198 | ||
139 | struct cpu_dbs_common_info *(*get_cpu_cdbs)(int cpu); | 199 | struct cpu_dbs_common_info *(*get_cpu_cdbs)(int cpu); |
140 | void *(*get_cpu_dbs_info_s)(int cpu); | 200 | void *(*get_cpu_dbs_info_s)(int cpu); |
141 | void (*gov_dbs_timer)(struct work_struct *work); | 201 | void (*gov_dbs_timer)(struct work_struct *work); |
142 | void (*gov_check_cpu)(int cpu, unsigned int load); | 202 | void (*gov_check_cpu)(int cpu, unsigned int load); |
203 | int (*init)(struct dbs_data *dbs_data); | ||
204 | void (*exit)(struct dbs_data *dbs_data); | ||
143 | 205 | ||
144 | /* Governor specific ops, see below */ | 206 | /* Governor specific ops, see below */ |
145 | void *gov_ops; | 207 | void *gov_ops; |
146 | }; | 208 | }; |
147 | 209 | ||
210 | /* Governer Per policy data */ | ||
211 | struct dbs_data { | ||
212 | struct common_dbs_data *cdata; | ||
213 | unsigned int min_sampling_rate; | ||
214 | void *tuners; | ||
215 | |||
216 | /* dbs_mutex protects dbs_enable in governor start/stop */ | ||
217 | struct mutex mutex; | ||
218 | }; | ||
219 | |||
148 | /* Governor specific ops, will be passed to dbs_data->gov_ops */ | 220 | /* Governor specific ops, will be passed to dbs_data->gov_ops */ |
149 | struct od_ops { | 221 | struct od_ops { |
150 | int (*io_busy)(void); | ||
151 | void (*powersave_bias_init_cpu)(int cpu); | 222 | void (*powersave_bias_init_cpu)(int cpu); |
152 | unsigned int (*powersave_bias_target)(struct cpufreq_policy *policy, | 223 | unsigned int (*powersave_bias_target)(struct cpufreq_policy *policy, |
153 | unsigned int freq_next, unsigned int relation); | 224 | unsigned int freq_next, unsigned int relation); |
@@ -169,10 +240,31 @@ static inline int delay_for_sampling_rate(unsigned int sampling_rate) | |||
169 | return delay; | 240 | return delay; |
170 | } | 241 | } |
171 | 242 | ||
172 | u64 get_cpu_idle_time(unsigned int cpu, u64 *wall); | 243 | #define declare_show_sampling_rate_min(_gov) \ |
244 | static ssize_t show_sampling_rate_min_gov_sys \ | ||
245 | (struct kobject *kobj, struct attribute *attr, char *buf) \ | ||
246 | { \ | ||
247 | struct dbs_data *dbs_data = _gov##_dbs_cdata.gdbs_data; \ | ||
248 | return sprintf(buf, "%u\n", dbs_data->min_sampling_rate); \ | ||
249 | } \ | ||
250 | \ | ||
251 | static ssize_t show_sampling_rate_min_gov_pol \ | ||
252 | (struct cpufreq_policy *policy, char *buf) \ | ||
253 | { \ | ||
254 | struct dbs_data *dbs_data = policy->governor_data; \ | ||
255 | return sprintf(buf, "%u\n", dbs_data->min_sampling_rate); \ | ||
256 | } | ||
257 | |||
258 | u64 get_cpu_idle_time(unsigned int cpu, u64 *wall, int io_busy); | ||
173 | void dbs_check_cpu(struct dbs_data *dbs_data, int cpu); | 259 | void dbs_check_cpu(struct dbs_data *dbs_data, int cpu); |
174 | bool need_load_eval(struct cpu_dbs_common_info *cdbs, | 260 | bool need_load_eval(struct cpu_dbs_common_info *cdbs, |
175 | unsigned int sampling_rate); | 261 | unsigned int sampling_rate); |
176 | int cpufreq_governor_dbs(struct dbs_data *dbs_data, | 262 | int cpufreq_governor_dbs(struct cpufreq_policy *policy, |
177 | struct cpufreq_policy *policy, unsigned int event); | 263 | struct common_dbs_data *cdata, unsigned int event); |
264 | void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy, | ||
265 | unsigned int delay, bool all_cpus); | ||
266 | void od_register_powersave_bias_handler(unsigned int (*f) | ||
267 | (struct cpufreq_policy *, unsigned int, unsigned int), | ||
268 | unsigned int powersave_bias); | ||
269 | void od_unregister_powersave_bias_handler(void); | ||
178 | #endif /* _CPUFREQ_GOVERNOR_H */ | 270 | #endif /* _CPUFREQ_GOVERNOR_H */ |
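For reference, here is approximately what show_store_one(cs, freq_step) plus gov_sys_pol_attr_rw(freq_step) expand to in cpufreq_conservative.c; this is a hand expansion of the macros above, so variable layout and whitespace differ from real preprocessor output.

static ssize_t show_freq_step_gov_sys(struct kobject *kobj,
				      struct attribute *attr, char *buf)
{
	struct cs_dbs_tuners *tuners = cs_dbs_cdata.gdbs_data->tuners;

	return sprintf(buf, "%u\n", tuners->freq_step);
}

static ssize_t show_freq_step_gov_pol(struct cpufreq_policy *policy, char *buf)
{
	struct dbs_data *dbs_data = policy->governor_data;
	struct cs_dbs_tuners *tuners = dbs_data->tuners;

	return sprintf(buf, "%u\n", tuners->freq_step);
}

static ssize_t store_freq_step_gov_sys(struct kobject *kobj,
				       struct attribute *attr,
				       const char *buf, size_t count)
{
	struct dbs_data *dbs_data = cs_dbs_cdata.gdbs_data;

	return store_freq_step(dbs_data, buf, count);
}

static ssize_t store_freq_step_gov_pol(struct cpufreq_policy *policy,
				       const char *buf, size_t count)
{
	struct dbs_data *dbs_data = policy->governor_data;

	return store_freq_step(dbs_data, buf, count);
}

static struct global_attr freq_step_gov_sys =
	__ATTR(freq_step, 0644, show_freq_step_gov_sys, store_freq_step_gov_sys);
static struct freq_attr freq_step_gov_pol =
	__ATTR(freq_step, 0644, show_freq_step_gov_pol, store_freq_step_gov_pol);

The _gov_sys variants hang off cpufreq_global_kobject for the classic one-instance-per-system case, while the _gov_pol variants are freq_attrs attached to each policy's kobject when have_governor_per_policy() is set.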
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c index f3eb26cd848f..b0ffef96bf77 100644 --- a/drivers/cpufreq/cpufreq_ondemand.c +++ b/drivers/cpufreq/cpufreq_ondemand.c | |||
@@ -20,9 +20,11 @@ | |||
20 | #include <linux/module.h> | 20 | #include <linux/module.h> |
21 | #include <linux/mutex.h> | 21 | #include <linux/mutex.h> |
22 | #include <linux/percpu-defs.h> | 22 | #include <linux/percpu-defs.h> |
23 | #include <linux/slab.h> | ||
23 | #include <linux/sysfs.h> | 24 | #include <linux/sysfs.h> |
24 | #include <linux/tick.h> | 25 | #include <linux/tick.h> |
25 | #include <linux/types.h> | 26 | #include <linux/types.h> |
27 | #include <linux/cpu.h> | ||
26 | 28 | ||
27 | #include "cpufreq_governor.h" | 29 | #include "cpufreq_governor.h" |
28 | 30 | ||
@@ -37,22 +39,14 @@ | |||
37 | #define MIN_FREQUENCY_UP_THRESHOLD (11) | 39 | #define MIN_FREQUENCY_UP_THRESHOLD (11) |
38 | #define MAX_FREQUENCY_UP_THRESHOLD (100) | 40 | #define MAX_FREQUENCY_UP_THRESHOLD (100) |
39 | 41 | ||
40 | static struct dbs_data od_dbs_data; | ||
41 | static DEFINE_PER_CPU(struct od_cpu_dbs_info_s, od_cpu_dbs_info); | 42 | static DEFINE_PER_CPU(struct od_cpu_dbs_info_s, od_cpu_dbs_info); |
42 | 43 | ||
44 | static struct od_ops od_ops; | ||
45 | |||
43 | #ifndef CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND | 46 | #ifndef CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND |
44 | static struct cpufreq_governor cpufreq_gov_ondemand; | 47 | static struct cpufreq_governor cpufreq_gov_ondemand; |
45 | #endif | 48 | #endif |
46 | 49 | ||
47 | static struct od_dbs_tuners od_tuners = { | ||
48 | .up_threshold = DEF_FREQUENCY_UP_THRESHOLD, | ||
49 | .sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR, | ||
50 | .adj_up_threshold = DEF_FREQUENCY_UP_THRESHOLD - | ||
51 | DEF_FREQUENCY_DOWN_DIFFERENTIAL, | ||
52 | .ignore_nice = 0, | ||
53 | .powersave_bias = 0, | ||
54 | }; | ||
55 | |||
56 | static void ondemand_powersave_bias_init_cpu(int cpu) | 50 | static void ondemand_powersave_bias_init_cpu(int cpu) |
57 | { | 51 | { |
58 | struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, cpu); | 52 | struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, cpu); |
@@ -89,7 +83,7 @@ static int should_io_be_busy(void) | |||
89 | * Returns the freq_hi to be used right now and will set freq_hi_jiffies, | 83 | * Returns the freq_hi to be used right now and will set freq_hi_jiffies, |
90 | * freq_lo, and freq_lo_jiffies in percpu area for averaging freqs. | 84 | * freq_lo, and freq_lo_jiffies in percpu area for averaging freqs. |
91 | */ | 85 | */ |
92 | static unsigned int powersave_bias_target(struct cpufreq_policy *policy, | 86 | static unsigned int generic_powersave_bias_target(struct cpufreq_policy *policy, |
93 | unsigned int freq_next, unsigned int relation) | 87 | unsigned int freq_next, unsigned int relation) |
94 | { | 88 | { |
95 | unsigned int freq_req, freq_reduc, freq_avg; | 89 | unsigned int freq_req, freq_reduc, freq_avg; |
@@ -98,6 +92,8 @@ static unsigned int powersave_bias_target(struct cpufreq_policy *policy, | |||
98 | unsigned int jiffies_total, jiffies_hi, jiffies_lo; | 92 | unsigned int jiffies_total, jiffies_hi, jiffies_lo; |
99 | struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, | 93 | struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, |
100 | policy->cpu); | 94 | policy->cpu); |
95 | struct dbs_data *dbs_data = policy->governor_data; | ||
96 | struct od_dbs_tuners *od_tuners = dbs_data->tuners; | ||
101 | 97 | ||
102 | if (!dbs_info->freq_table) { | 98 | if (!dbs_info->freq_table) { |
103 | dbs_info->freq_lo = 0; | 99 | dbs_info->freq_lo = 0; |
@@ -108,7 +104,7 @@ static unsigned int powersave_bias_target(struct cpufreq_policy *policy, | |||
108 | cpufreq_frequency_table_target(policy, dbs_info->freq_table, freq_next, | 104 | cpufreq_frequency_table_target(policy, dbs_info->freq_table, freq_next, |
109 | relation, &index); | 105 | relation, &index); |
110 | freq_req = dbs_info->freq_table[index].frequency; | 106 | freq_req = dbs_info->freq_table[index].frequency; |
111 | freq_reduc = freq_req * od_tuners.powersave_bias / 1000; | 107 | freq_reduc = freq_req * od_tuners->powersave_bias / 1000; |
112 | freq_avg = freq_req - freq_reduc; | 108 | freq_avg = freq_req - freq_reduc; |
113 | 109 | ||
114 | /* Find freq bounds for freq_avg in freq_table */ | 110 | /* Find freq bounds for freq_avg in freq_table */ |
@@ -127,7 +123,7 @@ static unsigned int powersave_bias_target(struct cpufreq_policy *policy, | |||
127 | dbs_info->freq_lo_jiffies = 0; | 123 | dbs_info->freq_lo_jiffies = 0; |
128 | return freq_lo; | 124 | return freq_lo; |
129 | } | 125 | } |
130 | jiffies_total = usecs_to_jiffies(od_tuners.sampling_rate); | 126 | jiffies_total = usecs_to_jiffies(od_tuners->sampling_rate); |
131 | jiffies_hi = (freq_avg - freq_lo) * jiffies_total; | 127 | jiffies_hi = (freq_avg - freq_lo) * jiffies_total; |
132 | jiffies_hi += ((freq_hi - freq_lo) / 2); | 128 | jiffies_hi += ((freq_hi - freq_lo) / 2); |
133 | jiffies_hi /= (freq_hi - freq_lo); | 129 | jiffies_hi /= (freq_hi - freq_lo); |
@@ -148,12 +144,16 @@ static void ondemand_powersave_bias_init(void) | |||
148 | 144 | ||
149 | static void dbs_freq_increase(struct cpufreq_policy *p, unsigned int freq) | 145 | static void dbs_freq_increase(struct cpufreq_policy *p, unsigned int freq) |
150 | { | 146 | { |
151 | if (od_tuners.powersave_bias) | 147 | struct dbs_data *dbs_data = p->governor_data; |
152 | freq = powersave_bias_target(p, freq, CPUFREQ_RELATION_H); | 148 | struct od_dbs_tuners *od_tuners = dbs_data->tuners; |
149 | |||
150 | if (od_tuners->powersave_bias) | ||
151 | freq = od_ops.powersave_bias_target(p, freq, | ||
152 | CPUFREQ_RELATION_H); | ||
153 | else if (p->cur == p->max) | 153 | else if (p->cur == p->max) |
154 | return; | 154 | return; |
155 | 155 | ||
156 | __cpufreq_driver_target(p, freq, od_tuners.powersave_bias ? | 156 | __cpufreq_driver_target(p, freq, od_tuners->powersave_bias ? |
157 | CPUFREQ_RELATION_L : CPUFREQ_RELATION_H); | 157 | CPUFREQ_RELATION_L : CPUFREQ_RELATION_H); |
158 | } | 158 | } |
159 | 159 | ||
@@ -170,15 +170,17 @@ static void od_check_cpu(int cpu, unsigned int load_freq) | |||
170 | { | 170 | { |
171 | struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, cpu); | 171 | struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, cpu); |
172 | struct cpufreq_policy *policy = dbs_info->cdbs.cur_policy; | 172 | struct cpufreq_policy *policy = dbs_info->cdbs.cur_policy; |
173 | struct dbs_data *dbs_data = policy->governor_data; | ||
174 | struct od_dbs_tuners *od_tuners = dbs_data->tuners; | ||
173 | 175 | ||
174 | dbs_info->freq_lo = 0; | 176 | dbs_info->freq_lo = 0; |
175 | 177 | ||
176 | /* Check for frequency increase */ | 178 | /* Check for frequency increase */ |
177 | if (load_freq > od_tuners.up_threshold * policy->cur) { | 179 | if (load_freq > od_tuners->up_threshold * policy->cur) { |
178 | /* If switching to max speed, apply sampling_down_factor */ | 180 | /* If switching to max speed, apply sampling_down_factor */ |
179 | if (policy->cur < policy->max) | 181 | if (policy->cur < policy->max) |
180 | dbs_info->rate_mult = | 182 | dbs_info->rate_mult = |
181 | od_tuners.sampling_down_factor; | 183 | od_tuners->sampling_down_factor; |
182 | dbs_freq_increase(policy, policy->max); | 184 | dbs_freq_increase(policy, policy->max); |
183 | return; | 185 | return; |
184 | } | 186 | } |
@@ -193,9 +195,10 @@ static void od_check_cpu(int cpu, unsigned int load_freq) | |||
193 | * support the current CPU usage without triggering the up policy. To be | 195 | * support the current CPU usage without triggering the up policy. To be |
194 | * safe, we focus 10 points under the threshold. | 196 | * safe, we focus 10 points under the threshold. |
195 | */ | 197 | */ |
196 | if (load_freq < od_tuners.adj_up_threshold * policy->cur) { | 198 | if (load_freq < od_tuners->adj_up_threshold |
199 | * policy->cur) { | ||
197 | unsigned int freq_next; | 200 | unsigned int freq_next; |
198 | freq_next = load_freq / od_tuners.adj_up_threshold; | 201 | freq_next = load_freq / od_tuners->adj_up_threshold; |
199 | 202 | ||
200 | /* No longer fully busy, reset rate_mult */ | 203 | /* No longer fully busy, reset rate_mult */ |
201 | dbs_info->rate_mult = 1; | 204 | dbs_info->rate_mult = 1; |
@@ -203,65 +206,62 @@ static void od_check_cpu(int cpu, unsigned int load_freq) | |||
203 | if (freq_next < policy->min) | 206 | if (freq_next < policy->min) |
204 | freq_next = policy->min; | 207 | freq_next = policy->min; |
205 | 208 | ||
206 | if (!od_tuners.powersave_bias) { | 209 | if (!od_tuners->powersave_bias) { |
207 | __cpufreq_driver_target(policy, freq_next, | 210 | __cpufreq_driver_target(policy, freq_next, |
208 | CPUFREQ_RELATION_L); | 211 | CPUFREQ_RELATION_L); |
209 | } else { | 212 | return; |
210 | int freq = powersave_bias_target(policy, freq_next, | ||
211 | CPUFREQ_RELATION_L); | ||
212 | __cpufreq_driver_target(policy, freq, | ||
213 | CPUFREQ_RELATION_L); | ||
214 | } | 213 | } |
214 | |||
215 | freq_next = od_ops.powersave_bias_target(policy, freq_next, | ||
216 | CPUFREQ_RELATION_L); | ||
217 | __cpufreq_driver_target(policy, freq_next, CPUFREQ_RELATION_L); | ||
215 | } | 218 | } |
216 | } | 219 | } |
217 | 220 | ||
218 | static void od_dbs_timer(struct work_struct *work) | 221 | static void od_dbs_timer(struct work_struct *work) |
219 | { | 222 | { |
220 | struct delayed_work *dw = to_delayed_work(work); | ||
221 | struct od_cpu_dbs_info_s *dbs_info = | 223 | struct od_cpu_dbs_info_s *dbs_info = |
222 | container_of(work, struct od_cpu_dbs_info_s, cdbs.work.work); | 224 | container_of(work, struct od_cpu_dbs_info_s, cdbs.work.work); |
223 | unsigned int cpu = dbs_info->cdbs.cur_policy->cpu; | 225 | unsigned int cpu = dbs_info->cdbs.cur_policy->cpu; |
224 | struct od_cpu_dbs_info_s *core_dbs_info = &per_cpu(od_cpu_dbs_info, | 226 | struct od_cpu_dbs_info_s *core_dbs_info = &per_cpu(od_cpu_dbs_info, |
225 | cpu); | 227 | cpu); |
226 | int delay, sample_type = core_dbs_info->sample_type; | 228 | struct dbs_data *dbs_data = dbs_info->cdbs.cur_policy->governor_data; |
227 | bool eval_load; | 229 | struct od_dbs_tuners *od_tuners = dbs_data->tuners; |
230 | int delay = 0, sample_type = core_dbs_info->sample_type; | ||
231 | bool modify_all = true; | ||
228 | 232 | ||
229 | mutex_lock(&core_dbs_info->cdbs.timer_mutex); | 233 | mutex_lock(&core_dbs_info->cdbs.timer_mutex); |
230 | eval_load = need_load_eval(&core_dbs_info->cdbs, | 234 | if (!need_load_eval(&core_dbs_info->cdbs, od_tuners->sampling_rate)) { |
231 | od_tuners.sampling_rate); | 235 | modify_all = false; |
236 | goto max_delay; | ||
237 | } | ||
232 | 238 | ||
233 | /* Common NORMAL_SAMPLE setup */ | 239 | /* Common NORMAL_SAMPLE setup */ |
234 | core_dbs_info->sample_type = OD_NORMAL_SAMPLE; | 240 | core_dbs_info->sample_type = OD_NORMAL_SAMPLE; |
235 | if (sample_type == OD_SUB_SAMPLE) { | 241 | if (sample_type == OD_SUB_SAMPLE) { |
236 | delay = core_dbs_info->freq_lo_jiffies; | 242 | delay = core_dbs_info->freq_lo_jiffies; |
237 | if (eval_load) | 243 | __cpufreq_driver_target(core_dbs_info->cdbs.cur_policy, |
238 | __cpufreq_driver_target(core_dbs_info->cdbs.cur_policy, | 244 | core_dbs_info->freq_lo, CPUFREQ_RELATION_H); |
239 | core_dbs_info->freq_lo, | ||
240 | CPUFREQ_RELATION_H); | ||
241 | } else { | 245 | } else { |
242 | if (eval_load) | 246 | dbs_check_cpu(dbs_data, cpu); |
243 | dbs_check_cpu(&od_dbs_data, cpu); | ||
244 | if (core_dbs_info->freq_lo) { | 247 | if (core_dbs_info->freq_lo) { |
245 | /* Setup timer for SUB_SAMPLE */ | 248 | /* Setup timer for SUB_SAMPLE */ |
246 | core_dbs_info->sample_type = OD_SUB_SAMPLE; | 249 | core_dbs_info->sample_type = OD_SUB_SAMPLE; |
247 | delay = core_dbs_info->freq_hi_jiffies; | 250 | delay = core_dbs_info->freq_hi_jiffies; |
248 | } else { | ||
249 | delay = delay_for_sampling_rate(od_tuners.sampling_rate | ||
250 | * core_dbs_info->rate_mult); | ||
251 | } | 251 | } |
252 | } | 252 | } |
253 | 253 | ||
254 | schedule_delayed_work_on(smp_processor_id(), dw, delay); | 254 | max_delay: |
255 | if (!delay) | ||
256 | delay = delay_for_sampling_rate(od_tuners->sampling_rate | ||
257 | * core_dbs_info->rate_mult); | ||
258 | |||
259 | gov_queue_work(dbs_data, dbs_info->cdbs.cur_policy, delay, modify_all); | ||
255 | mutex_unlock(&core_dbs_info->cdbs.timer_mutex); | 260 | mutex_unlock(&core_dbs_info->cdbs.timer_mutex); |
256 | } | 261 | } |
257 | 262 | ||
258 | /************************** sysfs interface ************************/ | 263 | /************************** sysfs interface ************************/ |
259 | 264 | static struct common_dbs_data od_dbs_cdata; | |
260 | static ssize_t show_sampling_rate_min(struct kobject *kobj, | ||
261 | struct attribute *attr, char *buf) | ||
262 | { | ||
263 | return sprintf(buf, "%u\n", od_dbs_data.min_sampling_rate); | ||
264 | } | ||
265 | 265 | ||
266 | /** | 266 | /** |
267 | * update_sampling_rate - update sampling rate effective immediately if needed. | 267 | * update_sampling_rate - update sampling rate effective immediately if needed. |
@@ -276,12 +276,14 @@ static ssize_t show_sampling_rate_min(struct kobject *kobj, | |||
276 | * reducing the sampling rate, we need to make the new value effective | 276 | * reducing the sampling rate, we need to make the new value effective |
277 | * immediately. | 277 | * immediately. |
278 | */ | 278 | */ |
279 | static void update_sampling_rate(unsigned int new_rate) | 279 | static void update_sampling_rate(struct dbs_data *dbs_data, |
280 | unsigned int new_rate) | ||
280 | { | 281 | { |
282 | struct od_dbs_tuners *od_tuners = dbs_data->tuners; | ||
281 | int cpu; | 283 | int cpu; |
282 | 284 | ||
283 | od_tuners.sampling_rate = new_rate = max(new_rate, | 285 | od_tuners->sampling_rate = new_rate = max(new_rate, |
284 | od_dbs_data.min_sampling_rate); | 286 | dbs_data->min_sampling_rate); |
285 | 287 | ||
286 | for_each_online_cpu(cpu) { | 288 | for_each_online_cpu(cpu) { |
287 | struct cpufreq_policy *policy; | 289 | struct cpufreq_policy *policy; |
@@ -314,42 +316,54 @@ static void update_sampling_rate(unsigned int new_rate) | |||
314 | cancel_delayed_work_sync(&dbs_info->cdbs.work); | 316 | cancel_delayed_work_sync(&dbs_info->cdbs.work); |
315 | mutex_lock(&dbs_info->cdbs.timer_mutex); | 317 | mutex_lock(&dbs_info->cdbs.timer_mutex); |
316 | 318 | ||
317 | schedule_delayed_work_on(cpu, &dbs_info->cdbs.work, | 319 | gov_queue_work(dbs_data, dbs_info->cdbs.cur_policy, |
318 | usecs_to_jiffies(new_rate)); | 320 | usecs_to_jiffies(new_rate), true); |
319 | 321 | ||
320 | } | 322 | } |
321 | mutex_unlock(&dbs_info->cdbs.timer_mutex); | 323 | mutex_unlock(&dbs_info->cdbs.timer_mutex); |
322 | } | 324 | } |
323 | } | 325 | } |
324 | 326 | ||
325 | static ssize_t store_sampling_rate(struct kobject *a, struct attribute *b, | 327 | static ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf, |
326 | const char *buf, size_t count) | 328 | size_t count) |
327 | { | 329 | { |
328 | unsigned int input; | 330 | unsigned int input; |
329 | int ret; | 331 | int ret; |
330 | ret = sscanf(buf, "%u", &input); | 332 | ret = sscanf(buf, "%u", &input); |
331 | if (ret != 1) | 333 | if (ret != 1) |
332 | return -EINVAL; | 334 | return -EINVAL; |
333 | update_sampling_rate(input); | 335 | |
336 | update_sampling_rate(dbs_data, input); | ||
334 | return count; | 337 | return count; |
335 | } | 338 | } |
336 | 339 | ||
337 | static ssize_t store_io_is_busy(struct kobject *a, struct attribute *b, | 340 | static ssize_t store_io_is_busy(struct dbs_data *dbs_data, const char *buf, |
338 | const char *buf, size_t count) | 341 | size_t count) |
339 | { | 342 | { |
343 | struct od_dbs_tuners *od_tuners = dbs_data->tuners; | ||
340 | unsigned int input; | 344 | unsigned int input; |
341 | int ret; | 345 | int ret; |
346 | unsigned int j; | ||
342 | 347 | ||
343 | ret = sscanf(buf, "%u", &input); | 348 | ret = sscanf(buf, "%u", &input); |
344 | if (ret != 1) | 349 | if (ret != 1) |
345 | return -EINVAL; | 350 | return -EINVAL; |
346 | od_tuners.io_is_busy = !!input; | 351 | od_tuners->io_is_busy = !!input; |
352 | |||
353 | /* we need to re-evaluate prev_cpu_idle */ | ||
354 | for_each_online_cpu(j) { | ||
355 | struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, | ||
356 | j); | ||
357 | dbs_info->cdbs.prev_cpu_idle = get_cpu_idle_time(j, | ||
358 | &dbs_info->cdbs.prev_cpu_wall, od_tuners->io_is_busy); | ||
359 | } | ||
347 | return count; | 360 | return count; |
348 | } | 361 | } |
349 | 362 | ||
350 | static ssize_t store_up_threshold(struct kobject *a, struct attribute *b, | 363 | static ssize_t store_up_threshold(struct dbs_data *dbs_data, const char *buf, |
351 | const char *buf, size_t count) | 364 | size_t count) |
352 | { | 365 | { |
366 | struct od_dbs_tuners *od_tuners = dbs_data->tuners; | ||
353 | unsigned int input; | 367 | unsigned int input; |
354 | int ret; | 368 | int ret; |
355 | ret = sscanf(buf, "%u", &input); | 369 | ret = sscanf(buf, "%u", &input); |
@@ -359,23 +373,24 @@ static ssize_t store_up_threshold(struct kobject *a, struct attribute *b, | |||
359 | return -EINVAL; | 373 | return -EINVAL; |
360 | } | 374 | } |
361 | /* Calculate the new adj_up_threshold */ | 375 | /* Calculate the new adj_up_threshold */ |
362 | od_tuners.adj_up_threshold += input; | 376 | od_tuners->adj_up_threshold += input; |
363 | od_tuners.adj_up_threshold -= od_tuners.up_threshold; | 377 | od_tuners->adj_up_threshold -= od_tuners->up_threshold; |
364 | 378 | ||
365 | od_tuners.up_threshold = input; | 379 | od_tuners->up_threshold = input; |
366 | return count; | 380 | return count; |
367 | } | 381 | } |
368 | 382 | ||
369 | static ssize_t store_sampling_down_factor(struct kobject *a, | 383 | static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data, |
370 | struct attribute *b, const char *buf, size_t count) | 384 | const char *buf, size_t count) |
371 | { | 385 | { |
386 | struct od_dbs_tuners *od_tuners = dbs_data->tuners; | ||
372 | unsigned int input, j; | 387 | unsigned int input, j; |
373 | int ret; | 388 | int ret; |
374 | ret = sscanf(buf, "%u", &input); | 389 | ret = sscanf(buf, "%u", &input); |
375 | 390 | ||
376 | if (ret != 1 || input > MAX_SAMPLING_DOWN_FACTOR || input < 1) | 391 | if (ret != 1 || input > MAX_SAMPLING_DOWN_FACTOR || input < 1) |
377 | return -EINVAL; | 392 | return -EINVAL; |
378 | od_tuners.sampling_down_factor = input; | 393 | od_tuners->sampling_down_factor = input; |
379 | 394 | ||
380 | /* Reset down sampling multiplier in case it was active */ | 395 | /* Reset down sampling multiplier in case it was active */ |
381 | for_each_online_cpu(j) { | 396 | for_each_online_cpu(j) { |
@@ -386,9 +401,10 @@ static ssize_t store_sampling_down_factor(struct kobject *a, | |||
386 | return count; | 401 | return count; |
387 | } | 402 | } |
388 | 403 | ||
389 | static ssize_t store_ignore_nice_load(struct kobject *a, struct attribute *b, | 404 | static ssize_t store_ignore_nice(struct dbs_data *dbs_data, const char *buf, |
390 | const char *buf, size_t count) | 405 | size_t count) |
391 | { | 406 | { |
407 | struct od_dbs_tuners *od_tuners = dbs_data->tuners; | ||
392 | unsigned int input; | 408 | unsigned int input; |
393 | int ret; | 409 | int ret; |
394 | 410 | ||
@@ -401,18 +417,18 @@ static ssize_t store_ignore_nice_load(struct kobject *a, struct attribute *b, | |||
401 | if (input > 1) | 417 | if (input > 1) |
402 | input = 1; | 418 | input = 1; |
403 | 419 | ||
404 | if (input == od_tuners.ignore_nice) { /* nothing to do */ | 420 | if (input == od_tuners->ignore_nice) { /* nothing to do */ |
405 | return count; | 421 | return count; |
406 | } | 422 | } |
407 | od_tuners.ignore_nice = input; | 423 | od_tuners->ignore_nice = input; |
408 | 424 | ||
409 | /* we need to re-evaluate prev_cpu_idle */ | 425 | /* we need to re-evaluate prev_cpu_idle */ |
410 | for_each_online_cpu(j) { | 426 | for_each_online_cpu(j) { |
411 | struct od_cpu_dbs_info_s *dbs_info; | 427 | struct od_cpu_dbs_info_s *dbs_info; |
412 | dbs_info = &per_cpu(od_cpu_dbs_info, j); | 428 | dbs_info = &per_cpu(od_cpu_dbs_info, j); |
413 | dbs_info->cdbs.prev_cpu_idle = get_cpu_idle_time(j, | 429 | dbs_info->cdbs.prev_cpu_idle = get_cpu_idle_time(j, |
414 | &dbs_info->cdbs.prev_cpu_wall); | 430 | &dbs_info->cdbs.prev_cpu_wall, od_tuners->io_is_busy); |
415 | if (od_tuners.ignore_nice) | 431 | if (od_tuners->ignore_nice) |
416 | dbs_info->cdbs.prev_cpu_nice = | 432 | dbs_info->cdbs.prev_cpu_nice = |
417 | kcpustat_cpu(j).cpustat[CPUTIME_NICE]; | 433 | kcpustat_cpu(j).cpustat[CPUTIME_NICE]; |
418 | 434 | ||
@@ -420,9 +436,10 @@ static ssize_t store_ignore_nice_load(struct kobject *a, struct attribute *b, | |||
420 | return count; | 436 | return count; |
421 | } | 437 | } |
422 | 438 | ||
423 | static ssize_t store_powersave_bias(struct kobject *a, struct attribute *b, | 439 | static ssize_t store_powersave_bias(struct dbs_data *dbs_data, const char *buf, |
424 | const char *buf, size_t count) | 440 | size_t count) |
425 | { | 441 | { |
442 | struct od_dbs_tuners *od_tuners = dbs_data->tuners; | ||
426 | unsigned int input; | 443 | unsigned int input; |
427 | int ret; | 444 | int ret; |
428 | ret = sscanf(buf, "%u", &input); | 445 | ret = sscanf(buf, "%u", &input); |
@@ -433,68 +450,179 @@ static ssize_t store_powersave_bias(struct kobject *a, struct attribute *b, | |||
433 | if (input > 1000) | 450 | if (input > 1000) |
434 | input = 1000; | 451 | input = 1000; |
435 | 452 | ||
436 | od_tuners.powersave_bias = input; | 453 | od_tuners->powersave_bias = input; |
437 | ondemand_powersave_bias_init(); | 454 | ondemand_powersave_bias_init(); |
438 | return count; | 455 | return count; |
439 | } | 456 | } |
440 | 457 | ||
441 | show_one(od, sampling_rate, sampling_rate); | 458 | show_store_one(od, sampling_rate); |
442 | show_one(od, io_is_busy, io_is_busy); | 459 | show_store_one(od, io_is_busy); |
443 | show_one(od, up_threshold, up_threshold); | 460 | show_store_one(od, up_threshold); |
444 | show_one(od, sampling_down_factor, sampling_down_factor); | 461 | show_store_one(od, sampling_down_factor); |
445 | show_one(od, ignore_nice_load, ignore_nice); | 462 | show_store_one(od, ignore_nice); |
446 | show_one(od, powersave_bias, powersave_bias); | 463 | show_store_one(od, powersave_bias); |
447 | 464 | declare_show_sampling_rate_min(od); | |
448 | define_one_global_rw(sampling_rate); | 465 | |
449 | define_one_global_rw(io_is_busy); | 466 | gov_sys_pol_attr_rw(sampling_rate); |
450 | define_one_global_rw(up_threshold); | 467 | gov_sys_pol_attr_rw(io_is_busy); |
451 | define_one_global_rw(sampling_down_factor); | 468 | gov_sys_pol_attr_rw(up_threshold); |
452 | define_one_global_rw(ignore_nice_load); | 469 | gov_sys_pol_attr_rw(sampling_down_factor); |
453 | define_one_global_rw(powersave_bias); | 470 | gov_sys_pol_attr_rw(ignore_nice); |
454 | define_one_global_ro(sampling_rate_min); | 471 | gov_sys_pol_attr_rw(powersave_bias); |
455 | 472 | gov_sys_pol_attr_ro(sampling_rate_min); | |
456 | static struct attribute *dbs_attributes[] = { | 473 | |
457 | &sampling_rate_min.attr, | 474 | static struct attribute *dbs_attributes_gov_sys[] = { |
458 | &sampling_rate.attr, | 475 | &sampling_rate_min_gov_sys.attr, |
459 | &up_threshold.attr, | 476 | &sampling_rate_gov_sys.attr, |
460 | &sampling_down_factor.attr, | 477 | &up_threshold_gov_sys.attr, |
461 | &ignore_nice_load.attr, | 478 | &sampling_down_factor_gov_sys.attr, |
462 | &powersave_bias.attr, | 479 | &ignore_nice_gov_sys.attr, |
463 | &io_is_busy.attr, | 480 | &powersave_bias_gov_sys.attr, |
481 | &io_is_busy_gov_sys.attr, | ||
482 | NULL | ||
483 | }; | ||
484 | |||
485 | static struct attribute_group od_attr_group_gov_sys = { | ||
486 | .attrs = dbs_attributes_gov_sys, | ||
487 | .name = "ondemand", | ||
488 | }; | ||
489 | |||
490 | static struct attribute *dbs_attributes_gov_pol[] = { | ||
491 | &sampling_rate_min_gov_pol.attr, | ||
492 | &sampling_rate_gov_pol.attr, | ||
493 | &up_threshold_gov_pol.attr, | ||
494 | &sampling_down_factor_gov_pol.attr, | ||
495 | &ignore_nice_gov_pol.attr, | ||
496 | &powersave_bias_gov_pol.attr, | ||
497 | &io_is_busy_gov_pol.attr, | ||
464 | NULL | 498 | NULL |
465 | }; | 499 | }; |
466 | 500 | ||
467 | static struct attribute_group od_attr_group = { | 501 | static struct attribute_group od_attr_group_gov_pol = { |
468 | .attrs = dbs_attributes, | 502 | .attrs = dbs_attributes_gov_pol, |
469 | .name = "ondemand", | 503 | .name = "ondemand", |
470 | }; | 504 | }; |
471 | 505 | ||
472 | /************************** sysfs end ************************/ | 506 | /************************** sysfs end ************************/ |
473 | 507 | ||
508 | static int od_init(struct dbs_data *dbs_data) | ||
509 | { | ||
510 | struct od_dbs_tuners *tuners; | ||
511 | u64 idle_time; | ||
512 | int cpu; | ||
513 | |||
514 | tuners = kzalloc(sizeof(struct od_dbs_tuners), GFP_KERNEL); | ||
515 | if (!tuners) { | ||
516 | pr_err("%s: kzalloc failed\n", __func__); | ||
517 | return -ENOMEM; | ||
518 | } | ||
519 | |||
520 | cpu = get_cpu(); | ||
521 | idle_time = get_cpu_idle_time_us(cpu, NULL); | ||
522 | put_cpu(); | ||
523 | if (idle_time != -1ULL) { | ||
524 | /* Idle micro accounting is supported. Use finer thresholds */ | ||
525 | tuners->up_threshold = MICRO_FREQUENCY_UP_THRESHOLD; | ||
526 | tuners->adj_up_threshold = MICRO_FREQUENCY_UP_THRESHOLD - | ||
527 | MICRO_FREQUENCY_DOWN_DIFFERENTIAL; | ||
528 | /* | ||
529 | * In nohz/micro accounting case we set the minimum frequency | ||
530 | * not depending on HZ, but fixed (very low). The deferred | ||
531 | * timer might skip some samples if idle/sleeping as needed. | ||
532 | */ | ||
533 | dbs_data->min_sampling_rate = MICRO_FREQUENCY_MIN_SAMPLE_RATE; | ||
534 | } else { | ||
535 | tuners->up_threshold = DEF_FREQUENCY_UP_THRESHOLD; | ||
536 | tuners->adj_up_threshold = DEF_FREQUENCY_UP_THRESHOLD - | ||
537 | DEF_FREQUENCY_DOWN_DIFFERENTIAL; | ||
538 | |||
539 | /* For correct statistics, we need 10 ticks for each measure */ | ||
540 | dbs_data->min_sampling_rate = MIN_SAMPLING_RATE_RATIO * | ||
541 | jiffies_to_usecs(10); | ||
542 | } | ||
543 | |||
544 | tuners->sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR; | ||
545 | tuners->ignore_nice = 0; | ||
546 | tuners->powersave_bias = 0; | ||
547 | tuners->io_is_busy = should_io_be_busy(); | ||
548 | |||
549 | dbs_data->tuners = tuners; | ||
550 | pr_info("%s: tuners %p\n", __func__, tuners); | ||
551 | mutex_init(&dbs_data->mutex); | ||
552 | return 0; | ||
553 | } | ||
554 | |||
555 | static void od_exit(struct dbs_data *dbs_data) | ||
556 | { | ||
557 | kfree(dbs_data->tuners); | ||
558 | } | ||
559 | |||
474 | define_get_cpu_dbs_routines(od_cpu_dbs_info); | 560 | define_get_cpu_dbs_routines(od_cpu_dbs_info); |
475 | 561 | ||
476 | static struct od_ops od_ops = { | 562 | static struct od_ops od_ops = { |
477 | .io_busy = should_io_be_busy, | ||
478 | .powersave_bias_init_cpu = ondemand_powersave_bias_init_cpu, | 563 | .powersave_bias_init_cpu = ondemand_powersave_bias_init_cpu, |
479 | .powersave_bias_target = powersave_bias_target, | 564 | .powersave_bias_target = generic_powersave_bias_target, |
480 | .freq_increase = dbs_freq_increase, | 565 | .freq_increase = dbs_freq_increase, |
481 | }; | 566 | }; |
482 | 567 | ||
483 | static struct dbs_data od_dbs_data = { | 568 | static struct common_dbs_data od_dbs_cdata = { |
484 | .governor = GOV_ONDEMAND, | 569 | .governor = GOV_ONDEMAND, |
485 | .attr_group = &od_attr_group, | 570 | .attr_group_gov_sys = &od_attr_group_gov_sys, |
486 | .tuners = &od_tuners, | 571 | .attr_group_gov_pol = &od_attr_group_gov_pol, |
487 | .get_cpu_cdbs = get_cpu_cdbs, | 572 | .get_cpu_cdbs = get_cpu_cdbs, |
488 | .get_cpu_dbs_info_s = get_cpu_dbs_info_s, | 573 | .get_cpu_dbs_info_s = get_cpu_dbs_info_s, |
489 | .gov_dbs_timer = od_dbs_timer, | 574 | .gov_dbs_timer = od_dbs_timer, |
490 | .gov_check_cpu = od_check_cpu, | 575 | .gov_check_cpu = od_check_cpu, |
491 | .gov_ops = &od_ops, | 576 | .gov_ops = &od_ops, |
577 | .init = od_init, | ||
578 | .exit = od_exit, | ||
492 | }; | 579 | }; |
493 | 580 | ||
581 | static void od_set_powersave_bias(unsigned int powersave_bias) | ||
582 | { | ||
583 | struct cpufreq_policy *policy; | ||
584 | struct dbs_data *dbs_data; | ||
585 | struct od_dbs_tuners *od_tuners; | ||
586 | unsigned int cpu; | ||
587 | cpumask_t done; | ||
588 | |||
589 | cpumask_clear(&done); | ||
590 | |||
591 | get_online_cpus(); | ||
592 | for_each_online_cpu(cpu) { | ||
593 | if (cpumask_test_cpu(cpu, &done)) | ||
594 | continue; | ||
595 | |||
596 | policy = per_cpu(od_cpu_dbs_info, cpu).cdbs.cur_policy; | ||
597 | dbs_data = policy->governor_data; | ||
598 | od_tuners = dbs_data->tuners; | ||
599 | od_tuners->powersave_bias = powersave_bias; | ||
600 | |||
601 | cpumask_or(&done, &done, policy->cpus); | ||
602 | } | ||
603 | put_online_cpus(); | ||
604 | } | ||
605 | |||
606 | void od_register_powersave_bias_handler(unsigned int (*f) | ||
607 | (struct cpufreq_policy *, unsigned int, unsigned int), | ||
608 | unsigned int powersave_bias) | ||
609 | { | ||
610 | od_ops.powersave_bias_target = f; | ||
611 | od_set_powersave_bias(powersave_bias); | ||
612 | } | ||
613 | EXPORT_SYMBOL_GPL(od_register_powersave_bias_handler); | ||
614 | |||
615 | void od_unregister_powersave_bias_handler(void) | ||
616 | { | ||
617 | od_ops.powersave_bias_target = generic_powersave_bias_target; | ||
618 | od_set_powersave_bias(0); | ||
619 | } | ||
620 | EXPORT_SYMBOL_GPL(od_unregister_powersave_bias_handler); | ||
621 | |||
494 | static int od_cpufreq_governor_dbs(struct cpufreq_policy *policy, | 622 | static int od_cpufreq_governor_dbs(struct cpufreq_policy *policy, |
495 | unsigned int event) | 623 | unsigned int event) |
496 | { | 624 | { |
497 | return cpufreq_governor_dbs(&od_dbs_data, policy, event); | 625 | return cpufreq_governor_dbs(policy, &od_dbs_cdata, event); |
498 | } | 626 | } |
499 | 627 | ||
500 | #ifndef CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND | 628 | #ifndef CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND |
@@ -509,29 +637,6 @@ struct cpufreq_governor cpufreq_gov_ondemand = { | |||
509 | 637 | ||
510 | static int __init cpufreq_gov_dbs_init(void) | 638 | static int __init cpufreq_gov_dbs_init(void) |
511 | { | 639 | { |
512 | u64 idle_time; | ||
513 | int cpu = get_cpu(); | ||
514 | |||
515 | mutex_init(&od_dbs_data.mutex); | ||
516 | idle_time = get_cpu_idle_time_us(cpu, NULL); | ||
517 | put_cpu(); | ||
518 | if (idle_time != -1ULL) { | ||
519 | /* Idle micro accounting is supported. Use finer thresholds */ | ||
520 | od_tuners.up_threshold = MICRO_FREQUENCY_UP_THRESHOLD; | ||
521 | od_tuners.adj_up_threshold = MICRO_FREQUENCY_UP_THRESHOLD - | ||
522 | MICRO_FREQUENCY_DOWN_DIFFERENTIAL; | ||
523 | /* | ||
524 | * In nohz/micro accounting case we set the minimum frequency | ||
525 | * not depending on HZ, but fixed (very low). The deferred | ||
526 | * timer might skip some samples if idle/sleeping as needed. | ||
527 | */ | ||
528 | od_dbs_data.min_sampling_rate = MICRO_FREQUENCY_MIN_SAMPLE_RATE; | ||
529 | } else { | ||
530 | /* For correct statistics, we need 10 ticks for each measure */ | ||
531 | od_dbs_data.min_sampling_rate = MIN_SAMPLING_RATE_RATIO * | ||
532 | jiffies_to_usecs(10); | ||
533 | } | ||
534 | |||
535 | return cpufreq_register_governor(&cpufreq_gov_ondemand); | 640 | return cpufreq_register_governor(&cpufreq_gov_ondemand); |
536 | } | 641 | } |
537 | 642 | ||
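The arithmetic behind the powersave_bias split in the ondemand hunks above is easiest to check outside the kernel. The sketch below is a minimal userspace model of that split, assuming kHz frequencies and a caller-supplied period in jiffies; the struct and function names are local to the example and only the rounding math mirrors generic_powersave_bias_target().

/*
 * Minimal userspace sketch of the powersave_bias averaging shown above.
 * All names here are local to the example; only the arithmetic mirrors
 * the kernel code.
 */
#include <stdio.h>

struct bias_split {
	unsigned int jiffies_hi;
	unsigned int jiffies_lo;
};

/* Split one sampling period between freq_hi and freq_lo so that the
 * time-weighted average frequency comes out at freq_avg. */
static struct bias_split split_sample(unsigned int freq_lo,
				      unsigned int freq_hi,
				      unsigned int freq_avg,
				      unsigned int jiffies_total)
{
	struct bias_split s;

	s.jiffies_hi = (freq_avg - freq_lo) * jiffies_total;
	s.jiffies_hi += (freq_hi - freq_lo) / 2;	/* round to nearest */
	s.jiffies_hi /= (freq_hi - freq_lo);
	s.jiffies_lo = jiffies_total - s.jiffies_hi;
	return s;
}

int main(void)
{
	/* e.g. a 900 MHz bias target between the 800 MHz and 1000 MHz
	 * table entries, over a 10-jiffy sampling period */
	struct bias_split s = split_sample(800000, 1000000, 900000, 10);

	printf("hi=%u lo=%u\n", s.jiffies_hi, s.jiffies_lo);
	return 0;
}

Compiled with any C compiler, the example prints hi=5 lo=5, i.e. a 900 MHz average is met by spending half the period on each of the two neighbouring table entries.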
diff --git a/arch/cris/arch-v32/mach-a3/cpufreq.c b/drivers/cpufreq/cris-artpec3-cpufreq.c index ee391ecb5bc9..ee142c490575 100644 --- a/arch/cris/arch-v32/mach-a3/cpufreq.c +++ b/drivers/cpufreq/cris-artpec3-cpufreq.c | |||
@@ -27,23 +27,17 @@ static unsigned int cris_freq_get_cpu_frequency(unsigned int cpu) | |||
27 | return clk_ctrl.pll ? 200000 : 6000; | 27 | return clk_ctrl.pll ? 200000 : 6000; |
28 | } | 28 | } |
29 | 29 | ||
30 | static void cris_freq_set_cpu_state(unsigned int state) | 30 | static void cris_freq_set_cpu_state(struct cpufreq_policy *policy, |
31 | unsigned int state) | ||
31 | { | 32 | { |
32 | int i = 0; | ||
33 | struct cpufreq_freqs freqs; | 33 | struct cpufreq_freqs freqs; |
34 | reg_clkgen_rw_clk_ctrl clk_ctrl; | 34 | reg_clkgen_rw_clk_ctrl clk_ctrl; |
35 | clk_ctrl = REG_RD(clkgen, regi_clkgen, rw_clk_ctrl); | 35 | clk_ctrl = REG_RD(clkgen, regi_clkgen, rw_clk_ctrl); |
36 | 36 | ||
37 | #ifdef CONFIG_SMP | 37 | freqs.old = cris_freq_get_cpu_frequency(policy->cpu); |
38 | for_each_present_cpu(i) | 38 | freqs.new = cris_freq_table[state].frequency; |
39 | #endif | ||
40 | { | ||
41 | freqs.old = cris_freq_get_cpu_frequency(i); | ||
42 | freqs.new = cris_freq_table[state].frequency; | ||
43 | freqs.cpu = i; | ||
44 | } | ||
45 | 39 | ||
46 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 40 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
47 | 41 | ||
48 | local_irq_disable(); | 42 | local_irq_disable(); |
49 | 43 | ||
@@ -57,7 +51,7 @@ static void cris_freq_set_cpu_state(unsigned int state) | |||
57 | 51 | ||
58 | local_irq_enable(); | 52 | local_irq_enable(); |
59 | 53 | ||
60 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 54 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
61 | }; | 55 | }; |
62 | 56 | ||
63 | static int cris_freq_verify(struct cpufreq_policy *policy) | 57 | static int cris_freq_verify(struct cpufreq_policy *policy) |
@@ -75,7 +69,7 @@ static int cris_freq_target(struct cpufreq_policy *policy, | |||
75 | target_freq, relation, &newstate)) | 69 | target_freq, relation, &newstate)) |
76 | return -EINVAL; | 70 | return -EINVAL; |
77 | 71 | ||
78 | cris_freq_set_cpu_state(newstate); | 72 | cris_freq_set_cpu_state(policy, newstate); |
79 | 73 | ||
80 | return 0; | 74 | return 0; |
81 | } | 75 | } |
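The cris-artpec3 hunk above follows the same pattern as the other converted drivers: the frequency-change path is bracketed by a pre-change and a post-change notification, and the policy pointer rather than a loose CPU number is what travels with the call. A toy userspace model of that ordering, with stand-in types and a stub notify() that are not the kernel API, might look like:

#include <stdio.h>

struct policy { unsigned int cpu; };
struct freqs { unsigned int old, new_; };	/* kHz */

enum phase { PRECHANGE, POSTCHANGE };

static void notify(struct policy *p, struct freqs *f, enum phase ph)
{
	printf("cpu%u: %s %u -> %u kHz\n", p->cpu,
	       ph == PRECHANGE ? "pre " : "post", f->old, f->new_);
}

static void set_cpu_state(struct policy *p, unsigned int old_khz,
			  unsigned int new_khz)
{
	struct freqs f = { .old = old_khz, .new_ = new_khz };

	notify(p, &f, PRECHANGE);
	/* ... reprogram the PLL / clock divider here ... */
	notify(p, &f, POSTCHANGE);
}

int main(void)
{
	struct policy pol = { .cpu = 0 };

	set_cpu_state(&pol, 6000, 200000);
	return 0;
}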
diff --git a/arch/cris/arch-v32/mach-fs/cpufreq.c b/drivers/cpufreq/cris-etraxfs-cpufreq.c index d92cf70d1cbe..12952235d5db 100644 --- a/arch/cris/arch-v32/mach-fs/cpufreq.c +++ b/drivers/cpufreq/cris-etraxfs-cpufreq.c | |||
@@ -27,20 +27,17 @@ static unsigned int cris_freq_get_cpu_frequency(unsigned int cpu) | |||
27 | return clk_ctrl.pll ? 200000 : 6000; | 27 | return clk_ctrl.pll ? 200000 : 6000; |
28 | } | 28 | } |
29 | 29 | ||
30 | static void cris_freq_set_cpu_state(unsigned int state) | 30 | static void cris_freq_set_cpu_state(struct cpufreq_policy *policy, |
31 | unsigned int state) | ||
31 | { | 32 | { |
32 | int i; | ||
33 | struct cpufreq_freqs freqs; | 33 | struct cpufreq_freqs freqs; |
34 | reg_config_rw_clk_ctrl clk_ctrl; | 34 | reg_config_rw_clk_ctrl clk_ctrl; |
35 | clk_ctrl = REG_RD(config, regi_config, rw_clk_ctrl); | 35 | clk_ctrl = REG_RD(config, regi_config, rw_clk_ctrl); |
36 | 36 | ||
37 | for_each_possible_cpu(i) { | 37 | freqs.old = cris_freq_get_cpu_frequency(policy->cpu); |
38 | freqs.old = cris_freq_get_cpu_frequency(i); | 38 | freqs.new = cris_freq_table[state].frequency; |
39 | freqs.new = cris_freq_table[state].frequency; | ||
40 | freqs.cpu = i; | ||
41 | } | ||
42 | 39 | ||
43 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 40 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
44 | 41 | ||
45 | local_irq_disable(); | 42 | local_irq_disable(); |
46 | 43 | ||
@@ -54,7 +51,7 @@ static void cris_freq_set_cpu_state(unsigned int state) | |||
54 | 51 | ||
55 | local_irq_enable(); | 52 | local_irq_enable(); |
56 | 53 | ||
57 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 54 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
58 | }; | 55 | }; |
59 | 56 | ||
60 | static int cris_freq_verify(struct cpufreq_policy *policy) | 57 | static int cris_freq_verify(struct cpufreq_policy *policy) |
@@ -71,7 +68,7 @@ static int cris_freq_target(struct cpufreq_policy *policy, | |||
71 | (policy, cris_freq_table, target_freq, relation, &newstate)) | 68 | (policy, cris_freq_table, target_freq, relation, &newstate)) |
72 | return -EINVAL; | 69 | return -EINVAL; |
73 | 70 | ||
74 | cris_freq_set_cpu_state(newstate); | 71 | cris_freq_set_cpu_state(policy, newstate); |
75 | 72 | ||
76 | return 0; | 73 | return 0; |
77 | } | 74 | } |
diff --git a/arch/arm/mach-davinci/cpufreq.c b/drivers/cpufreq/davinci-cpufreq.c index 4729eaab0f40..c33c76c360fa 100644 --- a/arch/arm/mach-davinci/cpufreq.c +++ b/drivers/cpufreq/davinci-cpufreq.c | |||
@@ -30,8 +30,6 @@ | |||
30 | #include <mach/cpufreq.h> | 30 | #include <mach/cpufreq.h> |
31 | #include <mach/common.h> | 31 | #include <mach/common.h> |
32 | 32 | ||
33 | #include "clock.h" | ||
34 | |||
35 | struct davinci_cpufreq { | 33 | struct davinci_cpufreq { |
36 | struct device *dev; | 34 | struct device *dev; |
37 | struct clk *armclk; | 35 | struct clk *armclk; |
@@ -79,18 +77,8 @@ static int davinci_target(struct cpufreq_policy *policy, | |||
79 | struct davinci_cpufreq_config *pdata = cpufreq.dev->platform_data; | 77 | struct davinci_cpufreq_config *pdata = cpufreq.dev->platform_data; |
80 | struct clk *armclk = cpufreq.armclk; | 78 | struct clk *armclk = cpufreq.armclk; |
81 | 79 | ||
82 | /* | ||
83 | * Ensure desired rate is within allowed range. Some governors ||
84 | * (ondemand) will just pass target_freq=0 to get the minimum. | ||
85 | */ | ||
86 | if (target_freq < policy->cpuinfo.min_freq) | ||
87 | target_freq = policy->cpuinfo.min_freq; | ||
88 | if (target_freq > policy->cpuinfo.max_freq) | ||
89 | target_freq = policy->cpuinfo.max_freq; | ||
90 | |||
91 | freqs.old = davinci_getspeed(0); | 80 | freqs.old = davinci_getspeed(0); |
92 | freqs.new = clk_round_rate(armclk, target_freq * 1000) / 1000; | 81 | freqs.new = clk_round_rate(armclk, target_freq * 1000) / 1000; |
93 | freqs.cpu = 0; | ||
94 | 82 | ||
95 | if (freqs.old == freqs.new) | 83 | if (freqs.old == freqs.new) |
96 | return ret; | 84 | return ret; |
@@ -102,7 +90,7 @@ static int davinci_target(struct cpufreq_policy *policy, | |||
102 | if (ret) | 90 | if (ret) |
103 | return -EINVAL; | 91 | return -EINVAL; |
104 | 92 | ||
105 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 93 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
106 | 94 | ||
107 | /* if moving to higher frequency, up the voltage beforehand */ | 95 | /* if moving to higher frequency, up the voltage beforehand */ |
108 | if (pdata->set_voltage && freqs.new > freqs.old) { | 96 | if (pdata->set_voltage && freqs.new > freqs.old) { |
@@ -126,7 +114,7 @@ static int davinci_target(struct cpufreq_policy *policy, | |||
126 | pdata->set_voltage(idx); | 114 | pdata->set_voltage(idx); |
127 | 115 | ||
128 | out: | 116 | out: |
129 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 117 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
130 | 118 | ||
131 | return ret; | 119 | return ret; |
132 | } | 120 | } |
@@ -147,21 +135,16 @@ static int davinci_cpu_init(struct cpufreq_policy *policy) | |||
147 | return result; | 135 | return result; |
148 | } | 136 | } |
149 | 137 | ||
150 | policy->cur = policy->min = policy->max = davinci_getspeed(0); | 138 | policy->cur = davinci_getspeed(0); |
151 | 139 | ||
152 | if (freq_table) { | 140 | result = cpufreq_frequency_table_cpuinfo(policy, freq_table); |
153 | result = cpufreq_frequency_table_cpuinfo(policy, freq_table); | 141 | if (result) { |
154 | if (!result) | 142 | pr_err("%s: cpufreq_frequency_table_cpuinfo() failed", |
155 | cpufreq_frequency_table_get_attr(freq_table, | 143 | __func__); |
156 | policy->cpu); | 144 | return result; |
157 | } else { | ||
158 | policy->cpuinfo.min_freq = policy->min; | ||
159 | policy->cpuinfo.max_freq = policy->max; | ||
160 | } | 145 | } |
161 | 146 | ||
162 | policy->min = policy->cpuinfo.min_freq; | 147 | cpufreq_frequency_table_get_attr(freq_table, policy->cpu); |
163 | policy->max = policy->cpuinfo.max_freq; | ||
164 | policy->cur = davinci_getspeed(0); | ||
165 | 148 | ||
166 | /* | 149 | /* |
167 | * Time measurement across the target() function yields ~1500-1800us | 150 | * Time measurement across the target() function yields ~1500-1800us |
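The davinci init hunk above drops its hand-rolled cpuinfo min/max bookkeeping and leans on the frequency-table helper instead. As a rough illustration of the bound derivation that a single table scan can provide (the sentinel values and struct layout below are illustrative stand-ins, not the kernel definitions):

#include <stdio.h>

#define ENTRY_INVALID ~1u
#define TABLE_END     ~0u

struct freq_entry { unsigned int frequency; };	/* kHz */

static void table_bounds(const struct freq_entry *tbl,
			 unsigned int *min, unsigned int *max)
{
	unsigned int i;

	*min = ~0u;
	*max = 0;
	for (i = 0; tbl[i].frequency != TABLE_END; i++) {
		if (tbl[i].frequency == ENTRY_INVALID)
			continue;
		if (tbl[i].frequency < *min)
			*min = tbl[i].frequency;
		if (tbl[i].frequency > *max)
			*max = tbl[i].frequency;
	}
}

int main(void)
{
	const struct freq_entry tbl[] = {
		{ 300000 }, { ENTRY_INVALID }, { 600000 }, { TABLE_END },
	};
	unsigned int lo, hi;

	table_bounds(tbl, &lo, &hi);
	printf("min=%u max=%u\n", lo, hi);	/* min=300000 max=600000 */
	return 0;
}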
diff --git a/drivers/cpufreq/dbx500-cpufreq.c b/drivers/cpufreq/dbx500-cpufreq.c index 72f0c3efa76e..6ec6539ae041 100644 --- a/drivers/cpufreq/dbx500-cpufreq.c +++ b/drivers/cpufreq/dbx500-cpufreq.c | |||
@@ -37,12 +37,6 @@ static int dbx500_cpufreq_target(struct cpufreq_policy *policy, | |||
37 | unsigned int idx; | 37 | unsigned int idx; |
38 | int ret; | 38 | int ret; |
39 | 39 | ||
40 | /* scale the target frequency to one of the extremes supported */ | ||
41 | if (target_freq < policy->cpuinfo.min_freq) | ||
42 | target_freq = policy->cpuinfo.min_freq; | ||
43 | if (target_freq > policy->cpuinfo.max_freq) | ||
44 | target_freq = policy->cpuinfo.max_freq; | ||
45 | |||
46 | /* Lookup the next frequency */ | 40 | /* Lookup the next frequency */ |
47 | if (cpufreq_frequency_table_target(policy, freq_table, target_freq, | 41 | if (cpufreq_frequency_table_target(policy, freq_table, target_freq, |
48 | relation, &idx)) | 42 | relation, &idx)) |
@@ -55,8 +49,7 @@ static int dbx500_cpufreq_target(struct cpufreq_policy *policy, | |||
55 | return 0; | 49 | return 0; |
56 | 50 | ||
57 | /* pre-change notification */ | 51 | /* pre-change notification */ |
58 | for_each_cpu(freqs.cpu, policy->cpus) | 52 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
59 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | ||
60 | 53 | ||
61 | /* update armss clk frequency */ | 54 | /* update armss clk frequency */ |
62 | ret = clk_set_rate(armss_clk, freqs.new * 1000); | 55 | ret = clk_set_rate(armss_clk, freqs.new * 1000); |
@@ -68,8 +61,7 @@ static int dbx500_cpufreq_target(struct cpufreq_policy *policy, | |||
68 | } | 61 | } |
69 | 62 | ||
70 | /* post change notification */ | 63 | /* post change notification */ |
71 | for_each_cpu(freqs.cpu, policy->cpus) | 64 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
72 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | ||
73 | 65 | ||
74 | return 0; | 66 | return 0; |
75 | } | 67 | } |
@@ -79,15 +71,15 @@ static unsigned int dbx500_cpufreq_getspeed(unsigned int cpu) | |||
79 | int i = 0; | 71 | int i = 0; |
80 | unsigned long freq = clk_get_rate(armss_clk) / 1000; | 72 | unsigned long freq = clk_get_rate(armss_clk) / 1000; |
81 | 73 | ||
82 | while (freq_table[i].frequency != CPUFREQ_TABLE_END) { | 74 | /* The value is rounded to closest frequency in the defined table. */ |
83 | if (freq <= freq_table[i].frequency) | 75 | while (freq_table[i + 1].frequency != CPUFREQ_TABLE_END) { |
76 | if (freq < freq_table[i].frequency + | ||
77 | (freq_table[i + 1].frequency - freq_table[i].frequency) / 2) | ||
84 | return freq_table[i].frequency; | 78 | return freq_table[i].frequency; |
85 | i++; | 79 | i++; |
86 | } | 80 | } |
87 | 81 | ||
88 | /* We could not find a corresponding frequency. */ | 82 | return freq_table[i].frequency; |
89 | pr_err("dbx500-cpufreq: Failed to find cpufreq speed\n"); | ||
90 | return 0; | ||
91 | } | 83 | } |
92 | 84 | ||
93 | static int __cpuinit dbx500_cpufreq_init(struct cpufreq_policy *policy) | 85 | static int __cpuinit dbx500_cpufreq_init(struct cpufreq_policy *policy) |
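The dbx500 getspeed change above replaces the exact-match lookup with rounding to the closest table entry. A self-contained userspace version of that lookup, assuming an ascending table terminated by a sentinel value, could read:

#include <stdio.h>

#define TABLE_END ~0u

struct freq_entry { unsigned int frequency; };	/* kHz, ascending order */

static unsigned int round_to_table(const struct freq_entry *tbl,
				   unsigned long freq)
{
	int i = 0;

	while (tbl[i + 1].frequency != TABLE_END) {
		if (freq < tbl[i].frequency +
		    (tbl[i + 1].frequency - tbl[i].frequency) / 2)
			return tbl[i].frequency;
		i++;
	}
	return tbl[i].frequency;
}

int main(void)
{
	const struct freq_entry tbl[] = {
		{ 200000 }, { 400000 }, { 800000 }, { TABLE_END },
	};

	printf("%u\n", round_to_table(tbl, 350000));	/* -> 400000 */
	return 0;
}

With the sample table, a measured 350000 kHz rounds to the 400000 kHz entry instead of failing the lookup and returning 0.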
diff --git a/drivers/cpufreq/e_powersaver.c b/drivers/cpufreq/e_powersaver.c index 3fffbe6025cd..37380fb92621 100644 --- a/drivers/cpufreq/e_powersaver.c +++ b/drivers/cpufreq/e_powersaver.c | |||
@@ -104,7 +104,7 @@ static unsigned int eps_get(unsigned int cpu) | |||
104 | } | 104 | } |
105 | 105 | ||
106 | static int eps_set_state(struct eps_cpu_data *centaur, | 106 | static int eps_set_state(struct eps_cpu_data *centaur, |
107 | unsigned int cpu, | 107 | struct cpufreq_policy *policy, |
108 | u32 dest_state) | 108 | u32 dest_state) |
109 | { | 109 | { |
110 | struct cpufreq_freqs freqs; | 110 | struct cpufreq_freqs freqs; |
@@ -112,10 +112,9 @@ static int eps_set_state(struct eps_cpu_data *centaur, | |||
112 | int err = 0; | 112 | int err = 0; |
113 | int i; | 113 | int i; |
114 | 114 | ||
115 | freqs.old = eps_get(cpu); | 115 | freqs.old = eps_get(policy->cpu); |
116 | freqs.new = centaur->fsb * ((dest_state >> 8) & 0xff); | 116 | freqs.new = centaur->fsb * ((dest_state >> 8) & 0xff); |
117 | freqs.cpu = cpu; | 117 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
118 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | ||
119 | 118 | ||
120 | /* Wait while CPU is busy */ | 119 | /* Wait while CPU is busy */ |
121 | rdmsr(MSR_IA32_PERF_STATUS, lo, hi); | 120 | rdmsr(MSR_IA32_PERF_STATUS, lo, hi); |
@@ -162,7 +161,7 @@ postchange: | |||
162 | current_multiplier); | 161 | current_multiplier); |
163 | } | 162 | } |
164 | #endif | 163 | #endif |
165 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 164 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
166 | return err; | 165 | return err; |
167 | } | 166 | } |
168 | 167 | ||
@@ -190,7 +189,7 @@ static int eps_target(struct cpufreq_policy *policy, | |||
190 | 189 | ||
191 | /* Make frequency transition */ | 190 | /* Make frequency transition */ |
192 | dest_state = centaur->freq_table[newstate].index & 0xffff; | 191 | dest_state = centaur->freq_table[newstate].index & 0xffff; |
193 | ret = eps_set_state(centaur, cpu, dest_state); | 192 | ret = eps_set_state(centaur, policy, dest_state); |
194 | if (ret) | 193 | if (ret) |
195 | printk(KERN_ERR "eps: Timeout!\n"); | 194 | printk(KERN_ERR "eps: Timeout!\n"); |
196 | return ret; | 195 | return ret; |
diff --git a/drivers/cpufreq/elanfreq.c b/drivers/cpufreq/elanfreq.c index 960671fd3d7e..658d860344b0 100644 --- a/drivers/cpufreq/elanfreq.c +++ b/drivers/cpufreq/elanfreq.c | |||
@@ -117,15 +117,15 @@ static unsigned int elanfreq_get_cpu_frequency(unsigned int cpu) | |||
117 | * There is no return value. | 117 | * There is no return value. |
118 | */ | 118 | */ |
119 | 119 | ||
120 | static void elanfreq_set_cpu_state(unsigned int state) | 120 | static void elanfreq_set_cpu_state(struct cpufreq_policy *policy, |
121 | unsigned int state) | ||
121 | { | 122 | { |
122 | struct cpufreq_freqs freqs; | 123 | struct cpufreq_freqs freqs; |
123 | 124 | ||
124 | freqs.old = elanfreq_get_cpu_frequency(0); | 125 | freqs.old = elanfreq_get_cpu_frequency(0); |
125 | freqs.new = elan_multiplier[state].clock; | 126 | freqs.new = elan_multiplier[state].clock; |
126 | freqs.cpu = 0; /* elanfreq.c is UP only driver */ | ||
127 | 127 | ||
128 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 128 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
129 | 129 | ||
130 | printk(KERN_INFO "elanfreq: attempting to set frequency to %i kHz\n", | 130 | printk(KERN_INFO "elanfreq: attempting to set frequency to %i kHz\n", |
131 | elan_multiplier[state].clock); | 131 | elan_multiplier[state].clock); |
@@ -161,7 +161,7 @@ static void elanfreq_set_cpu_state(unsigned int state) | |||
161 | udelay(10000); | 161 | udelay(10000); |
162 | local_irq_enable(); | 162 | local_irq_enable(); |
163 | 163 | ||
164 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 164 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
165 | }; | 165 | }; |
166 | 166 | ||
167 | 167 | ||
@@ -188,7 +188,7 @@ static int elanfreq_target(struct cpufreq_policy *policy, | |||
188 | target_freq, relation, &newstate)) | 188 | target_freq, relation, &newstate)) |
189 | return -EINVAL; | 189 | return -EINVAL; |
190 | 190 | ||
191 | elanfreq_set_cpu_state(newstate); | 191 | elanfreq_set_cpu_state(policy, newstate); |
192 | 192 | ||
193 | return 0; | 193 | return 0; |
194 | } | 194 | } |
diff --git a/drivers/cpufreq/exynos-cpufreq.c b/drivers/cpufreq/exynos-cpufreq.c index 78057a357ddb..475b4f607f0d 100644 --- a/drivers/cpufreq/exynos-cpufreq.c +++ b/drivers/cpufreq/exynos-cpufreq.c | |||
@@ -70,7 +70,6 @@ static int exynos_cpufreq_scale(unsigned int target_freq) | |||
70 | 70 | ||
71 | freqs.old = policy->cur; | 71 | freqs.old = policy->cur; |
72 | freqs.new = target_freq; | 72 | freqs.new = target_freq; |
73 | freqs.cpu = policy->cpu; | ||
74 | 73 | ||
75 | if (freqs.new == freqs.old) | 74 | if (freqs.new == freqs.old) |
76 | goto out; | 75 | goto out; |
@@ -105,8 +104,7 @@ static int exynos_cpufreq_scale(unsigned int target_freq) | |||
105 | } | 104 | } |
106 | arm_volt = volt_table[index]; | 105 | arm_volt = volt_table[index]; |
107 | 106 | ||
108 | for_each_cpu(freqs.cpu, policy->cpus) | 107 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
109 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | ||
110 | 108 | ||
111 | /* When the new frequency is higher than current frequency */ | 109 | /* When the new frequency is higher than current frequency */ |
112 | if ((freqs.new > freqs.old) && !safe_arm_volt) { | 110 | if ((freqs.new > freqs.old) && !safe_arm_volt) { |
@@ -131,8 +129,7 @@ static int exynos_cpufreq_scale(unsigned int target_freq) | |||
131 | 129 | ||
132 | exynos_info->set_freq(old_index, index); | 130 | exynos_info->set_freq(old_index, index); |
133 | 131 | ||
134 | for_each_cpu(freqs.cpu, policy->cpus) | 132 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
135 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | ||
136 | 133 | ||
137 | /* When the new frequency is lower than current frequency */ | 134 | /* When the new frequency is lower than current frequency */ |
138 | if ((freqs.new < freqs.old) || | 135 | if ((freqs.new < freqs.old) || |
@@ -297,7 +294,7 @@ static int __init exynos_cpufreq_init(void) | |||
297 | else if (soc_is_exynos5250()) | 294 | else if (soc_is_exynos5250()) |
298 | ret = exynos5250_cpufreq_init(exynos_info); | 295 | ret = exynos5250_cpufreq_init(exynos_info); |
299 | else | 296 | else |
300 | pr_err("%s: CPU type not found\n", __func__); | 297 | return 0; |
301 | 298 | ||
302 | if (ret) | 299 | if (ret) |
303 | goto err_vdd_arm; | 300 | goto err_vdd_arm; |
diff --git a/drivers/cpufreq/exynos5440-cpufreq.c b/drivers/cpufreq/exynos5440-cpufreq.c new file mode 100644 index 000000000000..0c74018eda47 --- /dev/null +++ b/drivers/cpufreq/exynos5440-cpufreq.c | |||
@@ -0,0 +1,481 @@ | |||
1 | /* | ||
2 | * Copyright (c) 2013 Samsung Electronics Co., Ltd. | ||
3 | * http://www.samsung.com | ||
4 | * | ||
5 | * Amit Daniel Kachhap <amit.daniel@samsung.com> | ||
6 | * | ||
7 | * EXYNOS5440 - CPU frequency scaling support | ||
8 | * | ||
9 | * This program is free software; you can redistribute it and/or modify | ||
10 | * it under the terms of the GNU General Public License version 2 as | ||
11 | * published by the Free Software Foundation. | ||
12 | */ | ||
13 | |||
14 | #include <linux/clk.h> | ||
15 | #include <linux/cpu.h> | ||
16 | #include <linux/cpufreq.h> | ||
17 | #include <linux/err.h> | ||
18 | #include <linux/interrupt.h> | ||
19 | #include <linux/io.h> | ||
20 | #include <linux/module.h> | ||
21 | #include <linux/of_address.h> | ||
22 | #include <linux/of_irq.h> | ||
23 | #include <linux/opp.h> | ||
24 | #include <linux/platform_device.h> | ||
25 | #include <linux/slab.h> | ||
26 | |||
27 | /* Register definitions */ | ||
28 | #define XMU_DVFS_CTRL 0x0060 | ||
29 | #define XMU_PMU_P0_7 0x0064 | ||
30 | #define XMU_C0_3_PSTATE 0x0090 | ||
31 | #define XMU_P_LIMIT 0x00a0 | ||
32 | #define XMU_P_STATUS 0x00a4 | ||
33 | #define XMU_PMUEVTEN 0x00d0 | ||
34 | #define XMU_PMUIRQEN 0x00d4 | ||
35 | #define XMU_PMUIRQ 0x00d8 | ||
36 | |||
37 | /* PMU mask and shift definitions */ ||
38 | #define P_VALUE_MASK 0x7 | ||
39 | |||
40 | #define XMU_DVFS_CTRL_EN_SHIFT 0 | ||
41 | |||
42 | #define P0_7_CPUCLKDEV_SHIFT 21 | ||
43 | #define P0_7_CPUCLKDEV_MASK 0x7 | ||
44 | #define P0_7_ATBCLKDEV_SHIFT 18 | ||
45 | #define P0_7_ATBCLKDEV_MASK 0x7 | ||
46 | #define P0_7_CSCLKDEV_SHIFT 15 | ||
47 | #define P0_7_CSCLKDEV_MASK 0x7 | ||
48 | #define P0_7_CPUEMA_SHIFT 28 | ||
49 | #define P0_7_CPUEMA_MASK 0xf | ||
50 | #define P0_7_L2EMA_SHIFT 24 | ||
51 | #define P0_7_L2EMA_MASK 0xf | ||
52 | #define P0_7_VDD_SHIFT 8 | ||
53 | #define P0_7_VDD_MASK 0x7f | ||
54 | #define P0_7_FREQ_SHIFT 0 | ||
55 | #define P0_7_FREQ_MASK 0xff | ||
56 | |||
57 | #define C0_3_PSTATE_VALID_SHIFT 8 | ||
58 | #define C0_3_PSTATE_CURR_SHIFT 4 | ||
59 | #define C0_3_PSTATE_NEW_SHIFT 0 | ||
60 | |||
61 | #define PSTATE_CHANGED_EVTEN_SHIFT 0 | ||
62 | |||
63 | #define PSTATE_CHANGED_IRQEN_SHIFT 0 | ||
64 | |||
65 | #define PSTATE_CHANGED_SHIFT 0 | ||
66 | |||
67 | /* some constant values for clock divider calculation */ | ||
68 | #define CPU_DIV_FREQ_MAX 500 | ||
69 | #define CPU_DBG_FREQ_MAX 375 | ||
70 | #define CPU_ATB_FREQ_MAX 500 | ||
71 | |||
72 | #define PMIC_LOW_VOLT 0x30 | ||
73 | #define PMIC_HIGH_VOLT 0x28 | ||
74 | |||
75 | #define CPUEMA_HIGH 0x2 | ||
76 | #define CPUEMA_MID 0x4 | ||
77 | #define CPUEMA_LOW 0x7 | ||
78 | |||
79 | #define L2EMA_HIGH 0x1 | ||
80 | #define L2EMA_MID 0x3 | ||
81 | #define L2EMA_LOW 0x4 | ||
82 | |||
83 | #define DIV_TAB_MAX 2 | ||
84 | /* frequency unit is 20MHZ */ | ||
85 | #define FREQ_UNIT 20 | ||
86 | #define MAX_VOLTAGE 1550000 /* In microvolt */ | ||
87 | #define VOLTAGE_STEP 12500 /* In microvolt */ | ||
88 | |||
89 | #define CPUFREQ_NAME "exynos5440_dvfs" | ||
90 | #define DEF_TRANS_LATENCY 100000 | ||
91 | |||
92 | enum cpufreq_level_index { | ||
93 | L0, L1, L2, L3, L4, | ||
94 | L5, L6, L7, L8, L9, | ||
95 | }; | ||
96 | #define CPUFREQ_LEVEL_END (L7 + 1) | ||
97 | |||
98 | struct exynos_dvfs_data { | ||
99 | void __iomem *base; | ||
100 | struct resource *mem; | ||
101 | int irq; | ||
102 | struct clk *cpu_clk; | ||
103 | unsigned int cur_frequency; | ||
104 | unsigned int latency; | ||
105 | struct cpufreq_frequency_table *freq_table; | ||
106 | unsigned int freq_count; | ||
107 | struct device *dev; | ||
108 | bool dvfs_enabled; | ||
109 | struct work_struct irq_work; | ||
110 | }; | ||
111 | |||
112 | static struct exynos_dvfs_data *dvfs_info; | ||
113 | static DEFINE_MUTEX(cpufreq_lock); | ||
114 | static struct cpufreq_freqs freqs; | ||
115 | |||
116 | static int init_div_table(void) | ||
117 | { | ||
118 | struct cpufreq_frequency_table *freq_tbl = dvfs_info->freq_table; | ||
119 | unsigned int tmp, clk_div, ema_div, freq, volt_id; | ||
120 | int i = 0; | ||
121 | struct opp *opp; | ||
122 | |||
123 | rcu_read_lock(); | ||
124 | for (i = 0; freq_tbl[i].frequency != CPUFREQ_TABLE_END; i++) { | ||
125 | |||
126 | opp = opp_find_freq_exact(dvfs_info->dev, | ||
127 | freq_tbl[i].frequency * 1000, true); | ||
128 | if (IS_ERR(opp)) { | ||
129 | rcu_read_unlock(); | ||
130 | dev_err(dvfs_info->dev, | ||
131 | "failed to find valid OPP for %u KHZ\n", | ||
132 | freq_tbl[i].frequency); | ||
133 | return PTR_ERR(opp); | ||
134 | } | ||
135 | |||
136 | freq = freq_tbl[i].frequency / 1000; /* In MHZ */ | ||
137 | clk_div = ((freq / CPU_DIV_FREQ_MAX) & P0_7_CPUCLKDEV_MASK) | ||
138 | << P0_7_CPUCLKDEV_SHIFT; | ||
139 | clk_div |= ((freq / CPU_ATB_FREQ_MAX) & P0_7_ATBCLKDEV_MASK) | ||
140 | << P0_7_ATBCLKDEV_SHIFT; | ||
141 | clk_div |= ((freq / CPU_DBG_FREQ_MAX) & P0_7_CSCLKDEV_MASK) | ||
142 | << P0_7_CSCLKDEV_SHIFT; | ||
143 | |||
144 | /* Calculate EMA */ | ||
145 | volt_id = opp_get_voltage(opp); | ||
146 | volt_id = (MAX_VOLTAGE - volt_id) / VOLTAGE_STEP; | ||
147 | if (volt_id < PMIC_HIGH_VOLT) { | ||
148 | ema_div = (CPUEMA_HIGH << P0_7_CPUEMA_SHIFT) | | ||
149 | (L2EMA_HIGH << P0_7_L2EMA_SHIFT); | ||
150 | } else if (volt_id > PMIC_LOW_VOLT) { | ||
151 | ema_div = (CPUEMA_LOW << P0_7_CPUEMA_SHIFT) | | ||
152 | (L2EMA_LOW << P0_7_L2EMA_SHIFT); | ||
153 | } else { | ||
154 | ema_div = (CPUEMA_MID << P0_7_CPUEMA_SHIFT) | | ||
155 | (L2EMA_MID << P0_7_L2EMA_SHIFT); | ||
156 | } | ||
157 | |||
158 | tmp = (clk_div | ema_div | (volt_id << P0_7_VDD_SHIFT) | ||
159 | | ((freq / FREQ_UNIT) << P0_7_FREQ_SHIFT)); | ||
160 | |||
161 | __raw_writel(tmp, dvfs_info->base + XMU_PMU_P0_7 + 4 * i); | ||
162 | } | ||
163 | |||
164 | rcu_read_unlock(); | ||
165 | return 0; | ||
166 | } | ||
167 | |||
168 | static void exynos_enable_dvfs(void) | ||
169 | { | ||
170 | unsigned int tmp, i, cpu; | ||
171 | struct cpufreq_frequency_table *freq_table = dvfs_info->freq_table; | ||
172 | /* Disable DVFS */ | ||
173 | __raw_writel(0, dvfs_info->base + XMU_DVFS_CTRL); | ||
174 | |||
175 | /* Enable PSTATE Change Event */ | ||
176 | tmp = __raw_readl(dvfs_info->base + XMU_PMUEVTEN); | ||
177 | tmp |= (1 << PSTATE_CHANGED_EVTEN_SHIFT); | ||
178 | __raw_writel(tmp, dvfs_info->base + XMU_PMUEVTEN); | ||
179 | |||
180 | /* Enable PSTATE Change IRQ */ | ||
181 | tmp = __raw_readl(dvfs_info->base + XMU_PMUIRQEN); | ||
182 | tmp |= (1 << PSTATE_CHANGED_IRQEN_SHIFT); | ||
183 | __raw_writel(tmp, dvfs_info->base + XMU_PMUIRQEN); | ||
184 | |||
185 | /* Set initial performance index */ | ||
186 | for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++) | ||
187 | if (freq_table[i].frequency == dvfs_info->cur_frequency) | ||
188 | break; | ||
189 | |||
190 | if (freq_table[i].frequency == CPUFREQ_TABLE_END) { | ||
191 | dev_crit(dvfs_info->dev, "Boot up frequency not supported\n"); | ||
192 | /* Assign the highest frequency */ | ||
193 | i = 0; | ||
194 | dvfs_info->cur_frequency = freq_table[i].frequency; | ||
195 | } | ||
196 | |||
197 | dev_info(dvfs_info->dev, "Setting dvfs initial frequency = %uKHZ", | ||
198 | dvfs_info->cur_frequency); | ||
199 | |||
200 | for (cpu = 0; cpu < CONFIG_NR_CPUS; cpu++) { | ||
201 | tmp = __raw_readl(dvfs_info->base + XMU_C0_3_PSTATE + cpu * 4); | ||
202 | tmp &= ~(P_VALUE_MASK << C0_3_PSTATE_NEW_SHIFT); | ||
203 | tmp |= (i << C0_3_PSTATE_NEW_SHIFT); | ||
204 | __raw_writel(tmp, dvfs_info->base + XMU_C0_3_PSTATE + cpu * 4); | ||
205 | } | ||
206 | |||
207 | /* Enable DVFS */ | ||
208 | __raw_writel(1 << XMU_DVFS_CTRL_EN_SHIFT, | ||
209 | dvfs_info->base + XMU_DVFS_CTRL); | ||
210 | } | ||
211 | |||
212 | static int exynos_verify_speed(struct cpufreq_policy *policy) | ||
213 | { | ||
214 | return cpufreq_frequency_table_verify(policy, | ||
215 | dvfs_info->freq_table); | ||
216 | } | ||
217 | |||
218 | static unsigned int exynos_getspeed(unsigned int cpu) | ||
219 | { | ||
220 | return dvfs_info->cur_frequency; | ||
221 | } | ||
222 | |||
223 | static int exynos_target(struct cpufreq_policy *policy, | ||
224 | unsigned int target_freq, | ||
225 | unsigned int relation) | ||
226 | { | ||
227 | unsigned int index, tmp; | ||
228 | int ret = 0, i; | ||
229 | struct cpufreq_frequency_table *freq_table = dvfs_info->freq_table; | ||
230 | |||
231 | mutex_lock(&cpufreq_lock); | ||
232 | |||
233 | ret = cpufreq_frequency_table_target(policy, freq_table, | ||
234 | target_freq, relation, &index); | ||
235 | if (ret) | ||
236 | goto out; | ||
237 | |||
238 | freqs.old = dvfs_info->cur_frequency; | ||
239 | freqs.new = freq_table[index].frequency; | ||
240 | |||
241 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); | ||
242 | |||
243 | /* Set the target frequency in all C0_3_PSTATE register */ | ||
244 | for_each_cpu(i, policy->cpus) { | ||
245 | tmp = __raw_readl(dvfs_info->base + XMU_C0_3_PSTATE + i * 4); | ||
246 | tmp &= ~(P_VALUE_MASK << C0_3_PSTATE_NEW_SHIFT); | ||
247 | tmp |= (index << C0_3_PSTATE_NEW_SHIFT); | ||
248 | |||
249 | __raw_writel(tmp, dvfs_info->base + XMU_C0_3_PSTATE + i * 4); | ||
250 | } | ||
251 | out: | ||
252 | mutex_unlock(&cpufreq_lock); | ||
253 | return ret; | ||
254 | } | ||
255 | |||
256 | static void exynos_cpufreq_work(struct work_struct *work) | ||
257 | { | ||
258 | unsigned int cur_pstate, index; | ||
259 | struct cpufreq_policy *policy = cpufreq_cpu_get(0); /* boot CPU */ | ||
260 | struct cpufreq_frequency_table *freq_table = dvfs_info->freq_table; | ||
261 | |||
262 | /* Ensure we can access cpufreq structures */ | ||
263 | if (unlikely(dvfs_info->dvfs_enabled == false)) | ||
264 | goto skip_work; | ||
265 | |||
266 | mutex_lock(&cpufreq_lock); | ||
267 | freqs.old = dvfs_info->cur_frequency; | ||
268 | |||
269 | cur_pstate = __raw_readl(dvfs_info->base + XMU_P_STATUS); | ||
270 | if (cur_pstate >> C0_3_PSTATE_VALID_SHIFT & 0x1) | ||
271 | index = (cur_pstate >> C0_3_PSTATE_CURR_SHIFT) & P_VALUE_MASK; | ||
272 | else | ||
273 | index = (cur_pstate >> C0_3_PSTATE_NEW_SHIFT) & P_VALUE_MASK; | ||
274 | |||
275 | if (likely(index < dvfs_info->freq_count)) { | ||
276 | freqs.new = freq_table[index].frequency; | ||
277 | dvfs_info->cur_frequency = freqs.new; | ||
278 | } else { | ||
279 | dev_crit(dvfs_info->dev, "New frequency out of range\n"); | ||
280 | freqs.new = dvfs_info->cur_frequency; | ||
281 | } | ||
282 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); | ||
283 | |||
284 | cpufreq_cpu_put(policy); | ||
285 | mutex_unlock(&cpufreq_lock); | ||
286 | skip_work: | ||
287 | enable_irq(dvfs_info->irq); | ||
288 | } | ||
289 | |||
290 | static irqreturn_t exynos_cpufreq_irq(int irq, void *id) | ||
291 | { | ||
292 | unsigned int tmp; | ||
293 | |||
294 | tmp = __raw_readl(dvfs_info->base + XMU_PMUIRQ); | ||
295 | if (tmp >> PSTATE_CHANGED_SHIFT & 0x1) { | ||
296 | __raw_writel(tmp, dvfs_info->base + XMU_PMUIRQ); | ||
297 | disable_irq_nosync(irq); | ||
298 | schedule_work(&dvfs_info->irq_work); | ||
299 | } | ||
300 | return IRQ_HANDLED; | ||
301 | } | ||
302 | |||
303 | static void exynos_sort_descend_freq_table(void) | ||
304 | { | ||
305 | struct cpufreq_frequency_table *freq_tbl = dvfs_info->freq_table; | ||
306 | int i = 0, index; | ||
307 | unsigned int tmp_freq; | ||
308 | /* | ||
309 | * Exynos5440 clock controller state logic expects the cpufreq table to | ||
310 | * be in descending order. But the OPP library constructs the table in | ||
311 | * ascending order. So to make the table descending we just need to | ||
312 | * swap the i element with the N - i element. | ||
313 | */ | ||
314 | for (i = 0; i < dvfs_info->freq_count / 2; i++) { | ||
315 | index = dvfs_info->freq_count - i - 1; | ||
316 | tmp_freq = freq_tbl[i].frequency; | ||
317 | freq_tbl[i].frequency = freq_tbl[index].frequency; | ||
318 | freq_tbl[index].frequency = tmp_freq; | ||
319 | } | ||
320 | } | ||
321 | |||
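The swap loop in exynos_sort_descend_freq_table() above reverses the OPP-built ascending table in place by exchanging entry i with entry N - i - 1 for the first N / 2 entries. A standalone sketch of the same reversal, with all names local to the example:

#include <stdio.h>

static void reverse_table(unsigned int *freq, int count)
{
	int i;

	for (i = 0; i < count / 2; i++) {
		unsigned int tmp = freq[i];

		freq[i] = freq[count - i - 1];
		freq[count - i - 1] = tmp;
	}
}

int main(void)
{
	unsigned int freq[] = { 800000, 1000000, 1200000, 1500000 };
	int i;

	reverse_table(freq, 4);
	for (i = 0; i < 4; i++)
		printf("%u\n", freq[i]);	/* printed in descending order */
	return 0;
}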
322 | static int exynos_cpufreq_cpu_init(struct cpufreq_policy *policy) | ||
323 | { | ||
324 | int ret; | ||
325 | |||
326 | ret = cpufreq_frequency_table_cpuinfo(policy, dvfs_info->freq_table); | ||
327 | if (ret) { | ||
328 | dev_err(dvfs_info->dev, "Invalid frequency table: %d\n", ret); | ||
329 | return ret; | ||
330 | } | ||
331 | |||
332 | policy->cur = dvfs_info->cur_frequency; | ||
333 | policy->cpuinfo.transition_latency = dvfs_info->latency; | ||
334 | cpumask_setall(policy->cpus); | ||
335 | |||
336 | cpufreq_frequency_table_get_attr(dvfs_info->freq_table, policy->cpu); | ||
337 | |||
338 | return 0; | ||
339 | } | ||
340 | |||
341 | static struct cpufreq_driver exynos_driver = { | ||
342 | .flags = CPUFREQ_STICKY, | ||
343 | .verify = exynos_verify_speed, | ||
344 | .target = exynos_target, | ||
345 | .get = exynos_getspeed, | ||
346 | .init = exynos_cpufreq_cpu_init, | ||
347 | .name = CPUFREQ_NAME, | ||
348 | }; | ||
349 | |||
350 | static const struct of_device_id exynos_cpufreq_match[] = { | ||
351 | { | ||
352 | .compatible = "samsung,exynos5440-cpufreq", | ||
353 | }, | ||
354 | {}, | ||
355 | }; | ||
356 | MODULE_DEVICE_TABLE(of, exynos_cpufreq_match); | ||
357 | |||
358 | static int exynos_cpufreq_probe(struct platform_device *pdev) | ||
359 | { | ||
360 | int ret = -EINVAL; | ||
361 | struct device_node *np; | ||
362 | struct resource res; | ||
363 | |||
364 | np = pdev->dev.of_node; | ||
365 | if (!np) | ||
366 | return -ENODEV; | ||
367 | |||
368 | dvfs_info = devm_kzalloc(&pdev->dev, sizeof(*dvfs_info), GFP_KERNEL); | ||
369 | if (!dvfs_info) { | ||
370 | ret = -ENOMEM; | ||
371 | goto err_put_node; | ||
372 | } | ||
373 | |||
374 | dvfs_info->dev = &pdev->dev; | ||
375 | |||
376 | ret = of_address_to_resource(np, 0, &res); | ||
377 | if (ret) | ||
378 | goto err_put_node; | ||
379 | |||
380 | dvfs_info->base = devm_ioremap_resource(dvfs_info->dev, &res); | ||
381 | if (IS_ERR(dvfs_info->base)) { | ||
382 | ret = PTR_ERR(dvfs_info->base); | ||
383 | goto err_put_node; | ||
384 | } | ||
385 | |||
386 | dvfs_info->irq = irq_of_parse_and_map(np, 0); | ||
387 | if (!dvfs_info->irq) { | ||
388 | dev_err(dvfs_info->dev, "No cpufreq irq found\n"); | ||
389 | ret = -ENODEV; | ||
390 | goto err_put_node; | ||
391 | } | ||
392 | |||
393 | ret = of_init_opp_table(dvfs_info->dev); | ||
394 | if (ret) { | ||
395 | dev_err(dvfs_info->dev, "failed to init OPP table: %d\n", ret); | ||
396 | goto err_put_node; | ||
397 | } | ||
398 | |||
399 | ret = opp_init_cpufreq_table(dvfs_info->dev, &dvfs_info->freq_table); | ||
400 | if (ret) { | ||
401 | dev_err(dvfs_info->dev, | ||
402 | "failed to init cpufreq table: %d\n", ret); | ||
403 | goto err_put_node; | ||
404 | } | ||
405 | dvfs_info->freq_count = opp_get_opp_count(dvfs_info->dev); | ||
406 | exynos_sort_descend_freq_table(); | ||
407 | |||
408 | if (of_property_read_u32(np, "clock-latency", &dvfs_info->latency)) | ||
409 | dvfs_info->latency = DEF_TRANS_LATENCY; | ||
410 | |||
411 | dvfs_info->cpu_clk = devm_clk_get(dvfs_info->dev, "armclk"); | ||
412 | if (IS_ERR(dvfs_info->cpu_clk)) { | ||
413 | dev_err(dvfs_info->dev, "Failed to get cpu clock\n"); | ||
414 | ret = PTR_ERR(dvfs_info->cpu_clk); | ||
415 | goto err_free_table; | ||
416 | } | ||
417 | |||
418 | dvfs_info->cur_frequency = clk_get_rate(dvfs_info->cpu_clk); | ||
419 | if (!dvfs_info->cur_frequency) { | ||
420 | dev_err(dvfs_info->dev, "Failed to get clock rate\n"); | ||
421 | ret = -EINVAL; | ||
422 | goto err_free_table; | ||
423 | } | ||
424 | dvfs_info->cur_frequency /= 1000; | ||
425 | |||
426 | INIT_WORK(&dvfs_info->irq_work, exynos_cpufreq_work); | ||
427 | ret = devm_request_irq(dvfs_info->dev, dvfs_info->irq, | ||
428 | exynos_cpufreq_irq, IRQF_TRIGGER_NONE, | ||
429 | CPUFREQ_NAME, dvfs_info); | ||
430 | if (ret) { | ||
431 | dev_err(dvfs_info->dev, "Failed to register IRQ\n"); | ||
432 | goto err_free_table; | ||
433 | } | ||
434 | |||
435 | ret = init_div_table(); | ||
436 | if (ret) { | ||
437 | dev_err(dvfs_info->dev, "Failed to initialise div table\n"); | ||
438 | goto err_free_table; | ||
439 | } | ||
440 | |||
441 | exynos_enable_dvfs(); | ||
442 | ret = cpufreq_register_driver(&exynos_driver); | ||
443 | if (ret) { | ||
444 | dev_err(dvfs_info->dev, | ||
445 | "%s: failed to register cpufreq driver\n", __func__); | ||
446 | goto err_free_table; | ||
447 | } | ||
448 | |||
449 | of_node_put(np); | ||
450 | dvfs_info->dvfs_enabled = true; | ||
451 | return 0; | ||
452 | |||
453 | err_free_table: | ||
454 | opp_free_cpufreq_table(dvfs_info->dev, &dvfs_info->freq_table); | ||
455 | err_put_node: | ||
456 | of_node_put(np); | ||
457 | dev_err(dvfs_info->dev, "%s: failed initialization\n", __func__); | ||
458 | return ret; | ||
459 | } | ||
460 | |||
461 | static int exynos_cpufreq_remove(struct platform_device *pdev) | ||
462 | { | ||
463 | cpufreq_unregister_driver(&exynos_driver); | ||
464 | opp_free_cpufreq_table(dvfs_info->dev, &dvfs_info->freq_table); | ||
465 | return 0; | ||
466 | } | ||
467 | |||
468 | static struct platform_driver exynos_cpufreq_platdrv = { | ||
469 | .driver = { | ||
470 | .name = "exynos5440-cpufreq", | ||
471 | .owner = THIS_MODULE, | ||
472 | .of_match_table = exynos_cpufreq_match, | ||
473 | }, | ||
474 | .probe = exynos_cpufreq_probe, | ||
475 | .remove = exynos_cpufreq_remove, | ||
476 | }; | ||
477 | module_platform_driver(exynos_cpufreq_platdrv); | ||
478 | |||
479 | MODULE_AUTHOR("Amit Daniel Kachhap <amit.daniel@samsung.com>"); | ||
480 | MODULE_DESCRIPTION("Exynos5440 cpufreq driver"); | ||
481 | MODULE_LICENSE("GPL"); | ||
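The exynos5440 driver above has to hand its clock controller a frequency table in descending order, while the OPP library builds one in ascending order, so exynos_sort_descend_freq_table() reverses the table in place by swapping entry i with entry N - 1 - i. A minimal standalone sketch of that reversal (the function and parameter names here are illustrative, not the driver's own):

    /* Reverse an ascending cpufreq table in place: swap entry i with entry n-1-i. */
    static void reverse_freq_table(struct cpufreq_frequency_table *tbl, int n)
    {
            int i;

            for (i = 0; i < n / 2; i++) {
                    unsigned int tmp = tbl[i].frequency;

                    tbl[i].frequency = tbl[n - i - 1].frequency;
                    tbl[n - i - 1].frequency = tmp;
            }
    }

Only the frequency fields are swapped, which matches what the hunk above does; the other table fields keep their positions.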
diff --git a/drivers/cpufreq/gx-suspmod.c b/drivers/cpufreq/gx-suspmod.c index 456bee058fe6..3dfc99b9ca86 100644 --- a/drivers/cpufreq/gx-suspmod.c +++ b/drivers/cpufreq/gx-suspmod.c | |||
@@ -251,14 +251,13 @@ static unsigned int gx_validate_speed(unsigned int khz, u8 *on_duration, | |||
251 | * set cpu speed in khz. | 251 | * set cpu speed in khz. |
252 | **/ | 252 | **/ |
253 | 253 | ||
254 | static void gx_set_cpuspeed(unsigned int khz) | 254 | static void gx_set_cpuspeed(struct cpufreq_policy *policy, unsigned int khz) |
255 | { | 255 | { |
256 | u8 suscfg, pmer1; | 256 | u8 suscfg, pmer1; |
257 | unsigned int new_khz; | 257 | unsigned int new_khz; |
258 | unsigned long flags; | 258 | unsigned long flags; |
259 | struct cpufreq_freqs freqs; | 259 | struct cpufreq_freqs freqs; |
260 | 260 | ||
261 | freqs.cpu = 0; | ||
262 | freqs.old = gx_get_cpuspeed(0); | 261 | freqs.old = gx_get_cpuspeed(0); |
263 | 262 | ||
264 | new_khz = gx_validate_speed(khz, &gx_params->on_duration, | 263 | new_khz = gx_validate_speed(khz, &gx_params->on_duration, |
@@ -266,11 +265,9 @@ static void gx_set_cpuspeed(unsigned int khz) | |||
266 | 265 | ||
267 | freqs.new = new_khz; | 266 | freqs.new = new_khz; |
268 | 267 | ||
269 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 268 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
270 | local_irq_save(flags); | 269 | local_irq_save(flags); |
271 | 270 | ||
272 | |||
273 | |||
274 | if (new_khz != stock_freq) { | 271 | if (new_khz != stock_freq) { |
275 | /* if new khz == 100% of CPU speed, it is special case */ | 272 | /* if new khz == 100% of CPU speed, it is special case */ |
276 | switch (gx_params->cs55x0->device) { | 273 | switch (gx_params->cs55x0->device) { |
@@ -317,7 +314,7 @@ static void gx_set_cpuspeed(unsigned int khz) | |||
317 | 314 | ||
318 | gx_params->pci_suscfg = suscfg; | 315 | gx_params->pci_suscfg = suscfg; |
319 | 316 | ||
320 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 317 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
321 | 318 | ||
322 | pr_debug("suspend modulation w/ duration of ON:%d us, OFF:%d us\n", | 319 | pr_debug("suspend modulation w/ duration of ON:%d us, OFF:%d us\n", |
323 | gx_params->on_duration * 32, gx_params->off_duration * 32); | 320 | gx_params->on_duration * 32, gx_params->off_duration * 32); |
@@ -397,7 +394,7 @@ static int cpufreq_gx_target(struct cpufreq_policy *policy, | |||
397 | tmp_freq = gx_validate_speed(tmp_freq, &tmp1, &tmp2); | 394 | tmp_freq = gx_validate_speed(tmp_freq, &tmp1, &tmp2); |
398 | } | 395 | } |
399 | 396 | ||
400 | gx_set_cpuspeed(tmp_freq); | 397 | gx_set_cpuspeed(policy, tmp_freq); |
401 | 398 | ||
402 | return 0; | 399 | return 0; |
403 | } | 400 | } |
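A change repeated throughout this merge, and easiest to see in the gx-suspmod hunks above, is that cpufreq_notify_transition() now takes the policy as its first argument, so drivers no longer fill in freqs.cpu (or loop over CPUs) themselves. A rough before/after sketch, assuming a driver-local struct cpufreq_freqs freqs:

    /* old style: tag the CPU on the freqs struct before notifying */
    freqs.cpu = policy->cpu;
    cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
    /* ...reprogram the clock... */
    cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);

    /* new style: the core derives the affected CPUs from the policy */
    cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
    /* ...reprogram the clock... */
    cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);

The same substitution appears in nearly every driver touched below; helpers that used to take a bare CPU number or table index now take the policy so they can forward it to the notifier.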
diff --git a/arch/ia64/kernel/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/ia64-acpi-cpufreq.c index f09b174244d5..c0075dbaa633 100644 --- a/arch/ia64/kernel/cpufreq/acpi-cpufreq.c +++ b/drivers/cpufreq/ia64-acpi-cpufreq.c | |||
@@ -1,5 +1,4 @@ | |||
1 | /* | 1 | /* |
2 | * arch/ia64/kernel/cpufreq/acpi-cpufreq.c | ||
3 | * This file provides the ACPI based P-state support. This | 2 | * This file provides the ACPI based P-state support. This |
4 | * module works with generic cpufreq infrastructure. Most of | 3 | * module works with generic cpufreq infrastructure. Most of |
5 | * the code is based on i386 version | 4 | * the code is based on i386 version |
@@ -137,7 +136,7 @@ migrate_end: | |||
137 | static int | 136 | static int |
138 | processor_set_freq ( | 137 | processor_set_freq ( |
139 | struct cpufreq_acpi_io *data, | 138 | struct cpufreq_acpi_io *data, |
140 | unsigned int cpu, | 139 | struct cpufreq_policy *policy, |
141 | int state) | 140 | int state) |
142 | { | 141 | { |
143 | int ret = 0; | 142 | int ret = 0; |
@@ -149,8 +148,8 @@ processor_set_freq ( | |||
149 | pr_debug("processor_set_freq\n"); | 148 | pr_debug("processor_set_freq\n"); |
150 | 149 | ||
151 | saved_mask = current->cpus_allowed; | 150 | saved_mask = current->cpus_allowed; |
152 | set_cpus_allowed_ptr(current, cpumask_of(cpu)); | 151 | set_cpus_allowed_ptr(current, cpumask_of(policy->cpu)); |
153 | if (smp_processor_id() != cpu) { | 152 | if (smp_processor_id() != policy->cpu) { |
154 | retval = -EAGAIN; | 153 | retval = -EAGAIN; |
155 | goto migrate_end; | 154 | goto migrate_end; |
156 | } | 155 | } |
@@ -170,12 +169,11 @@ processor_set_freq ( | |||
170 | data->acpi_data.state, state); | 169 | data->acpi_data.state, state); |
171 | 170 | ||
172 | /* cpufreq frequency struct */ | 171 | /* cpufreq frequency struct */ |
173 | cpufreq_freqs.cpu = cpu; | ||
174 | cpufreq_freqs.old = data->freq_table[data->acpi_data.state].frequency; | 172 | cpufreq_freqs.old = data->freq_table[data->acpi_data.state].frequency; |
175 | cpufreq_freqs.new = data->freq_table[state].frequency; | 173 | cpufreq_freqs.new = data->freq_table[state].frequency; |
176 | 174 | ||
177 | /* notify cpufreq */ | 175 | /* notify cpufreq */ |
178 | cpufreq_notify_transition(&cpufreq_freqs, CPUFREQ_PRECHANGE); | 176 | cpufreq_notify_transition(policy, &cpufreq_freqs, CPUFREQ_PRECHANGE); |
179 | 177 | ||
180 | /* | 178 | /* |
181 | * First we write the target state's 'control' value to the | 179 | * First we write the target state's 'control' value to the |
@@ -189,17 +187,20 @@ processor_set_freq ( | |||
189 | ret = processor_set_pstate(value); | 187 | ret = processor_set_pstate(value); |
190 | if (ret) { | 188 | if (ret) { |
191 | unsigned int tmp = cpufreq_freqs.new; | 189 | unsigned int tmp = cpufreq_freqs.new; |
192 | cpufreq_notify_transition(&cpufreq_freqs, CPUFREQ_POSTCHANGE); | 190 | cpufreq_notify_transition(policy, &cpufreq_freqs, |
191 | CPUFREQ_POSTCHANGE); | ||
193 | cpufreq_freqs.new = cpufreq_freqs.old; | 192 | cpufreq_freqs.new = cpufreq_freqs.old; |
194 | cpufreq_freqs.old = tmp; | 193 | cpufreq_freqs.old = tmp; |
195 | cpufreq_notify_transition(&cpufreq_freqs, CPUFREQ_PRECHANGE); | 194 | cpufreq_notify_transition(policy, &cpufreq_freqs, |
196 | cpufreq_notify_transition(&cpufreq_freqs, CPUFREQ_POSTCHANGE); | 195 | CPUFREQ_PRECHANGE); |
196 | cpufreq_notify_transition(policy, &cpufreq_freqs, | ||
197 | CPUFREQ_POSTCHANGE); | ||
197 | printk(KERN_WARNING "Transition failed with error %d\n", ret); | 198 | printk(KERN_WARNING "Transition failed with error %d\n", ret); |
198 | retval = -ENODEV; | 199 | retval = -ENODEV; |
199 | goto migrate_end; | 200 | goto migrate_end; |
200 | } | 201 | } |
201 | 202 | ||
202 | cpufreq_notify_transition(&cpufreq_freqs, CPUFREQ_POSTCHANGE); | 203 | cpufreq_notify_transition(policy, &cpufreq_freqs, CPUFREQ_POSTCHANGE); |
203 | 204 | ||
204 | data->acpi_data.state = state; | 205 | data->acpi_data.state = state; |
205 | 206 | ||
@@ -240,7 +241,7 @@ acpi_cpufreq_target ( | |||
240 | if (result) | 241 | if (result) |
241 | return (result); | 242 | return (result); |
242 | 243 | ||
243 | result = processor_set_freq(data, policy->cpu, next_state); | 244 | result = processor_set_freq(data, policy, next_state); |
244 | 245 | ||
245 | return (result); | 246 | return (result); |
246 | } | 247 | } |
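The ia64 ACPI driver keeps its pattern of migrating the calling task onto the target CPU before programming P-states, but the target now comes from policy->cpu rather than a separate cpu argument. A condensed sketch of that migrate-then-verify step (variable and label names follow the hunk; the surrounding function is elided):

    saved_mask = current->cpus_allowed;
    set_cpus_allowed_ptr(current, cpumask_of(policy->cpu));
    if (smp_processor_id() != policy->cpu) {
            retval = -EAGAIN;       /* could not land on the target CPU */
            goto migrate_end;
    }
    /* ...write the P-state control value for this CPU... */
    migrate_end:
    set_cpus_allowed_ptr(current, &saved_mask);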
diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c index 54e336de373b..b78bc35973ba 100644 --- a/drivers/cpufreq/imx6q-cpufreq.c +++ b/drivers/cpufreq/imx6q-cpufreq.c | |||
@@ -50,7 +50,7 @@ static int imx6q_set_target(struct cpufreq_policy *policy, | |||
50 | struct cpufreq_freqs freqs; | 50 | struct cpufreq_freqs freqs; |
51 | struct opp *opp; | 51 | struct opp *opp; |
52 | unsigned long freq_hz, volt, volt_old; | 52 | unsigned long freq_hz, volt, volt_old; |
53 | unsigned int index, cpu; | 53 | unsigned int index; |
54 | int ret; | 54 | int ret; |
55 | 55 | ||
56 | ret = cpufreq_frequency_table_target(policy, freq_table, target_freq, | 56 | ret = cpufreq_frequency_table_target(policy, freq_table, target_freq, |
@@ -68,10 +68,7 @@ static int imx6q_set_target(struct cpufreq_policy *policy, | |||
68 | if (freqs.old == freqs.new) | 68 | if (freqs.old == freqs.new) |
69 | return 0; | 69 | return 0; |
70 | 70 | ||
71 | for_each_online_cpu(cpu) { | 71 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
72 | freqs.cpu = cpu; | ||
73 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | ||
74 | } | ||
75 | 72 | ||
76 | rcu_read_lock(); | 73 | rcu_read_lock(); |
77 | opp = opp_find_freq_ceil(cpu_dev, &freq_hz); | 74 | opp = opp_find_freq_ceil(cpu_dev, &freq_hz); |
@@ -166,10 +163,7 @@ static int imx6q_set_target(struct cpufreq_policy *policy, | |||
166 | } | 163 | } |
167 | } | 164 | } |
168 | 165 | ||
169 | for_each_online_cpu(cpu) { | 166 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
170 | freqs.cpu = cpu; | ||
171 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | ||
172 | } | ||
173 | 167 | ||
174 | return 0; | 168 | return 0; |
175 | } | 169 | } |
diff --git a/arch/arm/mach-integrator/cpu.c b/drivers/cpufreq/integrator-cpufreq.c index 590c192cdf4d..f7c99df0880b 100644 --- a/arch/arm/mach-integrator/cpu.c +++ b/drivers/cpufreq/integrator-cpufreq.c | |||
@@ -1,6 +1,4 @@ | |||
1 | /* | 1 | /* |
2 | * linux/arch/arm/mach-integrator/cpu.c | ||
3 | * | ||
4 | * Copyright (C) 2001-2002 Deep Blue Solutions Ltd. | 2 | * Copyright (C) 2001-2002 Deep Blue Solutions Ltd. |
5 | * | 3 | * |
6 | * This program is free software; you can redistribute it and/or modify | 4 | * This program is free software; you can redistribute it and/or modify |
@@ -123,14 +121,12 @@ static int integrator_set_target(struct cpufreq_policy *policy, | |||
123 | vco = icst_hz_to_vco(&cclk_params, target_freq * 1000); | 121 | vco = icst_hz_to_vco(&cclk_params, target_freq * 1000); |
124 | freqs.new = icst_hz(&cclk_params, vco) / 1000; | 122 | freqs.new = icst_hz(&cclk_params, vco) / 1000; |
125 | 123 | ||
126 | freqs.cpu = policy->cpu; | ||
127 | |||
128 | if (freqs.old == freqs.new) { | 124 | if (freqs.old == freqs.new) { |
129 | set_cpus_allowed(current, cpus_allowed); | 125 | set_cpus_allowed(current, cpus_allowed); |
130 | return 0; | 126 | return 0; |
131 | } | 127 | } |
132 | 128 | ||
133 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 129 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
134 | 130 | ||
135 | cm_osc = __raw_readl(CM_OSC); | 131 | cm_osc = __raw_readl(CM_OSC); |
136 | 132 | ||
@@ -151,7 +147,7 @@ static int integrator_set_target(struct cpufreq_policy *policy, | |||
151 | */ | 147 | */ |
152 | set_cpus_allowed(current, cpus_allowed); | 148 | set_cpus_allowed(current, cpus_allowed); |
153 | 149 | ||
154 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 150 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
155 | 151 | ||
156 | return 0; | 152 | return 0; |
157 | } | 153 | } |
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c index 6133ef5cf671..cc3a8e6c92be 100644 --- a/drivers/cpufreq/intel_pstate.c +++ b/drivers/cpufreq/intel_pstate.c | |||
@@ -1,5 +1,5 @@ | |||
1 | /* | 1 | /* |
2 | * cpufreq_snb.c: Native P state management for Intel processors | 2 | * intel_pstate.c: Native P state management for Intel processors |
3 | * | 3 | * |
4 | * (C) Copyright 2012 Intel Corporation | 4 | * (C) Copyright 2012 Intel Corporation |
5 | * Author: Dirk Brandewie <dirk.j.brandewie@intel.com> | 5 | * Author: Dirk Brandewie <dirk.j.brandewie@intel.com> |
@@ -657,30 +657,27 @@ static unsigned int intel_pstate_get(unsigned int cpu_num) | |||
657 | static int intel_pstate_set_policy(struct cpufreq_policy *policy) | 657 | static int intel_pstate_set_policy(struct cpufreq_policy *policy) |
658 | { | 658 | { |
659 | struct cpudata *cpu; | 659 | struct cpudata *cpu; |
660 | int min, max; | ||
661 | 660 | ||
662 | cpu = all_cpu_data[policy->cpu]; | 661 | cpu = all_cpu_data[policy->cpu]; |
663 | 662 | ||
664 | if (!policy->cpuinfo.max_freq) | 663 | if (!policy->cpuinfo.max_freq) |
665 | return -ENODEV; | 664 | return -ENODEV; |
666 | 665 | ||
667 | intel_pstate_get_min_max(cpu, &min, &max); | ||
668 | |||
669 | limits.min_perf_pct = (policy->min * 100) / policy->cpuinfo.max_freq; | ||
670 | limits.min_perf_pct = clamp_t(int, limits.min_perf_pct, 0 , 100); | ||
671 | limits.min_perf = div_fp(int_tofp(limits.min_perf_pct), int_tofp(100)); | ||
672 | |||
673 | limits.max_perf_pct = policy->max * 100 / policy->cpuinfo.max_freq; | ||
674 | limits.max_perf_pct = clamp_t(int, limits.max_perf_pct, 0 , 100); | ||
675 | limits.max_perf = div_fp(int_tofp(limits.max_perf_pct), int_tofp(100)); | ||
676 | |||
677 | if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) { | 666 | if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) { |
678 | limits.min_perf_pct = 100; | 667 | limits.min_perf_pct = 100; |
679 | limits.min_perf = int_tofp(1); | 668 | limits.min_perf = int_tofp(1); |
680 | limits.max_perf_pct = 100; | 669 | limits.max_perf_pct = 100; |
681 | limits.max_perf = int_tofp(1); | 670 | limits.max_perf = int_tofp(1); |
682 | limits.no_turbo = 0; | 671 | limits.no_turbo = 0; |
672 | return 0; | ||
683 | } | 673 | } |
674 | limits.min_perf_pct = (policy->min * 100) / policy->cpuinfo.max_freq; | ||
675 | limits.min_perf_pct = clamp_t(int, limits.min_perf_pct, 0 , 100); | ||
676 | limits.min_perf = div_fp(int_tofp(limits.min_perf_pct), int_tofp(100)); | ||
677 | |||
678 | limits.max_perf_pct = policy->max * 100 / policy->cpuinfo.max_freq; | ||
679 | limits.max_perf_pct = clamp_t(int, limits.max_perf_pct, 0 , 100); | ||
680 | limits.max_perf = div_fp(int_tofp(limits.max_perf_pct), int_tofp(100)); | ||
684 | 681 | ||
685 | return 0; | 682 | return 0; |
686 | } | 683 | } |
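The intel_pstate rework above returns early for CPUFREQ_POLICY_PERFORMANCE, so the percentage limits are only derived for other policies. A simplified sketch of the resulting flow (the fixed-point min_perf/max_perf fields from the hunk are omitted here):

    if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) {
            limits.min_perf_pct = 100;
            limits.max_perf_pct = 100;
            limits.no_turbo = 0;
            return 0;               /* nothing further to compute */
    }

    /* Otherwise derive clamped percentages from the requested min/max. */
    limits.min_perf_pct = clamp_t(int, policy->min * 100 / policy->cpuinfo.max_freq, 0, 100);
    limits.max_perf_pct = clamp_t(int, policy->max * 100 / policy->cpuinfo.max_freq, 0, 100);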
diff --git a/drivers/cpufreq/kirkwood-cpufreq.c b/drivers/cpufreq/kirkwood-cpufreq.c index 0e83e3c24f5b..d36ea8dc96eb 100644 --- a/drivers/cpufreq/kirkwood-cpufreq.c +++ b/drivers/cpufreq/kirkwood-cpufreq.c | |||
@@ -55,7 +55,8 @@ static unsigned int kirkwood_cpufreq_get_cpu_frequency(unsigned int cpu) | |||
55 | return kirkwood_freq_table[0].frequency; | 55 | return kirkwood_freq_table[0].frequency; |
56 | } | 56 | } |
57 | 57 | ||
58 | static void kirkwood_cpufreq_set_cpu_state(unsigned int index) | 58 | static void kirkwood_cpufreq_set_cpu_state(struct cpufreq_policy *policy, |
59 | unsigned int index) | ||
59 | { | 60 | { |
60 | struct cpufreq_freqs freqs; | 61 | struct cpufreq_freqs freqs; |
61 | unsigned int state = kirkwood_freq_table[index].index; | 62 | unsigned int state = kirkwood_freq_table[index].index; |
@@ -63,9 +64,8 @@ static void kirkwood_cpufreq_set_cpu_state(unsigned int index) | |||
63 | 64 | ||
64 | freqs.old = kirkwood_cpufreq_get_cpu_frequency(0); | 65 | freqs.old = kirkwood_cpufreq_get_cpu_frequency(0); |
65 | freqs.new = kirkwood_freq_table[index].frequency; | 66 | freqs.new = kirkwood_freq_table[index].frequency; |
66 | freqs.cpu = 0; /* Kirkwood is UP */ | ||
67 | 67 | ||
68 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 68 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
69 | 69 | ||
70 | dev_dbg(priv.dev, "Attempting to set frequency to %i KHz\n", | 70 | dev_dbg(priv.dev, "Attempting to set frequency to %i KHz\n", |
71 | kirkwood_freq_table[index].frequency); | 71 | kirkwood_freq_table[index].frequency); |
@@ -99,7 +99,7 @@ static void kirkwood_cpufreq_set_cpu_state(unsigned int index) | |||
99 | 99 | ||
100 | local_irq_enable(); | 100 | local_irq_enable(); |
101 | } | 101 | } |
102 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 102 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
103 | }; | 103 | }; |
104 | 104 | ||
105 | static int kirkwood_cpufreq_verify(struct cpufreq_policy *policy) | 105 | static int kirkwood_cpufreq_verify(struct cpufreq_policy *policy) |
@@ -117,7 +117,7 @@ static int kirkwood_cpufreq_target(struct cpufreq_policy *policy, | |||
117 | target_freq, relation, &index)) | 117 | target_freq, relation, &index)) |
118 | return -EINVAL; | 118 | return -EINVAL; |
119 | 119 | ||
120 | kirkwood_cpufreq_set_cpu_state(index); | 120 | kirkwood_cpufreq_set_cpu_state(policy, index); |
121 | 121 | ||
122 | return 0; | 122 | return 0; |
123 | } | 123 | } |
@@ -175,11 +175,9 @@ static int kirkwood_cpufreq_probe(struct platform_device *pdev) | |||
175 | dev_err(&pdev->dev, "Cannot get memory resource\n"); | 175 | dev_err(&pdev->dev, "Cannot get memory resource\n"); |
176 | return -ENODEV; | 176 | return -ENODEV; |
177 | } | 177 | } |
178 | priv.base = devm_request_and_ioremap(&pdev->dev, res); | 178 | priv.base = devm_ioremap_resource(&pdev->dev, res); |
179 | if (!priv.base) { | 179 | if (IS_ERR(priv.base)) |
180 | dev_err(&pdev->dev, "Cannot ioremap\n"); | 180 | return PTR_ERR(priv.base); |
181 | return -EADDRNOTAVAIL; | ||
182 | } | ||
183 | 181 | ||
184 | np = of_find_node_by_path("/cpus/cpu@0"); | 182 | np = of_find_node_by_path("/cpus/cpu@0"); |
185 | if (!np) | 183 | if (!np) |
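kirkwood (like the exynos5440 probe earlier in this diff) switches to devm_ioremap_resource(), which returns an ERR_PTR-encoded error and logs the failure itself, so the caller's error path collapses to an IS_ERR()/PTR_ERR() check. A sketch of the idiom, with the rest of the probe elided:

    res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
    priv.base = devm_ioremap_resource(&pdev->dev, res);
    if (IS_ERR(priv.base))
            return PTR_ERR(priv.base);      /* failure already reported by the helper */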
diff --git a/drivers/cpufreq/longhaul.c b/drivers/cpufreq/longhaul.c index 1180d536d1eb..b448638e34de 100644 --- a/drivers/cpufreq/longhaul.c +++ b/drivers/cpufreq/longhaul.c | |||
@@ -242,7 +242,8 @@ static void do_powersaver(int cx_address, unsigned int mults_index, | |||
242 | * Sets a new clock ratio. | 242 | * Sets a new clock ratio. |
243 | */ | 243 | */ |
244 | 244 | ||
245 | static void longhaul_setstate(unsigned int table_index) | 245 | static void longhaul_setstate(struct cpufreq_policy *policy, |
246 | unsigned int table_index) | ||
246 | { | 247 | { |
247 | unsigned int mults_index; | 248 | unsigned int mults_index; |
248 | int speed, mult; | 249 | int speed, mult; |
@@ -267,9 +268,8 @@ static void longhaul_setstate(unsigned int table_index) | |||
267 | 268 | ||
268 | freqs.old = calc_speed(longhaul_get_cpu_mult()); | 269 | freqs.old = calc_speed(longhaul_get_cpu_mult()); |
269 | freqs.new = speed; | 270 | freqs.new = speed; |
270 | freqs.cpu = 0; /* longhaul.c is UP only driver */ | ||
271 | 271 | ||
272 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 272 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
273 | 273 | ||
274 | pr_debug("Setting to FSB:%dMHz Mult:%d.%dx (%s)\n", | 274 | pr_debug("Setting to FSB:%dMHz Mult:%d.%dx (%s)\n", |
275 | fsb, mult/10, mult%10, print_speed(speed/1000)); | 275 | fsb, mult/10, mult%10, print_speed(speed/1000)); |
@@ -386,7 +386,7 @@ retry_loop: | |||
386 | } | 386 | } |
387 | } | 387 | } |
388 | /* Report true CPU frequency */ | 388 | /* Report true CPU frequency */ |
389 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 389 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
390 | 390 | ||
391 | if (!bm_timeout) | 391 | if (!bm_timeout) |
392 | printk(KERN_INFO PFX "Warning: Timeout while waiting for " | 392 | printk(KERN_INFO PFX "Warning: Timeout while waiting for " |
@@ -648,7 +648,7 @@ static int longhaul_target(struct cpufreq_policy *policy, | |||
648 | return 0; | 648 | return 0; |
649 | 649 | ||
650 | if (!can_scale_voltage) | 650 | if (!can_scale_voltage) |
651 | longhaul_setstate(table_index); | 651 | longhaul_setstate(policy, table_index); |
652 | else { | 652 | else { |
653 | /* On test system voltage transitions exceeding single | 653 | /* On test system voltage transitions exceeding single |
654 | * step up or down were turning motherboard off. Both | 654 | * step up or down were turning motherboard off. Both |
@@ -663,7 +663,7 @@ static int longhaul_target(struct cpufreq_policy *policy, | |||
663 | while (i != table_index) { | 663 | while (i != table_index) { |
664 | vid = (longhaul_table[i].index >> 8) & 0x1f; | 664 | vid = (longhaul_table[i].index >> 8) & 0x1f; |
665 | if (vid != current_vid) { | 665 | if (vid != current_vid) { |
666 | longhaul_setstate(i); | 666 | longhaul_setstate(policy, i); |
667 | current_vid = vid; | 667 | current_vid = vid; |
668 | msleep(200); | 668 | msleep(200); |
669 | } | 669 | } |
@@ -672,7 +672,7 @@ static int longhaul_target(struct cpufreq_policy *policy, | |||
672 | else | 672 | else |
673 | i--; | 673 | i--; |
674 | } | 674 | } |
675 | longhaul_setstate(table_index); | 675 | longhaul_setstate(policy, table_index); |
676 | } | 676 | } |
677 | longhaul_index = table_index; | 677 | longhaul_index = table_index; |
678 | return 0; | 678 | return 0; |
@@ -998,15 +998,17 @@ static int __init longhaul_init(void) | |||
998 | 998 | ||
999 | static void __exit longhaul_exit(void) | 999 | static void __exit longhaul_exit(void) |
1000 | { | 1000 | { |
1001 | struct cpufreq_policy *policy = cpufreq_cpu_get(0); | ||
1001 | int i; | 1002 | int i; |
1002 | 1003 | ||
1003 | for (i = 0; i < numscales; i++) { | 1004 | for (i = 0; i < numscales; i++) { |
1004 | if (mults[i] == maxmult) { | 1005 | if (mults[i] == maxmult) { |
1005 | longhaul_setstate(i); | 1006 | longhaul_setstate(policy, i); |
1006 | break; | 1007 | break; |
1007 | } | 1008 | } |
1008 | } | 1009 | } |
1009 | 1010 | ||
1011 | cpufreq_cpu_put(policy); | ||
1010 | cpufreq_unregister_driver(&longhaul_driver); | 1012 | cpufreq_unregister_driver(&longhaul_driver); |
1011 | kfree(longhaul_table); | 1013 | kfree(longhaul_table); |
1012 | } | 1014 | } |
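Since longhaul_setstate() now needs a policy pointer, the module exit path above looks up CPU 0's policy with cpufreq_cpu_get(). The hunk drops the reference right after taking it; a slightly more defensive version of the same lookup might read as follows (best_index is a placeholder for the caller's chosen table entry):

    struct cpufreq_policy *policy = cpufreq_cpu_get(0);

    if (policy) {
            longhaul_setstate(policy, best_index);
            cpufreq_cpu_put(policy);        /* drop the reference taken above */
    }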
diff --git a/arch/mips/kernel/cpufreq/loongson2_cpufreq.c b/drivers/cpufreq/loongson2_cpufreq.c index 3237c5235f9c..84889573b566 100644 --- a/arch/mips/kernel/cpufreq/loongson2_cpufreq.c +++ b/drivers/cpufreq/loongson2_cpufreq.c | |||
@@ -61,9 +61,6 @@ static int loongson2_cpufreq_target(struct cpufreq_policy *policy, | |||
61 | struct cpufreq_freqs freqs; | 61 | struct cpufreq_freqs freqs; |
62 | unsigned int freq; | 62 | unsigned int freq; |
63 | 63 | ||
64 | if (!cpu_online(cpu)) | ||
65 | return -ENODEV; | ||
66 | |||
67 | cpus_allowed = current->cpus_allowed; | 64 | cpus_allowed = current->cpus_allowed; |
68 | set_cpus_allowed_ptr(current, cpumask_of(cpu)); | 65 | set_cpus_allowed_ptr(current, cpumask_of(cpu)); |
69 | 66 | ||
@@ -80,7 +77,6 @@ static int loongson2_cpufreq_target(struct cpufreq_policy *policy, | |||
80 | 77 | ||
81 | pr_debug("cpufreq: requested frequency %u Hz\n", target_freq * 1000); | 78 | pr_debug("cpufreq: requested frequency %u Hz\n", target_freq * 1000); |
82 | 79 | ||
83 | freqs.cpu = cpu; | ||
84 | freqs.old = loongson2_cpufreq_get(cpu); | 80 | freqs.old = loongson2_cpufreq_get(cpu); |
85 | freqs.new = freq; | 81 | freqs.new = freq; |
86 | freqs.flags = 0; | 82 | freqs.flags = 0; |
@@ -89,7 +85,7 @@ static int loongson2_cpufreq_target(struct cpufreq_policy *policy, | |||
89 | return 0; | 85 | return 0; |
90 | 86 | ||
91 | /* notifiers */ | 87 | /* notifiers */ |
92 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 88 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
93 | 89 | ||
94 | set_cpus_allowed_ptr(current, &cpus_allowed); | 90 | set_cpus_allowed_ptr(current, &cpus_allowed); |
95 | 91 | ||
@@ -97,7 +93,7 @@ static int loongson2_cpufreq_target(struct cpufreq_policy *policy, | |||
97 | clk_set_rate(cpuclk, freq); | 93 | clk_set_rate(cpuclk, freq); |
98 | 94 | ||
99 | /* notifiers */ | 95 | /* notifiers */ |
100 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 96 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
101 | 97 | ||
102 | pr_debug("cpufreq: set frequency %u kHz\n", freq); | 98 | pr_debug("cpufreq: set frequency %u kHz\n", freq); |
103 | 99 | ||
@@ -110,9 +106,6 @@ static int loongson2_cpufreq_cpu_init(struct cpufreq_policy *policy) | |||
110 | unsigned long rate; | 106 | unsigned long rate; |
111 | int ret; | 107 | int ret; |
112 | 108 | ||
113 | if (!cpu_online(policy->cpu)) | ||
114 | return -ENODEV; | ||
115 | |||
116 | cpuclk = clk_get(NULL, "cpu_clk"); | 109 | cpuclk = clk_get(NULL, "cpu_clk"); |
117 | if (IS_ERR(cpuclk)) { | 110 | if (IS_ERR(cpuclk)) { |
118 | printk(KERN_ERR "cpufreq: couldn't get CPU clk\n"); | 111 | printk(KERN_ERR "cpufreq: couldn't get CPU clk\n"); |
diff --git a/drivers/cpufreq/maple-cpufreq.c b/drivers/cpufreq/maple-cpufreq.c index d4c4989823dc..cdd62915efaf 100644 --- a/drivers/cpufreq/maple-cpufreq.c +++ b/drivers/cpufreq/maple-cpufreq.c | |||
@@ -158,11 +158,10 @@ static int maple_cpufreq_target(struct cpufreq_policy *policy, | |||
158 | 158 | ||
159 | freqs.old = maple_cpu_freqs[maple_pmode_cur].frequency; | 159 | freqs.old = maple_cpu_freqs[maple_pmode_cur].frequency; |
160 | freqs.new = maple_cpu_freqs[newstate].frequency; | 160 | freqs.new = maple_cpu_freqs[newstate].frequency; |
161 | freqs.cpu = 0; | ||
162 | 161 | ||
163 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 162 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
164 | rc = maple_scom_switch_freq(newstate); | 163 | rc = maple_scom_switch_freq(newstate); |
165 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 164 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
166 | 165 | ||
167 | mutex_unlock(&maple_switch_mutex); | 166 | mutex_unlock(&maple_switch_mutex); |
168 | 167 | ||
diff --git a/drivers/cpufreq/omap-cpufreq.c b/drivers/cpufreq/omap-cpufreq.c index 9128c07bafba..0279d18a57f9 100644 --- a/drivers/cpufreq/omap-cpufreq.c +++ b/drivers/cpufreq/omap-cpufreq.c | |||
@@ -25,6 +25,7 @@ | |||
25 | #include <linux/opp.h> | 25 | #include <linux/opp.h> |
26 | #include <linux/cpu.h> | 26 | #include <linux/cpu.h> |
27 | #include <linux/module.h> | 27 | #include <linux/module.h> |
28 | #include <linux/platform_device.h> | ||
28 | #include <linux/regulator/consumer.h> | 29 | #include <linux/regulator/consumer.h> |
29 | 30 | ||
30 | #include <asm/smp_plat.h> | 31 | #include <asm/smp_plat.h> |
@@ -88,16 +89,12 @@ static int omap_target(struct cpufreq_policy *policy, | |||
88 | } | 89 | } |
89 | 90 | ||
90 | freqs.old = omap_getspeed(policy->cpu); | 91 | freqs.old = omap_getspeed(policy->cpu); |
91 | freqs.cpu = policy->cpu; | ||
92 | 92 | ||
93 | if (freqs.old == freqs.new && policy->cur == freqs.new) | 93 | if (freqs.old == freqs.new && policy->cur == freqs.new) |
94 | return ret; | 94 | return ret; |
95 | 95 | ||
96 | /* notifiers */ | 96 | /* notifiers */ |
97 | for_each_cpu(i, policy->cpus) { | 97 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
98 | freqs.cpu = i; | ||
99 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | ||
100 | } | ||
101 | 98 | ||
102 | freq = freqs.new * 1000; | 99 | freq = freqs.new * 1000; |
103 | ret = clk_round_rate(mpu_clk, freq); | 100 | ret = clk_round_rate(mpu_clk, freq); |
@@ -157,10 +154,7 @@ static int omap_target(struct cpufreq_policy *policy, | |||
157 | 154 | ||
158 | done: | 155 | done: |
159 | /* notifiers */ | 156 | /* notifiers */ |
160 | for_each_cpu(i, policy->cpus) { | 157 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
161 | freqs.cpu = i; | ||
162 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | ||
163 | } | ||
164 | 158 | ||
165 | return ret; | 159 | return ret; |
166 | } | 160 | } |
@@ -184,7 +178,7 @@ static int __cpuinit omap_cpu_init(struct cpufreq_policy *policy) | |||
184 | goto fail_ck; | 178 | goto fail_ck; |
185 | } | 179 | } |
186 | 180 | ||
187 | policy->cur = policy->min = policy->max = omap_getspeed(policy->cpu); | 181 | policy->cur = omap_getspeed(policy->cpu); |
188 | 182 | ||
189 | if (!freq_table) | 183 | if (!freq_table) |
190 | result = opp_init_cpufreq_table(mpu_dev, &freq_table); | 184 | result = opp_init_cpufreq_table(mpu_dev, &freq_table); |
@@ -203,8 +197,6 @@ static int __cpuinit omap_cpu_init(struct cpufreq_policy *policy) | |||
203 | 197 | ||
204 | cpufreq_frequency_table_get_attr(freq_table, policy->cpu); | 198 | cpufreq_frequency_table_get_attr(freq_table, policy->cpu); |
205 | 199 | ||
206 | policy->min = policy->cpuinfo.min_freq; | ||
207 | policy->max = policy->cpuinfo.max_freq; | ||
208 | policy->cur = omap_getspeed(policy->cpu); | 200 | policy->cur = omap_getspeed(policy->cpu); |
209 | 201 | ||
210 | /* | 202 | /* |
@@ -252,7 +244,7 @@ static struct cpufreq_driver omap_driver = { | |||
252 | .attr = omap_cpufreq_attr, | 244 | .attr = omap_cpufreq_attr, |
253 | }; | 245 | }; |
254 | 246 | ||
255 | static int __init omap_cpufreq_init(void) | 247 | static int omap_cpufreq_probe(struct platform_device *pdev) |
256 | { | 248 | { |
257 | mpu_dev = get_cpu_device(0); | 249 | mpu_dev = get_cpu_device(0); |
258 | if (!mpu_dev) { | 250 | if (!mpu_dev) { |
@@ -280,12 +272,20 @@ static int __init omap_cpufreq_init(void) | |||
280 | return cpufreq_register_driver(&omap_driver); | 272 | return cpufreq_register_driver(&omap_driver); |
281 | } | 273 | } |
282 | 274 | ||
283 | static void __exit omap_cpufreq_exit(void) | 275 | static int omap_cpufreq_remove(struct platform_device *pdev) |
284 | { | 276 | { |
285 | cpufreq_unregister_driver(&omap_driver); | 277 | return cpufreq_unregister_driver(&omap_driver); |
286 | } | 278 | } |
287 | 279 | ||
280 | static struct platform_driver omap_cpufreq_platdrv = { | ||
281 | .driver = { | ||
282 | .name = "omap-cpufreq", | ||
283 | .owner = THIS_MODULE, | ||
284 | }, | ||
285 | .probe = omap_cpufreq_probe, | ||
286 | .remove = omap_cpufreq_remove, | ||
287 | }; | ||
288 | module_platform_driver(omap_cpufreq_platdrv); | ||
289 | |||
288 | MODULE_DESCRIPTION("cpufreq driver for OMAP SoCs"); | 290 | MODULE_DESCRIPTION("cpufreq driver for OMAP SoCs"); |
289 | MODULE_LICENSE("GPL"); | 291 | MODULE_LICENSE("GPL"); |
290 | module_init(omap_cpufreq_init); | ||
291 | module_exit(omap_cpufreq_exit); | ||
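The OMAP driver stops registering itself from module_init() and instead becomes a platform_driver, so it is only bound when an "omap-cpufreq" platform device is instantiated by board or DT code. Reduced to its skeleton, with the probe-time clock and OPP checks elided, the shape is:

    static int omap_cpufreq_probe(struct platform_device *pdev)
    {
            return cpufreq_register_driver(&omap_driver);
    }

    static int omap_cpufreq_remove(struct platform_device *pdev)
    {
            return cpufreq_unregister_driver(&omap_driver);
    }

    static struct platform_driver omap_cpufreq_platdrv = {
            .driver = {
                    .name   = "omap-cpufreq",
                    .owner  = THIS_MODULE,
            },
            .probe  = omap_cpufreq_probe,
            .remove = omap_cpufreq_remove,
    };
    module_platform_driver(omap_cpufreq_platdrv);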
diff --git a/drivers/cpufreq/p4-clockmod.c b/drivers/cpufreq/p4-clockmod.c index 827629c9aad7..421ef37d0bb3 100644 --- a/drivers/cpufreq/p4-clockmod.c +++ b/drivers/cpufreq/p4-clockmod.c | |||
@@ -58,8 +58,7 @@ static int cpufreq_p4_setdc(unsigned int cpu, unsigned int newstate) | |||
58 | { | 58 | { |
59 | u32 l, h; | 59 | u32 l, h; |
60 | 60 | ||
61 | if (!cpu_online(cpu) || | 61 | if ((newstate > DC_DISABLE) || (newstate == DC_RESV)) |
62 | (newstate > DC_DISABLE) || (newstate == DC_RESV)) | ||
63 | return -EINVAL; | 62 | return -EINVAL; |
64 | 63 | ||
65 | rdmsr_on_cpu(cpu, MSR_IA32_THERM_STATUS, &l, &h); | 64 | rdmsr_on_cpu(cpu, MSR_IA32_THERM_STATUS, &l, &h); |
@@ -125,10 +124,7 @@ static int cpufreq_p4_target(struct cpufreq_policy *policy, | |||
125 | return 0; | 124 | return 0; |
126 | 125 | ||
127 | /* notifiers */ | 126 | /* notifiers */ |
128 | for_each_cpu(i, policy->cpus) { | 127 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
129 | freqs.cpu = i; | ||
130 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | ||
131 | } | ||
132 | 128 | ||
133 | /* run on each logical CPU, | 129 | /* run on each logical CPU, |
134 | * see section 13.15.3 of IA32 Intel Architecture Software | 130 | * see section 13.15.3 of IA32 Intel Architecture Software |
@@ -138,10 +134,7 @@ static int cpufreq_p4_target(struct cpufreq_policy *policy, | |||
138 | cpufreq_p4_setdc(i, p4clockmod_table[newstate].index); | 134 | cpufreq_p4_setdc(i, p4clockmod_table[newstate].index); |
139 | 135 | ||
140 | /* notifiers */ | 136 | /* notifiers */ |
141 | for_each_cpu(i, policy->cpus) { | 137 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
142 | freqs.cpu = i; | ||
143 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | ||
144 | } | ||
145 | 138 | ||
146 | return 0; | 139 | return 0; |
147 | } | 140 | } |
diff --git a/drivers/cpufreq/pcc-cpufreq.c b/drivers/cpufreq/pcc-cpufreq.c index 503996a94a6a..0de00081a81e 100644 --- a/drivers/cpufreq/pcc-cpufreq.c +++ b/drivers/cpufreq/pcc-cpufreq.c | |||
@@ -215,8 +215,7 @@ static int pcc_cpufreq_target(struct cpufreq_policy *policy, | |||
215 | (pcch_virt_addr + pcc_cpu_data->input_offset)); | 215 | (pcch_virt_addr + pcc_cpu_data->input_offset)); |
216 | 216 | ||
217 | freqs.new = target_freq; | 217 | freqs.new = target_freq; |
218 | freqs.cpu = cpu; | 218 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
219 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | ||
220 | 219 | ||
221 | input_buffer = 0x1 | (((target_freq * 100) | 220 | input_buffer = 0x1 | (((target_freq * 100) |
222 | / (ioread32(&pcch_hdr->nominal) * 1000)) << 8); | 221 | / (ioread32(&pcch_hdr->nominal) * 1000)) << 8); |
@@ -237,7 +236,7 @@ static int pcc_cpufreq_target(struct cpufreq_policy *policy, | |||
237 | } | 236 | } |
238 | iowrite16(0, &pcch_hdr->status); | 237 | iowrite16(0, &pcch_hdr->status); |
239 | 238 | ||
240 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 239 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
241 | pr_debug("target: was SUCCESSFUL for cpu %d\n", cpu); | 240 | pr_debug("target: was SUCCESSFUL for cpu %d\n", cpu); |
242 | spin_unlock(&pcc_lock); | 241 | spin_unlock(&pcc_lock); |
243 | 242 | ||
diff --git a/drivers/cpufreq/powernow-k6.c b/drivers/cpufreq/powernow-k6.c index af23e0b9ec92..ea0222a45b7b 100644 --- a/drivers/cpufreq/powernow-k6.c +++ b/drivers/cpufreq/powernow-k6.c | |||
@@ -68,7 +68,8 @@ static int powernow_k6_get_cpu_multiplier(void) | |||
68 | * | 68 | * |
69 | * Tries to change the PowerNow! multiplier | 69 | * Tries to change the PowerNow! multiplier |
70 | */ | 70 | */ |
71 | static void powernow_k6_set_state(unsigned int best_i) | 71 | static void powernow_k6_set_state(struct cpufreq_policy *policy, |
72 | unsigned int best_i) | ||
72 | { | 73 | { |
73 | unsigned long outvalue = 0, invalue = 0; | 74 | unsigned long outvalue = 0, invalue = 0; |
74 | unsigned long msrval; | 75 | unsigned long msrval; |
@@ -81,9 +82,8 @@ static void powernow_k6_set_state(unsigned int best_i) | |||
81 | 82 | ||
82 | freqs.old = busfreq * powernow_k6_get_cpu_multiplier(); | 83 | freqs.old = busfreq * powernow_k6_get_cpu_multiplier(); |
83 | freqs.new = busfreq * clock_ratio[best_i].index; | 84 | freqs.new = busfreq * clock_ratio[best_i].index; |
84 | freqs.cpu = 0; /* powernow-k6.c is UP only driver */ | ||
85 | 85 | ||
86 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 86 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
87 | 87 | ||
88 | /* we now need to transform best_i to the BVC format, see AMD#23446 */ | 88 | /* we now need to transform best_i to the BVC format, see AMD#23446 */ |
89 | 89 | ||
@@ -98,7 +98,7 @@ static void powernow_k6_set_state(unsigned int best_i) | |||
98 | msrval = POWERNOW_IOPORT + 0x0; | 98 | msrval = POWERNOW_IOPORT + 0x0; |
99 | wrmsr(MSR_K6_EPMR, msrval, 0); /* disable it again */ | 99 | wrmsr(MSR_K6_EPMR, msrval, 0); /* disable it again */ |
100 | 100 | ||
101 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 101 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
102 | 102 | ||
103 | return; | 103 | return; |
104 | } | 104 | } |
@@ -136,7 +136,7 @@ static int powernow_k6_target(struct cpufreq_policy *policy, | |||
136 | target_freq, relation, &newstate)) | 136 | target_freq, relation, &newstate)) |
137 | return -EINVAL; | 137 | return -EINVAL; |
138 | 138 | ||
139 | powernow_k6_set_state(newstate); | 139 | powernow_k6_set_state(policy, newstate); |
140 | 140 | ||
141 | return 0; | 141 | return 0; |
142 | } | 142 | } |
@@ -182,7 +182,7 @@ static int powernow_k6_cpu_exit(struct cpufreq_policy *policy) | |||
182 | unsigned int i; | 182 | unsigned int i; |
183 | for (i = 0; i < 8; i++) { | 183 | for (i = 0; i < 8; i++) { |
184 | if (i == max_multiplier) | 184 | if (i == max_multiplier) |
185 | powernow_k6_set_state(i); | 185 | powernow_k6_set_state(policy, i); |
186 | } | 186 | } |
187 | cpufreq_frequency_table_put_attr(policy->cpu); | 187 | cpufreq_frequency_table_put_attr(policy->cpu); |
188 | return 0; | 188 | return 0; |
diff --git a/drivers/cpufreq/powernow-k7.c b/drivers/cpufreq/powernow-k7.c index 334cc2f1e9f1..53888dacbe58 100644 --- a/drivers/cpufreq/powernow-k7.c +++ b/drivers/cpufreq/powernow-k7.c | |||
@@ -248,7 +248,7 @@ static void change_VID(int vid) | |||
248 | } | 248 | } |
249 | 249 | ||
250 | 250 | ||
251 | static void change_speed(unsigned int index) | 251 | static void change_speed(struct cpufreq_policy *policy, unsigned int index) |
252 | { | 252 | { |
253 | u8 fid, vid; | 253 | u8 fid, vid; |
254 | struct cpufreq_freqs freqs; | 254 | struct cpufreq_freqs freqs; |
@@ -263,15 +263,13 @@ static void change_speed(unsigned int index) | |||
263 | fid = powernow_table[index].index & 0xFF; | 263 | fid = powernow_table[index].index & 0xFF; |
264 | vid = (powernow_table[index].index & 0xFF00) >> 8; | 264 | vid = (powernow_table[index].index & 0xFF00) >> 8; |
265 | 265 | ||
266 | freqs.cpu = 0; | ||
267 | |||
268 | rdmsrl(MSR_K7_FID_VID_STATUS, fidvidstatus.val); | 266 | rdmsrl(MSR_K7_FID_VID_STATUS, fidvidstatus.val); |
269 | cfid = fidvidstatus.bits.CFID; | 267 | cfid = fidvidstatus.bits.CFID; |
270 | freqs.old = fsb * fid_codes[cfid] / 10; | 268 | freqs.old = fsb * fid_codes[cfid] / 10; |
271 | 269 | ||
272 | freqs.new = powernow_table[index].frequency; | 270 | freqs.new = powernow_table[index].frequency; |
273 | 271 | ||
274 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 272 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
275 | 273 | ||
276 | /* Now do the magic poking into the MSRs. */ | 274 | /* Now do the magic poking into the MSRs. */ |
277 | 275 | ||
@@ -292,7 +290,7 @@ static void change_speed(unsigned int index) | |||
292 | if (have_a0 == 1) | 290 | if (have_a0 == 1) |
293 | local_irq_enable(); | 291 | local_irq_enable(); |
294 | 292 | ||
295 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 293 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
296 | } | 294 | } |
297 | 295 | ||
298 | 296 | ||
@@ -546,7 +544,7 @@ static int powernow_target(struct cpufreq_policy *policy, | |||
546 | relation, &newstate)) | 544 | relation, &newstate)) |
547 | return -EINVAL; | 545 | return -EINVAL; |
548 | 546 | ||
549 | change_speed(newstate); | 547 | change_speed(policy, newstate); |
550 | 548 | ||
551 | return 0; | 549 | return 0; |
552 | } | 550 | } |
diff --git a/drivers/cpufreq/powernow-k8.c b/drivers/cpufreq/powernow-k8.c index d13a13678b5f..b828efe4b2f8 100644 --- a/drivers/cpufreq/powernow-k8.c +++ b/drivers/cpufreq/powernow-k8.c | |||
@@ -928,9 +928,10 @@ static int get_transition_latency(struct powernow_k8_data *data) | |||
928 | static int transition_frequency_fidvid(struct powernow_k8_data *data, | 928 | static int transition_frequency_fidvid(struct powernow_k8_data *data, |
929 | unsigned int index) | 929 | unsigned int index) |
930 | { | 930 | { |
931 | struct cpufreq_policy *policy; | ||
931 | u32 fid = 0; | 932 | u32 fid = 0; |
932 | u32 vid = 0; | 933 | u32 vid = 0; |
933 | int res, i; | 934 | int res; |
934 | struct cpufreq_freqs freqs; | 935 | struct cpufreq_freqs freqs; |
935 | 936 | ||
936 | pr_debug("cpu %d transition to index %u\n", smp_processor_id(), index); | 937 | pr_debug("cpu %d transition to index %u\n", smp_processor_id(), index); |
@@ -959,10 +960,10 @@ static int transition_frequency_fidvid(struct powernow_k8_data *data, | |||
959 | freqs.old = find_khz_freq_from_fid(data->currfid); | 960 | freqs.old = find_khz_freq_from_fid(data->currfid); |
960 | freqs.new = find_khz_freq_from_fid(fid); | 961 | freqs.new = find_khz_freq_from_fid(fid); |
961 | 962 | ||
962 | for_each_cpu(i, data->available_cores) { | 963 | policy = cpufreq_cpu_get(smp_processor_id()); |
963 | freqs.cpu = i; | 964 | cpufreq_cpu_put(policy); |
964 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 965 | |
965 | } | 966 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
966 | 967 | ||
967 | res = transition_fid_vid(data, fid, vid); | 968 | res = transition_fid_vid(data, fid, vid); |
968 | if (res) | 969 | if (res) |
@@ -970,10 +971,7 @@ static int transition_frequency_fidvid(struct powernow_k8_data *data, | |||
970 | 971 | ||
971 | freqs.new = find_khz_freq_from_fid(data->currfid); | 972 | freqs.new = find_khz_freq_from_fid(data->currfid); |
972 | 973 | ||
973 | for_each_cpu(i, data->available_cores) { | 974 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
974 | freqs.cpu = i; | ||
975 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | ||
976 | } | ||
977 | return res; | 975 | return res; |
978 | } | 976 | } |
979 | 977 | ||
@@ -1104,9 +1102,6 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol) | |||
1104 | struct init_on_cpu init_on_cpu; | 1102 | struct init_on_cpu init_on_cpu; |
1105 | int rc; | 1103 | int rc; |
1106 | 1104 | ||
1107 | if (!cpu_online(pol->cpu)) | ||
1108 | return -ENODEV; | ||
1109 | |||
1110 | smp_call_function_single(pol->cpu, check_supported_cpu, &rc, 1); | 1105 | smp_call_function_single(pol->cpu, check_supported_cpu, &rc, 1); |
1111 | if (rc) | 1106 | if (rc) |
1112 | return -ENODEV; | 1107 | return -ENODEV; |
diff --git a/arch/powerpc/platforms/cell/cbe_cpufreq.c b/drivers/cpufreq/ppc_cbe_cpufreq.c index d4c39e32f147..e577a1dbbfcd 100644 --- a/arch/powerpc/platforms/cell/cbe_cpufreq.c +++ b/drivers/cpufreq/ppc_cbe_cpufreq.c | |||
@@ -27,7 +27,8 @@ | |||
27 | #include <asm/machdep.h> | 27 | #include <asm/machdep.h> |
28 | #include <asm/prom.h> | 28 | #include <asm/prom.h> |
29 | #include <asm/cell-regs.h> | 29 | #include <asm/cell-regs.h> |
30 | #include "cbe_cpufreq.h" | 30 | |
31 | #include "ppc_cbe_cpufreq.h" | ||
31 | 32 | ||
32 | static DEFINE_MUTEX(cbe_switch_mutex); | 33 | static DEFINE_MUTEX(cbe_switch_mutex); |
33 | 34 | ||
@@ -156,10 +157,9 @@ static int cbe_cpufreq_target(struct cpufreq_policy *policy, | |||
156 | 157 | ||
157 | freqs.old = policy->cur; | 158 | freqs.old = policy->cur; |
158 | freqs.new = cbe_freqs[cbe_pmode_new].frequency; | 159 | freqs.new = cbe_freqs[cbe_pmode_new].frequency; |
159 | freqs.cpu = policy->cpu; | ||
160 | 160 | ||
161 | mutex_lock(&cbe_switch_mutex); | 161 | mutex_lock(&cbe_switch_mutex); |
162 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 162 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
163 | 163 | ||
164 | pr_debug("setting frequency for cpu %d to %d kHz, " \ | 164 | pr_debug("setting frequency for cpu %d to %d kHz, " \ |
165 | "1/%d of max frequency\n", | 165 | "1/%d of max frequency\n", |
@@ -169,7 +169,7 @@ static int cbe_cpufreq_target(struct cpufreq_policy *policy, | |||
169 | 169 | ||
170 | rc = set_pmode(policy->cpu, cbe_pmode_new); | 170 | rc = set_pmode(policy->cpu, cbe_pmode_new); |
171 | 171 | ||
172 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 172 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
173 | mutex_unlock(&cbe_switch_mutex); | 173 | mutex_unlock(&cbe_switch_mutex); |
174 | 174 | ||
175 | return rc; | 175 | return rc; |
diff --git a/arch/powerpc/platforms/cell/cbe_cpufreq.h b/drivers/cpufreq/ppc_cbe_cpufreq.h index c1d86bfa92ff..b4c00a5a6a59 100644 --- a/arch/powerpc/platforms/cell/cbe_cpufreq.h +++ b/drivers/cpufreq/ppc_cbe_cpufreq.h | |||
@@ -1,5 +1,5 @@ | |||
1 | /* | 1 | /* |
2 | * cbe_cpufreq.h | 2 | * ppc_cbe_cpufreq.h |
3 | * | 3 | * |
4 | * This file contains the definitions used by the cbe_cpufreq driver. | 4 | * This file contains the definitions used by the cbe_cpufreq driver. |
5 | * | 5 | * |
@@ -17,7 +17,7 @@ int cbe_cpufreq_get_pmode(int cpu); | |||
17 | 17 | ||
18 | int cbe_cpufreq_set_pmode_pmi(int cpu, unsigned int pmode); | 18 | int cbe_cpufreq_set_pmode_pmi(int cpu, unsigned int pmode); |
19 | 19 | ||
20 | #if defined(CONFIG_CBE_CPUFREQ_PMI) || defined(CONFIG_CBE_CPUFREQ_PMI_MODULE) | 20 | #if defined(CONFIG_CPU_FREQ_CBE_PMI) || defined(CONFIG_CPU_FREQ_CBE_PMI_MODULE) |
21 | extern bool cbe_cpufreq_has_pmi; | 21 | extern bool cbe_cpufreq_has_pmi; |
22 | #else | 22 | #else |
23 | #define cbe_cpufreq_has_pmi (0) | 23 | #define cbe_cpufreq_has_pmi (0) |
diff --git a/arch/powerpc/platforms/cell/cbe_cpufreq_pervasive.c b/drivers/cpufreq/ppc_cbe_cpufreq_pervasive.c index 20472e487b6f..84d2f2cf5ba7 100644 --- a/arch/powerpc/platforms/cell/cbe_cpufreq_pervasive.c +++ b/drivers/cpufreq/ppc_cbe_cpufreq_pervasive.c | |||
@@ -30,7 +30,7 @@ | |||
30 | #include <asm/hw_irq.h> | 30 | #include <asm/hw_irq.h> |
31 | #include <asm/cell-regs.h> | 31 | #include <asm/cell-regs.h> |
32 | 32 | ||
33 | #include "cbe_cpufreq.h" | 33 | #include "ppc_cbe_cpufreq.h" |
34 | 34 | ||
35 | /* to write to MIC register */ | 35 | /* to write to MIC register */ |
36 | static u64 MIC_Slow_Fast_Timer_table[] = { | 36 | static u64 MIC_Slow_Fast_Timer_table[] = { |
diff --git a/arch/powerpc/platforms/cell/cbe_cpufreq_pmi.c b/drivers/cpufreq/ppc_cbe_cpufreq_pmi.c index 60a07a4f9326..d29e8da396a0 100644 --- a/arch/powerpc/platforms/cell/cbe_cpufreq_pmi.c +++ b/drivers/cpufreq/ppc_cbe_cpufreq_pmi.c | |||
@@ -35,7 +35,7 @@ | |||
35 | #include <asm/time.h> | 35 | #include <asm/time.h> |
36 | #endif | 36 | #endif |
37 | 37 | ||
38 | #include "cbe_cpufreq.h" | 38 | #include "ppc_cbe_cpufreq.h" |
39 | 39 | ||
40 | static u8 pmi_slow_mode_limit[MAX_CBE]; | 40 | static u8 pmi_slow_mode_limit[MAX_CBE]; |
41 | 41 | ||
diff --git a/arch/arm/mach-pxa/cpufreq-pxa2xx.c b/drivers/cpufreq/pxa2xx-cpufreq.c index 6a7aeab42f6c..9e5bc8e388a0 100644 --- a/arch/arm/mach-pxa/cpufreq-pxa2xx.c +++ b/drivers/cpufreq/pxa2xx-cpufreq.c | |||
@@ -1,6 +1,4 @@ | |||
1 | /* | 1 | /* |
2 | * linux/arch/arm/mach-pxa/cpufreq-pxa2xx.c | ||
3 | * | ||
4 | * Copyright (C) 2002,2003 Intrinsyc Software | 2 | * Copyright (C) 2002,2003 Intrinsyc Software |
5 | * | 3 | * |
6 | * This program is free software; you can redistribute it and/or modify | 4 | * This program is free software; you can redistribute it and/or modify |
@@ -223,10 +221,11 @@ static void find_freq_tables(struct cpufreq_frequency_table **freq_table, | |||
223 | *pxa_freqs = pxa255_turbo_freqs; | 221 | *pxa_freqs = pxa255_turbo_freqs; |
224 | *freq_table = pxa255_turbo_freq_table; | 222 | *freq_table = pxa255_turbo_freq_table; |
225 | } | 223 | } |
226 | } | 224 | } else if (cpu_is_pxa27x()) { |
227 | if (cpu_is_pxa27x()) { | ||
228 | *pxa_freqs = pxa27x_freqs; | 225 | *pxa_freqs = pxa27x_freqs; |
229 | *freq_table = pxa27x_freq_table; | 226 | *freq_table = pxa27x_freq_table; |
227 | } else { | ||
228 | BUG(); | ||
230 | } | 229 | } |
231 | } | 230 | } |
232 | 231 | ||
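The find_freq_tables() change in the hunk above turns two independent if blocks into an if/else-if chain and adds a BUG() fallback, so an unrecognized SoC can no longer fall through with the table pointers left unset. A hedged sketch of the resulting selection (the pxa25x branch and its table names are abbreviated from the original driver, not shown in this hunk):

    if (cpu_is_pxa25x()) {
            /* run vs. turbo table selection as in the hunk above */
            *pxa_freqs = pxa255_turbo_freqs;
            *freq_table = pxa255_turbo_freq_table;
    } else if (cpu_is_pxa27x()) {
            *pxa_freqs = pxa27x_freqs;
            *freq_table = pxa27x_freq_table;
    } else {
            BUG();          /* unsupported SoC: fail loudly rather than scale with bogus tables */
    }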
@@ -311,7 +310,6 @@ static int pxa_set_target(struct cpufreq_policy *policy, | |||
311 | new_freq_mem = pxa_freq_settings[idx].membus; | 310 | new_freq_mem = pxa_freq_settings[idx].membus; |
312 | freqs.old = policy->cur; | 311 | freqs.old = policy->cur; |
313 | freqs.new = new_freq_cpu; | 312 | freqs.new = new_freq_cpu; |
314 | freqs.cpu = policy->cpu; | ||
315 | 313 | ||
316 | if (freq_debug) | 314 | if (freq_debug) |
317 | pr_debug("Changing CPU frequency to %d Mhz, (SDRAM %d Mhz)\n", | 315 | pr_debug("Changing CPU frequency to %d Mhz, (SDRAM %d Mhz)\n", |
@@ -327,7 +325,7 @@ static int pxa_set_target(struct cpufreq_policy *policy, | |||
327 | * you should add a notify client with any platform specific | 325 | * you should add a notify client with any platform specific |
328 | * Vcc changing capability | 326 | * Vcc changing capability |
329 | */ | 327 | */ |
330 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 328 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
331 | 329 | ||
332 | /* Calculate the next MDREFR. If we're slowing down the SDRAM clock | 330 | /* Calculate the next MDREFR. If we're slowing down the SDRAM clock |
333 | * we need to preset the smaller DRI before the change. If we're | 331 | * we need to preset the smaller DRI before the change. If we're |
@@ -382,7 +380,7 @@ static int pxa_set_target(struct cpufreq_policy *policy, | |||
382 | * you should add a notify client with any platform specific | 380 | * you should add a notify client with any platform specific |
383 | * SDRAM refresh timer adjustments | 381 | * SDRAM refresh timer adjustments |
384 | */ | 382 | */ |
385 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 383 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
386 | 384 | ||
387 | /* | 385 | /* |
388 | * Even if voltage setting fails, we don't report it, as the frequency | 386 | * Even if voltage setting fails, we don't report it, as the frequency |
diff --git a/arch/arm/mach-pxa/cpufreq-pxa3xx.c b/drivers/cpufreq/pxa3xx-cpufreq.c index b85b4ab7aac6..15d60f857ad5 100644 --- a/arch/arm/mach-pxa/cpufreq-pxa3xx.c +++ b/drivers/cpufreq/pxa3xx-cpufreq.c | |||
@@ -1,6 +1,4 @@ | |||
1 | /* | 1 | /* |
2 | * linux/arch/arm/mach-pxa/cpufreq-pxa3xx.c | ||
3 | * | ||
4 | * Copyright (C) 2008 Marvell International Ltd. | 2 | * Copyright (C) 2008 Marvell International Ltd. |
5 | * | 3 | * |
6 | * This program is free software; you can redistribute it and/or modify | 4 | * This program is free software; you can redistribute it and/or modify |
@@ -17,10 +15,9 @@ | |||
17 | #include <linux/slab.h> | 15 | #include <linux/slab.h> |
18 | #include <linux/io.h> | 16 | #include <linux/io.h> |
19 | 17 | ||
18 | #include <mach/generic.h> | ||
20 | #include <mach/pxa3xx-regs.h> | 19 | #include <mach/pxa3xx-regs.h> |
21 | 20 | ||
22 | #include "generic.h" | ||
23 | |||
24 | #define HSS_104M (0) | 21 | #define HSS_104M (0) |
25 | #define HSS_156M (1) | 22 | #define HSS_156M (1) |
26 | #define HSS_208M (2) | 23 | #define HSS_208M (2) |
@@ -184,7 +181,6 @@ static int pxa3xx_cpufreq_set(struct cpufreq_policy *policy, | |||
184 | 181 | ||
185 | freqs.old = policy->cur; | 182 | freqs.old = policy->cur; |
186 | freqs.new = next->cpufreq_mhz * 1000; | 183 | freqs.new = next->cpufreq_mhz * 1000; |
187 | freqs.cpu = policy->cpu; | ||
188 | 184 | ||
189 | pr_debug("CPU frequency from %d MHz to %d MHz%s\n", | 185 | pr_debug("CPU frequency from %d MHz to %d MHz%s\n", |
190 | freqs.old / 1000, freqs.new / 1000, | 186 | freqs.old / 1000, freqs.new / 1000, |
@@ -193,14 +189,14 @@ static int pxa3xx_cpufreq_set(struct cpufreq_policy *policy, | |||
193 | if (freqs.old == target_freq) | 189 | if (freqs.old == target_freq) |
194 | return 0; | 190 | return 0; |
195 | 191 | ||
196 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 192 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
197 | 193 | ||
198 | local_irq_save(flags); | 194 | local_irq_save(flags); |
199 | __update_core_freq(next); | 195 | __update_core_freq(next); |
200 | __update_bus_freq(next); | 196 | __update_bus_freq(next); |
201 | local_irq_restore(flags); | 197 | local_irq_restore(flags); |
202 | 198 | ||
203 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 199 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
204 | 200 | ||
205 | return 0; | 201 | return 0; |
206 | } | 202 | } |
diff --git a/drivers/cpufreq/s3c2416-cpufreq.c b/drivers/cpufreq/s3c2416-cpufreq.c index bcc053bc02c4..4f1881eee3f1 100644 --- a/drivers/cpufreq/s3c2416-cpufreq.c +++ b/drivers/cpufreq/s3c2416-cpufreq.c | |||
@@ -256,7 +256,6 @@ static int s3c2416_cpufreq_set_target(struct cpufreq_policy *policy, | |||
256 | goto out; | 256 | goto out; |
257 | } | 257 | } |
258 | 258 | ||
259 | freqs.cpu = 0; | ||
260 | freqs.flags = 0; | 259 | freqs.flags = 0; |
261 | freqs.old = s3c_freq->is_dvs ? FREQ_DVS | 260 | freqs.old = s3c_freq->is_dvs ? FREQ_DVS |
262 | : clk_get_rate(s3c_freq->armclk) / 1000; | 261 | : clk_get_rate(s3c_freq->armclk) / 1000; |
@@ -274,7 +273,7 @@ static int s3c2416_cpufreq_set_target(struct cpufreq_policy *policy, | |||
274 | if (!to_dvs && freqs.old == freqs.new) | 273 | if (!to_dvs && freqs.old == freqs.new) |
275 | goto out; | 274 | goto out; |
276 | 275 | ||
277 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 276 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
278 | 277 | ||
279 | if (to_dvs) { | 278 | if (to_dvs) { |
280 | pr_debug("cpufreq: enter dvs\n"); | 279 | pr_debug("cpufreq: enter dvs\n"); |
@@ -287,7 +286,7 @@ static int s3c2416_cpufreq_set_target(struct cpufreq_policy *policy, | |||
287 | ret = s3c2416_cpufreq_set_armdiv(s3c_freq, freqs.new); | 286 | ret = s3c2416_cpufreq_set_armdiv(s3c_freq, freqs.new); |
288 | } | 287 | } |
289 | 288 | ||
290 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 289 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
291 | 290 | ||
292 | out: | 291 | out: |
293 | mutex_unlock(&cpufreq_lock); | 292 | mutex_unlock(&cpufreq_lock); |
diff --git a/drivers/cpufreq/s3c64xx-cpufreq.c b/drivers/cpufreq/s3c64xx-cpufreq.c index 6f9490b3c356..27cacb524796 100644 --- a/drivers/cpufreq/s3c64xx-cpufreq.c +++ b/drivers/cpufreq/s3c64xx-cpufreq.c | |||
@@ -84,7 +84,6 @@ static int s3c64xx_cpufreq_set_target(struct cpufreq_policy *policy, | |||
84 | if (ret != 0) | 84 | if (ret != 0) |
85 | return ret; | 85 | return ret; |
86 | 86 | ||
87 | freqs.cpu = 0; | ||
88 | freqs.old = clk_get_rate(armclk) / 1000; | 87 | freqs.old = clk_get_rate(armclk) / 1000; |
89 | freqs.new = s3c64xx_freq_table[i].frequency; | 88 | freqs.new = s3c64xx_freq_table[i].frequency; |
90 | freqs.flags = 0; | 89 | freqs.flags = 0; |
@@ -95,7 +94,7 @@ static int s3c64xx_cpufreq_set_target(struct cpufreq_policy *policy, | |||
95 | 94 | ||
96 | pr_debug("Transition %d-%dkHz\n", freqs.old, freqs.new); | 95 | pr_debug("Transition %d-%dkHz\n", freqs.old, freqs.new); |
97 | 96 | ||
98 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 97 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
99 | 98 | ||
100 | #ifdef CONFIG_REGULATOR | 99 | #ifdef CONFIG_REGULATOR |
101 | if (vddarm && freqs.new > freqs.old) { | 100 | if (vddarm && freqs.new > freqs.old) { |
@@ -117,7 +116,7 @@ static int s3c64xx_cpufreq_set_target(struct cpufreq_policy *policy, | |||
117 | goto err; | 116 | goto err; |
118 | } | 117 | } |
119 | 118 | ||
120 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 119 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
121 | 120 | ||
122 | #ifdef CONFIG_REGULATOR | 121 | #ifdef CONFIG_REGULATOR |
123 | if (vddarm && freqs.new < freqs.old) { | 122 | if (vddarm && freqs.new < freqs.old) { |
@@ -141,7 +140,7 @@ err_clk: | |||
141 | if (clk_set_rate(armclk, freqs.old * 1000) < 0) | 140 | if (clk_set_rate(armclk, freqs.old * 1000) < 0) |
142 | pr_err("Failed to restore original clock rate\n"); | 141 | pr_err("Failed to restore original clock rate\n"); |
143 | err: | 142 | err: |
144 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 143 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
145 | 144 | ||
146 | return ret; | 145 | return ret; |
147 | } | 146 | } |
diff --git a/drivers/cpufreq/s5pv210-cpufreq.c b/drivers/cpufreq/s5pv210-cpufreq.c index a484aaea9809..5c7757073793 100644 --- a/drivers/cpufreq/s5pv210-cpufreq.c +++ b/drivers/cpufreq/s5pv210-cpufreq.c | |||
@@ -229,7 +229,6 @@ static int s5pv210_target(struct cpufreq_policy *policy, | |||
229 | } | 229 | } |
230 | 230 | ||
231 | freqs.new = s5pv210_freq_table[index].frequency; | 231 | freqs.new = s5pv210_freq_table[index].frequency; |
232 | freqs.cpu = 0; | ||
233 | 232 | ||
234 | if (freqs.new == freqs.old) | 233 | if (freqs.new == freqs.old) |
235 | goto exit; | 234 | goto exit; |
@@ -256,7 +255,7 @@ static int s5pv210_target(struct cpufreq_policy *policy, | |||
256 | goto exit; | 255 | goto exit; |
257 | } | 256 | } |
258 | 257 | ||
259 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 258 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
260 | 259 | ||
261 | /* Check if there need to change PLL */ | 260 | /* Check if there need to change PLL */ |
262 | if ((index == L0) || (priv_index == L0)) | 261 | if ((index == L0) || (priv_index == L0)) |
@@ -468,7 +467,7 @@ static int s5pv210_target(struct cpufreq_policy *policy, | |||
468 | } | 467 | } |
469 | } | 468 | } |
470 | 469 | ||
471 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 470 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
472 | 471 | ||
473 | if (freqs.new < freqs.old) { | 472 | if (freqs.new < freqs.old) { |
474 | regulator_set_voltage(int_regulator, | 473 | regulator_set_voltage(int_regulator, |
diff --git a/arch/arm/mach-sa1100/cpu-sa1100.c b/drivers/cpufreq/sa1100-cpufreq.c index e8f4d1e19233..cff18e87ca58 100644 --- a/arch/arm/mach-sa1100/cpu-sa1100.c +++ b/drivers/cpufreq/sa1100-cpufreq.c | |||
@@ -91,10 +91,9 @@ | |||
91 | 91 | ||
92 | #include <asm/cputype.h> | 92 | #include <asm/cputype.h> |
93 | 93 | ||
94 | #include <mach/generic.h> | ||
94 | #include <mach/hardware.h> | 95 | #include <mach/hardware.h> |
95 | 96 | ||
96 | #include "generic.h" | ||
97 | |||
98 | struct sa1100_dram_regs { | 97 | struct sa1100_dram_regs { |
99 | int speed; | 98 | int speed; |
100 | u32 mdcnfg; | 99 | u32 mdcnfg; |
@@ -201,9 +200,8 @@ static int sa1100_target(struct cpufreq_policy *policy, | |||
201 | 200 | ||
202 | freqs.old = cur; | 201 | freqs.old = cur; |
203 | freqs.new = sa11x0_ppcr_to_freq(new_ppcr); | 202 | freqs.new = sa11x0_ppcr_to_freq(new_ppcr); |
204 | freqs.cpu = 0; | ||
205 | 203 | ||
206 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 204 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
207 | 205 | ||
208 | if (freqs.new > cur) | 206 | if (freqs.new > cur) |
209 | sa1100_update_dram_timings(cur, freqs.new); | 207 | sa1100_update_dram_timings(cur, freqs.new); |
@@ -213,7 +211,7 @@ static int sa1100_target(struct cpufreq_policy *policy, | |||
213 | if (freqs.new < cur) | 211 | if (freqs.new < cur) |
214 | sa1100_update_dram_timings(cur, freqs.new); | 212 | sa1100_update_dram_timings(cur, freqs.new); |
215 | 213 | ||
216 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 214 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
217 | 215 | ||
218 | return 0; | 216 | return 0; |
219 | } | 217 | } |
diff --git a/arch/arm/mach-sa1100/cpu-sa1110.c b/drivers/cpufreq/sa1110-cpufreq.c index 48c45b0c92bb..39c90b6f4286 100644 --- a/arch/arm/mach-sa1100/cpu-sa1110.c +++ b/drivers/cpufreq/sa1110-cpufreq.c | |||
@@ -27,10 +27,9 @@ | |||
27 | #include <asm/cputype.h> | 27 | #include <asm/cputype.h> |
28 | #include <asm/mach-types.h> | 28 | #include <asm/mach-types.h> |
29 | 29 | ||
30 | #include <mach/generic.h> | ||
30 | #include <mach/hardware.h> | 31 | #include <mach/hardware.h> |
31 | 32 | ||
32 | #include "generic.h" | ||
33 | |||
34 | #undef DEBUG | 33 | #undef DEBUG |
35 | 34 | ||
36 | struct sdram_params { | 35 | struct sdram_params { |
@@ -258,7 +257,6 @@ static int sa1110_target(struct cpufreq_policy *policy, | |||
258 | 257 | ||
259 | freqs.old = sa11x0_getspeed(0); | 258 | freqs.old = sa11x0_getspeed(0); |
260 | freqs.new = sa11x0_ppcr_to_freq(ppcr); | 259 | freqs.new = sa11x0_ppcr_to_freq(ppcr); |
261 | freqs.cpu = 0; | ||
262 | 260 | ||
263 | sdram_calculate_timing(&sd, freqs.new, sdram); | 261 | sdram_calculate_timing(&sd, freqs.new, sdram); |
264 | 262 | ||
@@ -279,7 +277,7 @@ static int sa1110_target(struct cpufreq_policy *policy, | |||
279 | sd.mdcas[2] = 0xaaaaaaaa; | 277 | sd.mdcas[2] = 0xaaaaaaaa; |
280 | #endif | 278 | #endif |
281 | 279 | ||
282 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 280 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
283 | 281 | ||
284 | /* | 282 | /* |
285 | * The clock could be going away for some time. Set the SDRAMs | 283 | * The clock could be going away for some time. Set the SDRAMs |
@@ -327,7 +325,7 @@ static int sa1110_target(struct cpufreq_policy *policy, | |||
327 | */ | 325 | */ |
328 | sdram_update_refresh(freqs.new, sdram); | 326 | sdram_update_refresh(freqs.new, sdram); |
329 | 327 | ||
330 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 328 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
331 | 329 | ||
332 | return 0; | 330 | return 0; |
333 | } | 331 | } |
diff --git a/drivers/cpufreq/sc520_freq.c b/drivers/cpufreq/sc520_freq.c index e42e073cd9b8..f740b134d27b 100644 --- a/drivers/cpufreq/sc520_freq.c +++ b/drivers/cpufreq/sc520_freq.c | |||
@@ -53,7 +53,8 @@ static unsigned int sc520_freq_get_cpu_frequency(unsigned int cpu) | |||
53 | } | 53 | } |
54 | } | 54 | } |
55 | 55 | ||
56 | static void sc520_freq_set_cpu_state(unsigned int state) | 56 | static void sc520_freq_set_cpu_state(struct cpufreq_policy *policy, |
57 | unsigned int state) | ||
57 | { | 58 | { |
58 | 59 | ||
59 | struct cpufreq_freqs freqs; | 60 | struct cpufreq_freqs freqs; |
@@ -61,9 +62,8 @@ static void sc520_freq_set_cpu_state(unsigned int state) | |||
61 | 62 | ||
62 | freqs.old = sc520_freq_get_cpu_frequency(0); | 63 | freqs.old = sc520_freq_get_cpu_frequency(0); |
63 | freqs.new = sc520_freq_table[state].frequency; | 64 | freqs.new = sc520_freq_table[state].frequency; |
64 | freqs.cpu = 0; /* AMD Elan is UP */ | ||
65 | 65 | ||
66 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 66 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
67 | 67 | ||
68 | pr_debug("attempting to set frequency to %i kHz\n", | 68 | pr_debug("attempting to set frequency to %i kHz\n", |
69 | sc520_freq_table[state].frequency); | 69 | sc520_freq_table[state].frequency); |
@@ -75,7 +75,7 @@ static void sc520_freq_set_cpu_state(unsigned int state) | |||
75 | 75 | ||
76 | local_irq_enable(); | 76 | local_irq_enable(); |
77 | 77 | ||
78 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 78 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
79 | }; | 79 | }; |
80 | 80 | ||
81 | static int sc520_freq_verify(struct cpufreq_policy *policy) | 81 | static int sc520_freq_verify(struct cpufreq_policy *policy) |
@@ -93,7 +93,7 @@ static int sc520_freq_target(struct cpufreq_policy *policy, | |||
93 | target_freq, relation, &newstate)) | 93 | target_freq, relation, &newstate)) |
94 | return -EINVAL; | 94 | return -EINVAL; |
95 | 95 | ||
96 | sc520_freq_set_cpu_state(newstate); | 96 | sc520_freq_set_cpu_state(policy, newstate); |
97 | 97 | ||
98 | return 0; | 98 | return 0; |
99 | } | 99 | } |
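
Where the transition is issued from a helper rather than from ->target() itself (sc520 above; the sparc us2e/us3 and tegra drivers below follow suit), the fix is to thread the policy pointer down instead of a bare CPU number. A rough sketch of that helper shape, again with placeholder foo_* names:

	static void foo_set_cpu_state(struct cpufreq_policy *policy, unsigned int state)
	{
		unsigned int cpu = policy->cpu;	/* where the old code took 'cpu' directly */
		struct cpufreq_freqs freqs;

		freqs.old = foo_get_cpu_frequency(cpu);
		freqs.new = foo_freq_table[state].frequency;

		cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
		/* ... reprogram the divider/clock for this policy ... */
		cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
	}
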
diff --git a/arch/sh/kernel/cpufreq.c b/drivers/cpufreq/sh-cpufreq.c index e68b45b6f3f9..73adb64651e8 100644 --- a/arch/sh/kernel/cpufreq.c +++ b/drivers/cpufreq/sh-cpufreq.c | |||
@@ -1,6 +1,4 @@ | |||
1 | /* | 1 | /* |
2 | * arch/sh/kernel/cpufreq.c | ||
3 | * | ||
4 | * cpufreq driver for the SuperH processors. | 2 | * cpufreq driver for the SuperH processors. |
5 | * | 3 | * |
6 | * Copyright (C) 2002 - 2012 Paul Mundt | 4 | * Copyright (C) 2002 - 2012 Paul Mundt |
@@ -51,9 +49,6 @@ static int sh_cpufreq_target(struct cpufreq_policy *policy, | |||
51 | struct device *dev; | 49 | struct device *dev; |
52 | long freq; | 50 | long freq; |
53 | 51 | ||
54 | if (!cpu_online(cpu)) | ||
55 | return -ENODEV; | ||
56 | |||
57 | cpus_allowed = current->cpus_allowed; | 52 | cpus_allowed = current->cpus_allowed; |
58 | set_cpus_allowed_ptr(current, cpumask_of(cpu)); | 53 | set_cpus_allowed_ptr(current, cpumask_of(cpu)); |
59 | 54 | ||
@@ -69,15 +64,14 @@ static int sh_cpufreq_target(struct cpufreq_policy *policy, | |||
69 | 64 | ||
70 | dev_dbg(dev, "requested frequency %u Hz\n", target_freq * 1000); | 65 | dev_dbg(dev, "requested frequency %u Hz\n", target_freq * 1000); |
71 | 66 | ||
72 | freqs.cpu = cpu; | ||
73 | freqs.old = sh_cpufreq_get(cpu); | 67 | freqs.old = sh_cpufreq_get(cpu); |
74 | freqs.new = (freq + 500) / 1000; | 68 | freqs.new = (freq + 500) / 1000; |
75 | freqs.flags = 0; | 69 | freqs.flags = 0; |
76 | 70 | ||
77 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 71 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
78 | set_cpus_allowed_ptr(current, &cpus_allowed); | 72 | set_cpus_allowed_ptr(current, &cpus_allowed); |
79 | clk_set_rate(cpuclk, freq); | 73 | clk_set_rate(cpuclk, freq); |
80 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 74 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
81 | 75 | ||
82 | dev_dbg(dev, "set frequency %lu Hz\n", freq); | 76 | dev_dbg(dev, "set frequency %lu Hz\n", freq); |
83 | 77 | ||
@@ -112,9 +106,6 @@ static int sh_cpufreq_cpu_init(struct cpufreq_policy *policy) | |||
112 | struct cpufreq_frequency_table *freq_table; | 106 | struct cpufreq_frequency_table *freq_table; |
113 | struct device *dev; | 107 | struct device *dev; |
114 | 108 | ||
115 | if (!cpu_online(cpu)) | ||
116 | return -ENODEV; | ||
117 | |||
118 | dev = get_cpu_device(cpu); | 109 | dev = get_cpu_device(cpu); |
119 | 110 | ||
120 | cpuclk = clk_get(dev, "cpu_clk"); | 111 | cpuclk = clk_get(dev, "cpu_clk"); |
@@ -123,7 +114,7 @@ static int sh_cpufreq_cpu_init(struct cpufreq_policy *policy) | |||
123 | return PTR_ERR(cpuclk); | 114 | return PTR_ERR(cpuclk); |
124 | } | 115 | } |
125 | 116 | ||
126 | policy->cur = policy->min = policy->max = sh_cpufreq_get(cpu); | 117 | policy->cur = sh_cpufreq_get(cpu); |
127 | 118 | ||
128 | freq_table = cpuclk->nr_freqs ? cpuclk->freq_table : NULL; | 119 | freq_table = cpuclk->nr_freqs ? cpuclk->freq_table : NULL; |
129 | if (freq_table) { | 120 | if (freq_table) { |
@@ -136,15 +127,12 @@ static int sh_cpufreq_cpu_init(struct cpufreq_policy *policy) | |||
136 | dev_notice(dev, "no frequency table found, falling back " | 127 | dev_notice(dev, "no frequency table found, falling back " |
137 | "to rate rounding.\n"); | 128 | "to rate rounding.\n"); |
138 | 129 | ||
139 | policy->cpuinfo.min_freq = | 130 | policy->min = policy->cpuinfo.min_freq = |
140 | (clk_round_rate(cpuclk, 1) + 500) / 1000; | 131 | (clk_round_rate(cpuclk, 1) + 500) / 1000; |
141 | policy->cpuinfo.max_freq = | 132 | policy->max = policy->cpuinfo.max_freq = |
142 | (clk_round_rate(cpuclk, ~0UL) + 500) / 1000; | 133 | (clk_round_rate(cpuclk, ~0UL) + 500) / 1000; |
143 | } | 134 | } |
144 | 135 | ||
145 | policy->min = policy->cpuinfo.min_freq; | ||
146 | policy->max = policy->cpuinfo.max_freq; | ||
147 | |||
148 | policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; | 136 | policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; |
149 | 137 | ||
150 | dev_info(dev, "CPU Frequencies - Minimum %u.%03u MHz, " | 138 | dev_info(dev, "CPU Frequencies - Minimum %u.%03u MHz, " |
diff --git a/arch/sparc/kernel/us2e_cpufreq.c b/drivers/cpufreq/sparc-us2e-cpufreq.c index 489fc15f3194..306ae462bba6 100644 --- a/arch/sparc/kernel/us2e_cpufreq.c +++ b/drivers/cpufreq/sparc-us2e-cpufreq.c | |||
@@ -234,9 +234,6 @@ static unsigned int us2e_freq_get(unsigned int cpu) | |||
234 | cpumask_t cpus_allowed; | 234 | cpumask_t cpus_allowed; |
235 | unsigned long clock_tick, estar; | 235 | unsigned long clock_tick, estar; |
236 | 236 | ||
237 | if (!cpu_online(cpu)) | ||
238 | return 0; | ||
239 | |||
240 | cpumask_copy(&cpus_allowed, tsk_cpus_allowed(current)); | 237 | cpumask_copy(&cpus_allowed, tsk_cpus_allowed(current)); |
241 | set_cpus_allowed_ptr(current, cpumask_of(cpu)); | 238 | set_cpus_allowed_ptr(current, cpumask_of(cpu)); |
242 | 239 | ||
@@ -248,16 +245,15 @@ static unsigned int us2e_freq_get(unsigned int cpu) | |||
248 | return clock_tick / estar_to_divisor(estar); | 245 | return clock_tick / estar_to_divisor(estar); |
249 | } | 246 | } |
250 | 247 | ||
251 | static void us2e_set_cpu_divider_index(unsigned int cpu, unsigned int index) | 248 | static void us2e_set_cpu_divider_index(struct cpufreq_policy *policy, |
249 | unsigned int index) | ||
252 | { | 250 | { |
251 | unsigned int cpu = policy->cpu; | ||
253 | unsigned long new_bits, new_freq; | 252 | unsigned long new_bits, new_freq; |
254 | unsigned long clock_tick, divisor, old_divisor, estar; | 253 | unsigned long clock_tick, divisor, old_divisor, estar; |
255 | cpumask_t cpus_allowed; | 254 | cpumask_t cpus_allowed; |
256 | struct cpufreq_freqs freqs; | 255 | struct cpufreq_freqs freqs; |
257 | 256 | ||
258 | if (!cpu_online(cpu)) | ||
259 | return; | ||
260 | |||
261 | cpumask_copy(&cpus_allowed, tsk_cpus_allowed(current)); | 257 | cpumask_copy(&cpus_allowed, tsk_cpus_allowed(current)); |
262 | set_cpus_allowed_ptr(current, cpumask_of(cpu)); | 258 | set_cpus_allowed_ptr(current, cpumask_of(cpu)); |
263 | 259 | ||
@@ -272,14 +268,13 @@ static void us2e_set_cpu_divider_index(unsigned int cpu, unsigned int index) | |||
272 | 268 | ||
273 | freqs.old = clock_tick / old_divisor; | 269 | freqs.old = clock_tick / old_divisor; |
274 | freqs.new = new_freq; | 270 | freqs.new = new_freq; |
275 | freqs.cpu = cpu; | 271 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
276 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | ||
277 | 272 | ||
278 | if (old_divisor != divisor) | 273 | if (old_divisor != divisor) |
279 | us2e_transition(estar, new_bits, clock_tick * 1000, | 274 | us2e_transition(estar, new_bits, clock_tick * 1000, |
280 | old_divisor, divisor); | 275 | old_divisor, divisor); |
281 | 276 | ||
282 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 277 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
283 | 278 | ||
284 | set_cpus_allowed_ptr(current, &cpus_allowed); | 279 | set_cpus_allowed_ptr(current, &cpus_allowed); |
285 | } | 280 | } |
@@ -295,7 +290,7 @@ static int us2e_freq_target(struct cpufreq_policy *policy, | |||
295 | target_freq, relation, &new_index)) | 290 | target_freq, relation, &new_index)) |
296 | return -EINVAL; | 291 | return -EINVAL; |
297 | 292 | ||
298 | us2e_set_cpu_divider_index(policy->cpu, new_index); | 293 | us2e_set_cpu_divider_index(policy, new_index); |
299 | 294 | ||
300 | return 0; | 295 | return 0; |
301 | } | 296 | } |
@@ -335,7 +330,7 @@ static int __init us2e_freq_cpu_init(struct cpufreq_policy *policy) | |||
335 | static int us2e_freq_cpu_exit(struct cpufreq_policy *policy) | 330 | static int us2e_freq_cpu_exit(struct cpufreq_policy *policy) |
336 | { | 331 | { |
337 | if (cpufreq_us2e_driver) | 332 | if (cpufreq_us2e_driver) |
338 | us2e_set_cpu_divider_index(policy->cpu, 0); | 333 | us2e_set_cpu_divider_index(policy, 0); |
339 | 334 | ||
340 | return 0; | 335 | return 0; |
341 | } | 336 | } |
diff --git a/arch/sparc/kernel/us3_cpufreq.c b/drivers/cpufreq/sparc-us3-cpufreq.c index eb1624b931d9..c71ee142347a 100644 --- a/arch/sparc/kernel/us3_cpufreq.c +++ b/drivers/cpufreq/sparc-us3-cpufreq.c | |||
@@ -82,9 +82,6 @@ static unsigned int us3_freq_get(unsigned int cpu) | |||
82 | unsigned long reg; | 82 | unsigned long reg; |
83 | unsigned int ret; | 83 | unsigned int ret; |
84 | 84 | ||
85 | if (!cpu_online(cpu)) | ||
86 | return 0; | ||
87 | |||
88 | cpumask_copy(&cpus_allowed, tsk_cpus_allowed(current)); | 85 | cpumask_copy(&cpus_allowed, tsk_cpus_allowed(current)); |
89 | set_cpus_allowed_ptr(current, cpumask_of(cpu)); | 86 | set_cpus_allowed_ptr(current, cpumask_of(cpu)); |
90 | 87 | ||
@@ -96,15 +93,14 @@ static unsigned int us3_freq_get(unsigned int cpu) | |||
96 | return ret; | 93 | return ret; |
97 | } | 94 | } |
98 | 95 | ||
99 | static void us3_set_cpu_divider_index(unsigned int cpu, unsigned int index) | 96 | static void us3_set_cpu_divider_index(struct cpufreq_policy *policy, |
97 | unsigned int index) | ||
100 | { | 98 | { |
99 | unsigned int cpu = policy->cpu; | ||
101 | unsigned long new_bits, new_freq, reg; | 100 | unsigned long new_bits, new_freq, reg; |
102 | cpumask_t cpus_allowed; | 101 | cpumask_t cpus_allowed; |
103 | struct cpufreq_freqs freqs; | 102 | struct cpufreq_freqs freqs; |
104 | 103 | ||
105 | if (!cpu_online(cpu)) | ||
106 | return; | ||
107 | |||
108 | cpumask_copy(&cpus_allowed, tsk_cpus_allowed(current)); | 104 | cpumask_copy(&cpus_allowed, tsk_cpus_allowed(current)); |
109 | set_cpus_allowed_ptr(current, cpumask_of(cpu)); | 105 | set_cpus_allowed_ptr(current, cpumask_of(cpu)); |
110 | 106 | ||
@@ -131,14 +127,13 @@ static void us3_set_cpu_divider_index(unsigned int cpu, unsigned int index) | |||
131 | 127 | ||
132 | freqs.old = get_current_freq(cpu, reg); | 128 | freqs.old = get_current_freq(cpu, reg); |
133 | freqs.new = new_freq; | 129 | freqs.new = new_freq; |
134 | freqs.cpu = cpu; | 130 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
135 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | ||
136 | 131 | ||
137 | reg &= ~SAFARI_CFG_DIV_MASK; | 132 | reg &= ~SAFARI_CFG_DIV_MASK; |
138 | reg |= new_bits; | 133 | reg |= new_bits; |
139 | write_safari_cfg(reg); | 134 | write_safari_cfg(reg); |
140 | 135 | ||
141 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 136 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
142 | 137 | ||
143 | set_cpus_allowed_ptr(current, &cpus_allowed); | 138 | set_cpus_allowed_ptr(current, &cpus_allowed); |
144 | } | 139 | } |
@@ -156,7 +151,7 @@ static int us3_freq_target(struct cpufreq_policy *policy, | |||
156 | &new_index)) | 151 | &new_index)) |
157 | return -EINVAL; | 152 | return -EINVAL; |
158 | 153 | ||
159 | us3_set_cpu_divider_index(policy->cpu, new_index); | 154 | us3_set_cpu_divider_index(policy, new_index); |
160 | 155 | ||
161 | return 0; | 156 | return 0; |
162 | } | 157 | } |
@@ -192,7 +187,7 @@ static int __init us3_freq_cpu_init(struct cpufreq_policy *policy) | |||
192 | static int us3_freq_cpu_exit(struct cpufreq_policy *policy) | 187 | static int us3_freq_cpu_exit(struct cpufreq_policy *policy) |
193 | { | 188 | { |
194 | if (cpufreq_us3_driver) | 189 | if (cpufreq_us3_driver) |
195 | us3_set_cpu_divider_index(policy->cpu, 0); | 190 | us3_set_cpu_divider_index(policy, 0); |
196 | 191 | ||
197 | return 0; | 192 | return 0; |
198 | } | 193 | } |
diff --git a/drivers/cpufreq/spear-cpufreq.c b/drivers/cpufreq/spear-cpufreq.c index 7e4d77327957..156829f4576d 100644 --- a/drivers/cpufreq/spear-cpufreq.c +++ b/drivers/cpufreq/spear-cpufreq.c | |||
@@ -121,7 +121,6 @@ static int spear_cpufreq_target(struct cpufreq_policy *policy, | |||
121 | target_freq, relation, &index)) | 121 | target_freq, relation, &index)) |
122 | return -EINVAL; | 122 | return -EINVAL; |
123 | 123 | ||
124 | freqs.cpu = policy->cpu; | ||
125 | freqs.old = spear_cpufreq_get(0); | 124 | freqs.old = spear_cpufreq_get(0); |
126 | 125 | ||
127 | newfreq = spear_cpufreq.freq_tbl[index].frequency * 1000; | 126 | newfreq = spear_cpufreq.freq_tbl[index].frequency * 1000; |
@@ -158,8 +157,7 @@ static int spear_cpufreq_target(struct cpufreq_policy *policy, | |||
158 | freqs.new = newfreq / 1000; | 157 | freqs.new = newfreq / 1000; |
159 | freqs.new /= mult; | 158 | freqs.new /= mult; |
160 | 159 | ||
161 | for_each_cpu(freqs.cpu, policy->cpus) | 160 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
162 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | ||
163 | 161 | ||
164 | if (mult == 2) | 162 | if (mult == 2) |
165 | ret = spear1340_set_cpu_rate(srcclk, newfreq); | 163 | ret = spear1340_set_cpu_rate(srcclk, newfreq); |
@@ -172,8 +170,7 @@ static int spear_cpufreq_target(struct cpufreq_policy *policy, | |||
172 | freqs.new = clk_get_rate(spear_cpufreq.clk) / 1000; | 170 | freqs.new = clk_get_rate(spear_cpufreq.clk) / 1000; |
173 | } | 171 | } |
174 | 172 | ||
175 | for_each_cpu(freqs.cpu, policy->cpus) | 173 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
176 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | ||
177 | return ret; | 174 | return ret; |
178 | } | 175 | } |
179 | 176 | ||
diff --git a/drivers/cpufreq/speedstep-centrino.c b/drivers/cpufreq/speedstep-centrino.c index 3a953d519f46..618e6f417b1c 100644 --- a/drivers/cpufreq/speedstep-centrino.c +++ b/drivers/cpufreq/speedstep-centrino.c | |||
@@ -457,7 +457,7 @@ static int centrino_target (struct cpufreq_policy *policy, | |||
457 | unsigned int msr, oldmsr = 0, h = 0, cpu = policy->cpu; | 457 | unsigned int msr, oldmsr = 0, h = 0, cpu = policy->cpu; |
458 | struct cpufreq_freqs freqs; | 458 | struct cpufreq_freqs freqs; |
459 | int retval = 0; | 459 | int retval = 0; |
460 | unsigned int j, k, first_cpu, tmp; | 460 | unsigned int j, first_cpu, tmp; |
461 | cpumask_var_t covered_cpus; | 461 | cpumask_var_t covered_cpus; |
462 | 462 | ||
463 | if (unlikely(!zalloc_cpumask_var(&covered_cpus, GFP_KERNEL))) | 463 | if (unlikely(!zalloc_cpumask_var(&covered_cpus, GFP_KERNEL))) |
@@ -481,10 +481,6 @@ static int centrino_target (struct cpufreq_policy *policy, | |||
481 | for_each_cpu(j, policy->cpus) { | 481 | for_each_cpu(j, policy->cpus) { |
482 | int good_cpu; | 482 | int good_cpu; |
483 | 483 | ||
484 | /* cpufreq holds the hotplug lock, so we are safe here */ | ||
485 | if (!cpu_online(j)) | ||
486 | continue; | ||
487 | |||
488 | /* | 484 | /* |
489 | * Support for SMP systems. | 485 | * Support for SMP systems. |
490 | * Make sure we are running on CPU that wants to change freq | 486 | * Make sure we are running on CPU that wants to change freq |
@@ -522,13 +518,8 @@ static int centrino_target (struct cpufreq_policy *policy, | |||
522 | pr_debug("target=%dkHz old=%d new=%d msr=%04x\n", | 518 | pr_debug("target=%dkHz old=%d new=%d msr=%04x\n", |
523 | target_freq, freqs.old, freqs.new, msr); | 519 | target_freq, freqs.old, freqs.new, msr); |
524 | 520 | ||
525 | for_each_cpu(k, policy->cpus) { | 521 | cpufreq_notify_transition(policy, &freqs, |
526 | if (!cpu_online(k)) | ||
527 | continue; | ||
528 | freqs.cpu = k; | ||
529 | cpufreq_notify_transition(&freqs, | ||
530 | CPUFREQ_PRECHANGE); | 522 | CPUFREQ_PRECHANGE); |
531 | } | ||
532 | 523 | ||
533 | first_cpu = 0; | 524 | first_cpu = 0; |
534 | /* all but 16 LSB are reserved, treat them with care */ | 525 | /* all but 16 LSB are reserved, treat them with care */ |
@@ -544,12 +535,7 @@ static int centrino_target (struct cpufreq_policy *policy, | |||
544 | cpumask_set_cpu(j, covered_cpus); | 535 | cpumask_set_cpu(j, covered_cpus); |
545 | } | 536 | } |
546 | 537 | ||
547 | for_each_cpu(k, policy->cpus) { | 538 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
548 | if (!cpu_online(k)) | ||
549 | continue; | ||
550 | freqs.cpu = k; | ||
551 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | ||
552 | } | ||
553 | 539 | ||
554 | if (unlikely(retval)) { | 540 | if (unlikely(retval)) { |
555 | /* | 541 | /* |
@@ -565,12 +551,8 @@ static int centrino_target (struct cpufreq_policy *policy, | |||
565 | tmp = freqs.new; | 551 | tmp = freqs.new; |
566 | freqs.new = freqs.old; | 552 | freqs.new = freqs.old; |
567 | freqs.old = tmp; | 553 | freqs.old = tmp; |
568 | for_each_cpu(j, policy->cpus) { | 554 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
569 | if (!cpu_online(j)) | 555 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
570 | continue; | ||
571 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | ||
572 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | ||
573 | } | ||
574 | } | 556 | } |
575 | retval = 0; | 557 | retval = 0; |
576 | 558 | ||
diff --git a/drivers/cpufreq/speedstep-ich.c b/drivers/cpufreq/speedstep-ich.c index e29b59aa68a8..e2e5aa971452 100644 --- a/drivers/cpufreq/speedstep-ich.c +++ b/drivers/cpufreq/speedstep-ich.c | |||
@@ -263,7 +263,6 @@ static int speedstep_target(struct cpufreq_policy *policy, | |||
263 | { | 263 | { |
264 | unsigned int newstate = 0, policy_cpu; | 264 | unsigned int newstate = 0, policy_cpu; |
265 | struct cpufreq_freqs freqs; | 265 | struct cpufreq_freqs freqs; |
266 | int i; | ||
267 | 266 | ||
268 | if (cpufreq_frequency_table_target(policy, &speedstep_freqs[0], | 267 | if (cpufreq_frequency_table_target(policy, &speedstep_freqs[0], |
269 | target_freq, relation, &newstate)) | 268 | target_freq, relation, &newstate)) |
@@ -272,7 +271,6 @@ static int speedstep_target(struct cpufreq_policy *policy, | |||
272 | policy_cpu = cpumask_any_and(policy->cpus, cpu_online_mask); | 271 | policy_cpu = cpumask_any_and(policy->cpus, cpu_online_mask); |
273 | freqs.old = speedstep_get(policy_cpu); | 272 | freqs.old = speedstep_get(policy_cpu); |
274 | freqs.new = speedstep_freqs[newstate].frequency; | 273 | freqs.new = speedstep_freqs[newstate].frequency; |
275 | freqs.cpu = policy->cpu; | ||
276 | 274 | ||
277 | pr_debug("transiting from %u to %u kHz\n", freqs.old, freqs.new); | 275 | pr_debug("transiting from %u to %u kHz\n", freqs.old, freqs.new); |
278 | 276 | ||
@@ -280,18 +278,12 @@ static int speedstep_target(struct cpufreq_policy *policy, | |||
280 | if (freqs.old == freqs.new) | 278 | if (freqs.old == freqs.new) |
281 | return 0; | 279 | return 0; |
282 | 280 | ||
283 | for_each_cpu(i, policy->cpus) { | 281 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
284 | freqs.cpu = i; | ||
285 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | ||
286 | } | ||
287 | 282 | ||
288 | smp_call_function_single(policy_cpu, _speedstep_set_state, &newstate, | 283 | smp_call_function_single(policy_cpu, _speedstep_set_state, &newstate, |
289 | true); | 284 | true); |
290 | 285 | ||
291 | for_each_cpu(i, policy->cpus) { | 286 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
292 | freqs.cpu = i; | ||
293 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | ||
294 | } | ||
295 | 287 | ||
296 | return 0; | 288 | return 0; |
297 | } | 289 | } |
diff --git a/drivers/cpufreq/speedstep-smi.c b/drivers/cpufreq/speedstep-smi.c index 6a457fcaaad5..f5a6b70ee6c0 100644 --- a/drivers/cpufreq/speedstep-smi.c +++ b/drivers/cpufreq/speedstep-smi.c | |||
@@ -252,14 +252,13 @@ static int speedstep_target(struct cpufreq_policy *policy, | |||
252 | 252 | ||
253 | freqs.old = speedstep_freqs[speedstep_get_state()].frequency; | 253 | freqs.old = speedstep_freqs[speedstep_get_state()].frequency; |
254 | freqs.new = speedstep_freqs[newstate].frequency; | 254 | freqs.new = speedstep_freqs[newstate].frequency; |
255 | freqs.cpu = 0; /* speedstep.c is UP only driver */ | ||
256 | 255 | ||
257 | if (freqs.old == freqs.new) | 256 | if (freqs.old == freqs.new) |
258 | return 0; | 257 | return 0; |
259 | 258 | ||
260 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 259 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
261 | speedstep_set_state(newstate); | 260 | speedstep_set_state(newstate); |
262 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 261 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
263 | 262 | ||
264 | return 0; | 263 | return 0; |
265 | } | 264 | } |
diff --git a/arch/arm/mach-tegra/cpu-tegra.c b/drivers/cpufreq/tegra-cpufreq.c index e3d6e15ff188..c74c0e130ef4 100644 --- a/arch/arm/mach-tegra/cpu-tegra.c +++ b/drivers/cpufreq/tegra-cpufreq.c | |||
@@ -1,6 +1,4 @@ | |||
1 | /* | 1 | /* |
2 | * arch/arm/mach-tegra/cpu-tegra.c | ||
3 | * | ||
4 | * Copyright (C) 2010 Google, Inc. | 2 | * Copyright (C) 2010 Google, Inc. |
5 | * | 3 | * |
6 | * Author: | 4 | * Author: |
@@ -106,7 +104,8 @@ out: | |||
106 | return ret; | 104 | return ret; |
107 | } | 105 | } |
108 | 106 | ||
109 | static int tegra_update_cpu_speed(unsigned long rate) | 107 | static int tegra_update_cpu_speed(struct cpufreq_policy *policy, |
108 | unsigned long rate) | ||
110 | { | 109 | { |
111 | int ret = 0; | 110 | int ret = 0; |
112 | struct cpufreq_freqs freqs; | 111 | struct cpufreq_freqs freqs; |
@@ -128,8 +127,7 @@ static int tegra_update_cpu_speed(unsigned long rate) | |||
128 | else | 127 | else |
129 | clk_set_rate(emc_clk, 100000000); /* emc 50Mhz */ | 128 | clk_set_rate(emc_clk, 100000000); /* emc 50Mhz */ |
130 | 129 | ||
131 | for_each_online_cpu(freqs.cpu) | 130 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
132 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | ||
133 | 131 | ||
134 | #ifdef CONFIG_CPU_FREQ_DEBUG | 132 | #ifdef CONFIG_CPU_FREQ_DEBUG |
135 | printk(KERN_DEBUG "cpufreq-tegra: transition: %u --> %u\n", | 133 | printk(KERN_DEBUG "cpufreq-tegra: transition: %u --> %u\n", |
@@ -143,8 +141,7 @@ static int tegra_update_cpu_speed(unsigned long rate) | |||
143 | return ret; | 141 | return ret; |
144 | } | 142 | } |
145 | 143 | ||
146 | for_each_online_cpu(freqs.cpu) | 144 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
147 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | ||
148 | 145 | ||
149 | return 0; | 146 | return 0; |
150 | } | 147 | } |
@@ -181,7 +178,7 @@ static int tegra_target(struct cpufreq_policy *policy, | |||
181 | 178 | ||
182 | target_cpu_speed[policy->cpu] = freq; | 179 | target_cpu_speed[policy->cpu] = freq; |
183 | 180 | ||
184 | ret = tegra_update_cpu_speed(tegra_cpu_highest_speed()); | 181 | ret = tegra_update_cpu_speed(policy, tegra_cpu_highest_speed()); |
185 | 182 | ||
186 | out: | 183 | out: |
187 | mutex_unlock(&tegra_cpu_lock); | 184 | mutex_unlock(&tegra_cpu_lock); |
@@ -193,10 +190,12 @@ static int tegra_pm_notify(struct notifier_block *nb, unsigned long event, | |||
193 | { | 190 | { |
194 | mutex_lock(&tegra_cpu_lock); | 191 | mutex_lock(&tegra_cpu_lock); |
195 | if (event == PM_SUSPEND_PREPARE) { | 192 | if (event == PM_SUSPEND_PREPARE) { |
193 | struct cpufreq_policy *policy = cpufreq_cpu_get(0); | ||
196 | is_suspended = true; | 194 | is_suspended = true; |
197 | pr_info("Tegra cpufreq suspend: setting frequency to %d kHz\n", | 195 | pr_info("Tegra cpufreq suspend: setting frequency to %d kHz\n", |
198 | freq_table[0].frequency); | 196 | freq_table[0].frequency); |
199 | tegra_update_cpu_speed(freq_table[0].frequency); | 197 | tegra_update_cpu_speed(policy, freq_table[0].frequency); |
198 | cpufreq_cpu_put(policy); | ||
200 | } else if (event == PM_POST_SUSPEND) { | 199 | } else if (event == PM_POST_SUSPEND) { |
201 | is_suspended = false; | 200 | is_suspended = false; |
202 | } | 201 | } |
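
The tegra PM-notifier hunk needs a policy where none is in scope, so it borrows CPU0's policy with cpufreq_cpu_get(0) and releases it with cpufreq_cpu_put() once tegra_update_cpu_speed() returns. The pair takes and drops a reference, so the calls must balance; a sketch of the pattern (the NULL check is added here for illustration and is not in the hunk above):

	struct cpufreq_policy *policy = cpufreq_cpu_get(0);	/* takes a reference */

	if (policy) {
		tegra_update_cpu_speed(policy, freq_table[0].frequency);
		cpufreq_cpu_put(policy);			/* drops the reference */
	}
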
diff --git a/arch/unicore32/kernel/cpu-ucv2.c b/drivers/cpufreq/unicore2-cpufreq.c index 4a99f62584c7..12fc904d7dab 100644 --- a/arch/unicore32/kernel/cpu-ucv2.c +++ b/drivers/cpufreq/unicore2-cpufreq.c | |||
@@ -1,5 +1,5 @@ | |||
1 | /* | 1 | /* |
2 | * linux/arch/unicore32/kernel/cpu-ucv2.c: clock scaling for the UniCore-II | 2 | * clock scaling for the UniCore-II |
3 | * | 3 | * |
4 | * Code specific to PKUnity SoC and UniCore ISA | 4 | * Code specific to PKUnity SoC and UniCore ISA |
5 | * | 5 | * |
@@ -52,15 +52,14 @@ static int ucv2_target(struct cpufreq_policy *policy, | |||
52 | struct cpufreq_freqs freqs; | 52 | struct cpufreq_freqs freqs; |
53 | struct clk *mclk = clk_get(NULL, "MAIN_CLK"); | 53 | struct clk *mclk = clk_get(NULL, "MAIN_CLK"); |
54 | 54 | ||
55 | cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); | 55 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); |
56 | 56 | ||
57 | if (!clk_set_rate(mclk, target_freq * 1000)) { | 57 | if (!clk_set_rate(mclk, target_freq * 1000)) { |
58 | freqs.old = cur; | 58 | freqs.old = cur; |
59 | freqs.new = target_freq; | 59 | freqs.new = target_freq; |
60 | freqs.cpu = 0; | ||
61 | } | 60 | } |
62 | 61 | ||
63 | cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); | 62 | cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); |
64 | 63 | ||
65 | return 0; | 64 | return 0; |
66 | } | 65 | } |
diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h index a22944ca0526..037d36ae63e5 100644 --- a/include/linux/cpufreq.h +++ b/include/linux/cpufreq.h | |||
@@ -106,6 +106,7 @@ struct cpufreq_policy { | |||
106 | * governors are used */ | 106 | * governors are used */ |
107 | unsigned int policy; /* see above */ | 107 | unsigned int policy; /* see above */ |
108 | struct cpufreq_governor *governor; /* see below */ | 108 | struct cpufreq_governor *governor; /* see below */ |
109 | void *governor_data; | ||
109 | 110 | ||
110 | struct work_struct update; /* if update_policy() needs to be | 111 | struct work_struct update; /* if update_policy() needs to be |
111 | * called, but you're in IRQ context */ | 112 | * called, but you're in IRQ context */ |
@@ -178,9 +179,11 @@ static inline unsigned long cpufreq_scale(unsigned long old, u_int div, u_int mu | |||
178 | * CPUFREQ GOVERNORS * | 179 | * CPUFREQ GOVERNORS * |
179 | *********************************************************************/ | 180 | *********************************************************************/ |
180 | 181 | ||
181 | #define CPUFREQ_GOV_START 1 | 182 | #define CPUFREQ_GOV_START 1 |
182 | #define CPUFREQ_GOV_STOP 2 | 183 | #define CPUFREQ_GOV_STOP 2 |
183 | #define CPUFREQ_GOV_LIMITS 3 | 184 | #define CPUFREQ_GOV_LIMITS 3 |
185 | #define CPUFREQ_GOV_POLICY_INIT 4 | ||
186 | #define CPUFREQ_GOV_POLICY_EXIT 5 | ||
184 | 187 | ||
185 | struct cpufreq_governor { | 188 | struct cpufreq_governor { |
186 | char name[CPUFREQ_NAME_LEN]; | 189 | char name[CPUFREQ_NAME_LEN]; |
@@ -229,6 +232,13 @@ struct cpufreq_driver { | |||
229 | struct module *owner; | 232 | struct module *owner; |
230 | char name[CPUFREQ_NAME_LEN]; | 233 | char name[CPUFREQ_NAME_LEN]; |
231 | u8 flags; | 234 | u8 flags; |
235 | /* | ||
236 | * This should be set by platforms having multiple clock-domains, i.e. | ||
237 | * supporting multiple policies. With this sysfs directories of governor | ||
238 | * would be created in cpu/cpu<num>/cpufreq/ directory and so they can | ||
239 | * use the same governor with different tunables for different clusters. | ||
240 | */ | ||
241 | bool have_governor_per_policy; | ||
232 | 242 | ||
233 | /* needed by all drivers */ | 243 | /* needed by all drivers */ |
234 | int (*init) (struct cpufreq_policy *policy); | 244 | int (*init) (struct cpufreq_policy *policy); |
@@ -268,8 +278,8 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data); | |||
268 | int cpufreq_unregister_driver(struct cpufreq_driver *driver_data); | 278 | int cpufreq_unregister_driver(struct cpufreq_driver *driver_data); |
269 | 279 | ||
270 | 280 | ||
271 | void cpufreq_notify_transition(struct cpufreq_freqs *freqs, unsigned int state); | 281 | void cpufreq_notify_transition(struct cpufreq_policy *policy, |
272 | 282 | struct cpufreq_freqs *freqs, unsigned int state); | |
273 | 283 | ||
274 | static inline void cpufreq_verify_within_limits(struct cpufreq_policy *policy, unsigned int min, unsigned int max) | 284 | static inline void cpufreq_verify_within_limits(struct cpufreq_policy *policy, unsigned int min, unsigned int max) |
275 | { | 285 | { |
@@ -329,6 +339,7 @@ const char *cpufreq_get_current_driver(void); | |||
329 | *********************************************************************/ | 339 | *********************************************************************/ |
330 | int cpufreq_get_policy(struct cpufreq_policy *policy, unsigned int cpu); | 340 | int cpufreq_get_policy(struct cpufreq_policy *policy, unsigned int cpu); |
331 | int cpufreq_update_policy(unsigned int cpu); | 341 | int cpufreq_update_policy(unsigned int cpu); |
342 | bool have_governor_per_policy(void); | ||
332 | 343 | ||
333 | #ifdef CONFIG_CPU_FREQ | 344 | #ifdef CONFIG_CPU_FREQ |
334 | /* query the current CPU frequency (in kHz). If zero, cpufreq couldn't detect it */ | 345 | /* query the current CPU frequency (in kHz). If zero, cpufreq couldn't detect it */ |
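
The include/linux/cpufreq.h additions above are the policy-granular governor plumbing used elsewhere in this series: policy->governor_data holds per-policy governor state, CPUFREQ_GOV_POLICY_INIT/EXIT add policy-wide setup/teardown events alongside the existing START/STOP, and the have_governor_per_policy driver flag (queried through have_governor_per_policy()) moves the governor sysfs directory under cpu/cpuN/cpufreq/ so each clock domain can carry its own tunables. Illustrative only, a multi-cluster driver would opt in roughly like this (all foo_* symbols are placeholders):

	static struct cpufreq_driver foo_cpufreq_driver = {
		.name	= "foo-cpufreq",
		.owner	= THIS_MODULE,
		.init	= foo_cpufreq_init,
		.verify	= foo_cpufreq_verify,
		.target	= foo_cpufreq_target,
		.get	= foo_cpufreq_get,
		/* one governor instance (and sysfs directory) per policy,
		 * i.e. per clock domain / cluster */
		.have_governor_per_policy = true,
	};
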