author		Linus Torvalds <torvalds@linux-foundation.org>	2017-07-04 16:39:41 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2017-07-04 16:39:41 -0400
commit		408c9861c6979db974455b9e7a9bcadd60e0934c (patch)
tree		9bdb862da2883cd4f74297d01ec8ce3b4619dd66 /drivers/cpufreq/intel_pstate.c
parent		b39de277b02ffd8e3dccb01e9159bd45cb07b95d (diff)
parent		8f8e5c3e2796eaf150d6262115af12707c2616dd (diff)
Merge tag 'pm-4.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:
"The big ticket items here are the rework of suspend-to-idle in order
to add proper support for power button wakeup from it on recent Dell
laptops and the rework of interfaces exporting the current CPU
frequency on x86.
In addition to that, support for a few new pieces of hardware is
added, the PCI/ACPI device wakeup infrastructure is simplified
significantly and the wakeup IRQ framework is fixed to unbreak the IRQ
bus locking infrastructure.
Also, there are some functional improvements for intel_pstate, tools
updates and small fixes and cleanups all over.
Specifics:
- Rework suspend-to-idle to allow it to take wakeup events signaled
by the EC into account on ACPI-based platforms in order to properly
support power button wakeup from suspend-to-idle on recent Dell
laptops (Rafael Wysocki).
That includes the core suspend-to-idle code rework, support for the
Low Power S0 _DSM interface, and support for the ACPI INT0002
Virtual GPIO device from Hans de Goede (required for USB keyboard
wakeup from suspend-to-idle to work on some machines).
- Stop trying to export the current CPU frequency via /proc/cpuinfo
on x86 as that is inaccurate and confusing (Len Brown).
- Rework the way in which the current CPU frequency is exported by
the kernel (over the cpufreq sysfs interface) on x86 systems with
the APERF and MPERF registers by always using values read from
these registers, when available, to compute the current frequency
regardless of which cpufreq driver is in use (Len Brown); a sketch
of the APERF/MPERF calculation follows this message.
- Rework the PCI/ACPI device wakeup infrastructure to remove the
questionable and artificial distinction between "devices that can
wake up the system from sleep states" and "devices that can
generate wakeup signals in the working state" from it, which allows
the code to be simplified quite a bit (Rafael Wysocki).
- Fix the wakeup IRQ framework by making it use SRCU instead of RCU,
because RCU does not allow sleeping in read-side critical sections,
while the IRQ bus locking infrastructure requires exactly that
(Thomas Gleixner).
- Modify some computations in the intel_pstate driver to avoid the
rounding errors they previously introduced (Srinivas Pandruvada).
- Reduce the overhead of the intel_pstate driver in the HWP
(hardware-managed P-states) mode and when the "performance" P-state
selection algorithm is in use by making it avoid registering
scheduler callbacks in those cases (Len Brown).
- Rework the energy_performance_preference sysfs knob in intel_pstate
by changing the values that correspond to different symbolic hint
names used by it (Len Brown).
- Make it possible to use more than one cpuidle driver at the same
time on ARM (Daniel Lezcano).
- Make it possible to prevent the cpuidle menu governor from using
the 0 state by disabling it via sysfs (Nicholas Piggin).
- Add support for FFH (Functional Fixed Hardware) MWAIT in ACPI C1 on
AMD systems (Yazen Ghannam).
- Make the CPPC cpufreq driver take the lowest nonlinear performance
information into account (Prashanth Prakash).
- Add support for hi3660 to the cpufreq-dt driver, fix the imx6q
driver and clean up the sfi, exynos5440 and intel_pstate drivers
(Colin Ian King, Krzysztof Kozlowski, Octavian Purdila, Rafael
Wysocki, Tao Wang).
- Fix a few minor issues in the generic power domains (genpd)
framework and clean it up somewhat (Krzysztof Kozlowski, Mikko
Perttunen, Viresh Kumar).
- Fix a couple of minor issues in the operating performance points
(OPP) framework and clean it up somewhat (Viresh Kumar).
- Fix a CONFIG dependency in the hibernation core and clean it up
slightly (Balbir Singh, Arvind Yadav, BaoJun Luo).
- Add rk3228 support to the rockchip-io adaptive voltage scaling
(AVS) driver (David Wu).
- Fix an incorrect bit shift operation in the RAPL power capping
driver (Adam Lessnau).
- Add support for the EPP field in the HWP (hardware managed
P-states) control register, HWP.EPP, to the x86_energy_perf_policy
tool and update msr-index.h with HWP.EPP values (Len Brown).
- Fix some minor issues in the turbostat tool (Len Brown).
- Add support for AMD family 0x17 CPUs to the cpupower tool and fix a
minor issue in it (Sherry Hurwitz).
- Assorted cleanups, mostly related to the constification of some
data structures (Arvind Yadav, Joe Perches, Kees Cook, Krzysztof
Kozlowski)"
* tag 'pm-4.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (69 commits)
cpufreq: Update scaling_cur_freq documentation
cpufreq: intel_pstate: Clean up after performance governor changes
PM: hibernate: constify attribute_group structures.
cpuidle: menu: allow state 0 to be disabled
intel_idle: Use more common logging style
PM / Domains: Fix missing default_power_down_ok comment
PM / Domains: Fix unsafe iteration over modified list of domains
PM / Domains: Fix unsafe iteration over modified list of domain providers
PM / Domains: Fix unsafe iteration over modified list of device links
PM / Domains: Handle safely genpd_syscore_switch() call on non-genpd device
PM / Domains: Call driver's noirq callbacks
PM / core: Drop run_wake flag from struct dev_pm_info
PCI / PM: Simplify device wakeup settings code
PCI / PM: Drop pme_interrupt flag from struct pci_dev
ACPI / PM: Consolidate device wakeup settings code
ACPI / PM: Drop run_wake from struct acpi_device_wakeup_flags
PM / QoS: constify *_attribute_group.
PM / AVS: rockchip-io: add io selectors and supplies for rk3228
powercap/RAPL: prevent overridding bits outside of the mask
PM / sysfs: Constify attribute groups
...
Diffstat (limited to 'drivers/cpufreq/intel_pstate.c')
-rw-r--r--	drivers/cpufreq/intel_pstate.c	172
1 file changed, 83 insertions(+), 89 deletions(-)
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index eb1158532de3..48a98f11a84e 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -231,10 +231,8 @@ struct global_params {
  * @prev_cummulative_iowait: IO Wait time difference from last and
  *                      current sample
  * @sample:             Storage for storing last Sample data
- * @min_perf:           Minimum capacity limit as a fraction of the maximum
- *                      turbo P-state capacity.
- * @max_perf:           Maximum capacity limit as a fraction of the maximum
- *                      turbo P-state capacity.
+ * @min_perf_ratio:     Minimum capacity in terms of PERF or HWP ratios
+ * @max_perf_ratio:     Maximum capacity in terms of PERF or HWP ratios
  * @acpi_perf_data:     Stores ACPI perf information read from _PSS
  * @valid_pss_table:    Set to true for valid ACPI _PSS entries found
  * @epp_powersave:      Last saved HWP energy performance preference
@@ -266,8 +264,8 @@ struct cpudata {
        u64     prev_tsc;
        u64     prev_cummulative_iowait;
        struct sample sample;
-       int32_t min_perf;
-       int32_t max_perf;
+       int32_t min_perf_ratio;
+       int32_t max_perf_ratio;
 #ifdef CONFIG_ACPI
        struct acpi_processor_performance acpi_perf_data;
        bool valid_pss_table;
@@ -653,6 +651,12 @@ static const char * const energy_perf_strings[] = {
        "power",
        NULL
 };
+static const unsigned int epp_values[] = {
+       HWP_EPP_PERFORMANCE,
+       HWP_EPP_BALANCE_PERFORMANCE,
+       HWP_EPP_BALANCE_POWERSAVE,
+       HWP_EPP_POWERSAVE
+};
 
 static int intel_pstate_get_energy_pref_index(struct cpudata *cpu_data)
 {
@@ -664,17 +668,14 @@ static int intel_pstate_get_energy_pref_index(struct cpudata *cpu_data)
                return epp;
 
        if (static_cpu_has(X86_FEATURE_HWP_EPP)) {
-               /*
-                * Range:
-                *      0x00-0x3F       :       Performance
-                *      0x40-0x7F       :       Balance performance
-                *      0x80-0xBF       :       Balance power
-                *      0xC0-0xFF       :       Power
-                * The EPP is a 8 bit value, but our ranges restrict the
-                * value which can be set. Here only using top two bits
-                * effectively.
-                */
-               index = (epp >> 6) + 1;
+               if (epp == HWP_EPP_PERFORMANCE)
+                       return 1;
+               if (epp <= HWP_EPP_BALANCE_PERFORMANCE)
+                       return 2;
+               if (epp <= HWP_EPP_BALANCE_POWERSAVE)
+                       return 3;
+               else
+                       return 4;
        } else if (static_cpu_has(X86_FEATURE_EPB)) {
                /*
                 * Range:
@@ -712,15 +713,8 @@ static int intel_pstate_set_energy_pref_index(struct cpudata *cpu_data,
 
                value &= ~GENMASK_ULL(31, 24);
 
-               /*
-                * If epp is not default, convert from index into
-                * energy_perf_strings to epp value, by shifting 6
-                * bits left to use only top two bits in epp.
-                * The resultant epp need to shifted by 24 bits to
-                * epp position in MSR_HWP_REQUEST.
-                */
                if (epp == -EINVAL)
-                       epp = (pref_index - 1) << 6;
+                       epp = epp_values[pref_index - 1];
 
                value |= (u64)epp << 24;
                ret = wrmsrl_on_cpu(cpu_data->cpu, MSR_HWP_REQUEST, value);
@@ -794,25 +788,32 @@ static struct freq_attr *hwp_cpufreq_attrs[] = {
        NULL,
 };
 
-static void intel_pstate_hwp_set(unsigned int cpu)
+static void intel_pstate_get_hwp_max(unsigned int cpu, int *phy_max,
+                                    int *current_max)
 {
-       struct cpudata *cpu_data = all_cpu_data[cpu];
-       int min, hw_min, max, hw_max;
-       u64 value, cap;
-       s16 epp;
+       u64 cap;
 
        rdmsrl_on_cpu(cpu, MSR_HWP_CAPABILITIES, &cap);
-       hw_min = HWP_LOWEST_PERF(cap);
        if (global.no_turbo)
-               hw_max = HWP_GUARANTEED_PERF(cap);
+               *current_max = HWP_GUARANTEED_PERF(cap);
        else
-               hw_max = HWP_HIGHEST_PERF(cap);
+               *current_max = HWP_HIGHEST_PERF(cap);
+
+       *phy_max = HWP_HIGHEST_PERF(cap);
+}
+
+static void intel_pstate_hwp_set(unsigned int cpu)
+{
+       struct cpudata *cpu_data = all_cpu_data[cpu];
+       int max, min;
+       u64 value;
+       s16 epp;
+
+       max = cpu_data->max_perf_ratio;
+       min = cpu_data->min_perf_ratio;
 
-       max = fp_ext_toint(hw_max * cpu_data->max_perf);
        if (cpu_data->policy == CPUFREQ_POLICY_PERFORMANCE)
                min = max;
-       else
-               min = fp_ext_toint(hw_max * cpu_data->min_perf);
 
        rdmsrl_on_cpu(cpu, MSR_HWP_REQUEST, &value);
 
@@ -1528,8 +1529,7 @@ static void intel_pstate_max_within_limits(struct cpudata *cpu)
 
        update_turbo_state();
        pstate = intel_pstate_get_base_pstate(cpu);
-       pstate = max(cpu->pstate.min_pstate,
-                    fp_ext_toint(pstate * cpu->max_perf));
+       pstate = max(cpu->pstate.min_pstate, cpu->max_perf_ratio);
        intel_pstate_set_pstate(cpu, pstate);
 }
 
@@ -1616,9 +1616,6 @@ static inline int32_t get_target_pstate_use_cpu_load(struct cpudata *cpu)
        int32_t busy_frac, boost;
        int target, avg_pstate;
 
-       if (cpu->policy == CPUFREQ_POLICY_PERFORMANCE)
-               return cpu->pstate.turbo_pstate;
-
        busy_frac = div_fp(sample->mperf, sample->tsc);
 
        boost = cpu->iowait_boost;
@@ -1655,9 +1652,6 @@ static inline int32_t get_target_pstate_use_performance(struct cpudata *cpu)
        int32_t perf_scaled, max_pstate, current_pstate, sample_ratio;
        u64 duration_ns;
 
-       if (cpu->policy == CPUFREQ_POLICY_PERFORMANCE)
-               return cpu->pstate.turbo_pstate;
-
        /*
         * perf_scaled is the ratio of the average P-state during the last
         * sampling period to the P-state requested last time (in percent).
@@ -1695,9 +1689,8 @@ static int intel_pstate_prepare_request(struct cpudata *cpu, int pstate)
        int max_pstate = intel_pstate_get_base_pstate(cpu);
        int min_pstate;
 
-       min_pstate = max(cpu->pstate.min_pstate,
-                        fp_ext_toint(max_pstate * cpu->min_perf));
-       max_pstate = max(min_pstate, fp_ext_toint(max_pstate * cpu->max_perf));
+       min_pstate = max(cpu->pstate.min_pstate, cpu->min_perf_ratio);
+       max_pstate = max(min_pstate, cpu->max_perf_ratio);
        return clamp_t(int, pstate, min_pstate, max_pstate);
 }
 
@@ -1733,16 +1726,6 @@ static void intel_pstate_adjust_pstate(struct cpudata *cpu, int target_pstate)
                fp_toint(cpu->iowait_boost * 100));
 }
 
-static void intel_pstate_update_util_hwp(struct update_util_data *data,
-                                        u64 time, unsigned int flags)
-{
-       struct cpudata *cpu = container_of(data, struct cpudata, update_util);
-       u64 delta_ns = time - cpu->sample.time;
-
-       if ((s64)delta_ns >= INTEL_PSTATE_HWP_SAMPLING_INTERVAL)
-               intel_pstate_sample(cpu, time);
-}
-
 static void intel_pstate_update_util_pid(struct update_util_data *data,
                                         u64 time, unsigned int flags)
 {
@@ -1934,6 +1917,9 @@ static void intel_pstate_set_update_util_hook(unsigned int cpu_num)
 {
        struct cpudata *cpu = all_cpu_data[cpu_num];
 
+       if (hwp_active)
+               return;
+
        if (cpu->update_util_set)
                return;
 
@@ -1967,52 +1953,61 @@ static void intel_pstate_update_perf_limits(struct cpufreq_policy *policy,
 {
        int max_freq = intel_pstate_get_max_freq(cpu);
        int32_t max_policy_perf, min_policy_perf;
+       int max_state, turbo_max;
 
-       max_policy_perf = div_ext_fp(policy->max, max_freq);
-       max_policy_perf = clamp_t(int32_t, max_policy_perf, 0, int_ext_tofp(1));
+       /*
+        * HWP needs some special consideration, because on BDX the
+        * HWP_REQUEST uses abstract value to represent performance
+        * rather than pure ratios.
+        */
+       if (hwp_active) {
+               intel_pstate_get_hwp_max(cpu->cpu, &turbo_max, &max_state);
+       } else {
+               max_state = intel_pstate_get_base_pstate(cpu);
+               turbo_max = cpu->pstate.turbo_pstate;
+       }
+
+       max_policy_perf = max_state * policy->max / max_freq;
        if (policy->max == policy->min) {
                min_policy_perf = max_policy_perf;
        } else {
-               min_policy_perf = div_ext_fp(policy->min, max_freq);
+               min_policy_perf = max_state * policy->min / max_freq;
                min_policy_perf = clamp_t(int32_t, min_policy_perf,
                                          0, max_policy_perf);
        }
 
+       pr_debug("cpu:%d max_state %d min_policy_perf:%d max_policy_perf:%d\n",
+                policy->cpu, max_state,
+                min_policy_perf, max_policy_perf);
+
        /* Normalize user input to [min_perf, max_perf] */
        if (per_cpu_limits) {
-               cpu->min_perf = min_policy_perf;
-               cpu->max_perf = max_policy_perf;
+               cpu->min_perf_ratio = min_policy_perf;
+               cpu->max_perf_ratio = max_policy_perf;
        } else {
                int32_t global_min, global_max;
 
                /* Global limits are in percent of the maximum turbo P-state. */
-               global_max = percent_ext_fp(global.max_perf_pct);
-               global_min = percent_ext_fp(global.min_perf_pct);
-               if (max_freq != cpu->pstate.turbo_freq) {
-                       int32_t turbo_factor;
-
-                       turbo_factor = div_ext_fp(cpu->pstate.turbo_pstate,
-                                                 cpu->pstate.max_pstate);
-                       global_min = mul_ext_fp(global_min, turbo_factor);
-                       global_max = mul_ext_fp(global_max, turbo_factor);
-               }
+               global_max = DIV_ROUND_UP(turbo_max * global.max_perf_pct, 100);
+               global_min = DIV_ROUND_UP(turbo_max * global.min_perf_pct, 100);
                global_min = clamp_t(int32_t, global_min, 0, global_max);
 
-               cpu->min_perf = max(min_policy_perf, global_min);
-               cpu->min_perf = min(cpu->min_perf, max_policy_perf);
-               cpu->max_perf = min(max_policy_perf, global_max);
-               cpu->max_perf = max(min_policy_perf, cpu->max_perf);
+               pr_debug("cpu:%d global_min:%d global_max:%d\n", policy->cpu,
+                        global_min, global_max);
 
-               /* Make sure min_perf <= max_perf */
-               cpu->min_perf = min(cpu->min_perf, cpu->max_perf);
-       }
+               cpu->min_perf_ratio = max(min_policy_perf, global_min);
+               cpu->min_perf_ratio = min(cpu->min_perf_ratio, max_policy_perf);
+               cpu->max_perf_ratio = min(max_policy_perf, global_max);
+               cpu->max_perf_ratio = max(min_policy_perf, cpu->max_perf_ratio);
 
-       cpu->max_perf = round_up(cpu->max_perf, EXT_FRAC_BITS);
-       cpu->min_perf = round_up(cpu->min_perf, EXT_FRAC_BITS);
+               /* Make sure min_perf <= max_perf */
+               cpu->min_perf_ratio = min(cpu->min_perf_ratio,
+                                         cpu->max_perf_ratio);
 
-       pr_debug("cpu:%d max_perf_pct:%d min_perf_pct:%d\n", policy->cpu,
-                fp_ext_toint(cpu->max_perf * 100),
-                fp_ext_toint(cpu->min_perf * 100));
+       }
+       pr_debug("cpu:%d max_perf_ratio:%d min_perf_ratio:%d\n", policy->cpu,
+                cpu->max_perf_ratio,
+                cpu->min_perf_ratio);
 }
 
 static int intel_pstate_set_policy(struct cpufreq_policy *policy)
@@ -2039,10 +2034,10 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
                 */
                intel_pstate_clear_update_util_hook(policy->cpu);
                intel_pstate_max_within_limits(cpu);
+       } else {
+               intel_pstate_set_update_util_hook(policy->cpu);
        }
 
-       intel_pstate_set_update_util_hook(policy->cpu);
-
        if (hwp_active)
                intel_pstate_hwp_set(policy->cpu);
 
@@ -2115,8 +2110,8 @@ static int __intel_pstate_cpu_init(struct cpufreq_policy *policy)
 
        cpu = all_cpu_data[policy->cpu];
 
-       cpu->max_perf = int_ext_tofp(1);
-       cpu->min_perf = 0;
+       cpu->max_perf_ratio = 0xFF;
+       cpu->min_perf_ratio = 0;
 
        policy->min = cpu->pstate.min_pstate * cpu->pstate.scaling;
        policy->max = cpu->pstate.turbo_pstate * cpu->pstate.scaling;
@@ -2558,7 +2553,6 @@ static int __init intel_pstate_init(void)
        } else {
                hwp_active++;
                intel_pstate.attr = hwp_cpufreq_attrs;
-               pstate_funcs.update_util = intel_pstate_update_util_hwp;
                goto hwp_cpu_matched;
        }
 } else {
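The core of the rework above is that intel_pstate now clamps limits
directly in P-state ratio space (min_perf_ratio/max_perf_ratio) instead of
extended fixed-point fractions of the turbo capacity. Here is a standalone
sketch of that arithmetic with hypothetical values, not taken from any
real machine: turbo_max = max_state = 36 (3.6 GHz at 100 MHz per ratio
step), global limits of 20%..80%, and a policy window of 1.2..2.8 GHz.

/* Sketch of the ratio-space clamping used by the new code paths. */
#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static int clamp(int v, int lo, int hi)
{
        return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
        int turbo_max = 36, max_state = 36;     /* hypothetical ratios */
        int max_freq = 3600000;                 /* kHz */
        int policy_min = 1200000, policy_max = 2800000;
        int min_perf_pct = 20, max_perf_pct = 80;

        /* Policy limits mapped to ratios, as in the new code. */
        int max_policy_perf = max_state * policy_max / max_freq;        /* 28 */
        int min_policy_perf = clamp(max_state * policy_min / max_freq,
                                    0, max_policy_perf);                /* 12 */

        /* Global percent limits mapped to ratios with DIV_ROUND_UP. */
        int global_max = DIV_ROUND_UP(turbo_max * max_perf_pct, 100);   /* 29 */
        int global_min = clamp(DIV_ROUND_UP(turbo_max * min_perf_pct, 100),
                               0, global_max);                          /* 8 */

        /* Combine policy and global limits, keeping min <= max. */
        int min_ratio = clamp(min_policy_perf, global_min, max_policy_perf);
        int max_ratio = clamp(max_policy_perf, min_policy_perf, global_max);

        if (min_ratio > max_ratio)
                min_ratio = max_ratio;

        printf("min_perf_ratio=%d max_perf_ratio=%d\n", min_ratio, max_ratio);
        return 0;
}

Running this prints min_perf_ratio=12 max_perf_ratio=28, i.e. 1.2 GHz and
2.8 GHz at 100 MHz per ratio step, matching the policy window in this
example. These are exactly the kinds of values intel_pstate_hwp_set() and
intel_pstate_prepare_request() now consume directly, with no fixed-point
rounding in between.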