author    Viresh Kumar <viresh.kumar@linaro.org>  2017-07-28 02:46:38 -0400
committer Rafael J. Wysocki <rafael.j.wysocki@intel.com>  2017-08-01 08:24:53 -0400
commit    674e75411fc260b0d4532701228cfe12fc090da8 (patch)
tree      78752d7e6e2ec87c6f0bfb3c2b1bf2c6a7ed51dc /kernel/sched/sched.h
parent    251accf98591d7f59f7a2bac2e05c66d16bf2811 (diff)
sched: cpufreq: Allow remote cpufreq callbacks
With Android UI and benchmarks, the latency of the cpufreq response to certain scheduling events can become very critical. Currently, callbacks into cpufreq governors are only made from the scheduler if the target CPU of the event is the same as the current CPU. This means there are certain situations where a target CPU may not run the cpufreq governor for some time.

One test case that shows this behavior is where a task starts running on CPU0, and then a new task is spawned on CPU0 by a task on CPU1. If the system is configured such that new tasks should receive maximum demand initially, this should result in CPU0 raising its frequency immediately. Because of the above-mentioned limitation, though, this does not occur.

This patch updates the scheduler core to call the cpufreq callbacks for remote CPUs as well. The schedutil, ondemand and conservative governors are updated to process cpufreq utilization update hooks called for remote CPUs where the remote CPU is managed by the cpufreq policy of the local CPU. The intel_pstate driver is updated to always reject remote callbacks.

This was tested with a couple of use cases (Android: hackbench, recentfling, galleryfling, vellamo; Ubuntu: hackbench) on an ARM Hikey board (64-bit octa-core, single policy). Only galleryfling showed minor improvements, while the others didn't show much deviation. The reason is that this patch only targets a corner case, where all of the following must be true to improve performance, and that doesn't happen too often with these tests:

- The task is migrated to another CPU.
- The task has high demand, and should take the target CPU to higher OPPs.
- The target CPU doesn't call into the cpufreq governor until the next tick.

Based on initial work from Steve Muckle.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Saravana Kannan <skannan@codeaurora.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Diffstat (limited to 'kernel/sched/sched.h')
-rw-r--r--  kernel/sched/sched.h | 10
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index eeef1a3086d1..aa9d5b87b4f8 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2070,19 +2070,13 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
 {
 	struct update_util_data *data;
 
-	data = rcu_dereference_sched(*this_cpu_ptr(&cpufreq_update_util_data));
+	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data,
+						  cpu_of(rq)));
 	if (data)
 		data->func(data, rq_clock(rq), flags);
 }
-
-static inline void cpufreq_update_this_cpu(struct rq *rq, unsigned int flags)
-{
-	if (cpu_of(rq) == smp_processor_id())
-		cpufreq_update_util(rq, flags);
-}
 #else
 static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
-static inline void cpufreq_update_this_cpu(struct rq *rq, unsigned int flags) {}
 #endif /* CONFIG_CPU_FREQ */
 
 #ifdef arch_scale_freq_capacity