author     Patrick Bellasi <patrick.bellasi@arm.com>  2018-05-24 10:10:23 -0400
committer  Ingo Molnar <mingo@kernel.org>             2018-05-25 02:04:56 -0400
commit     2539fc82aa9b07d968cf9ba1ffeec3e0416ac721
tree       255d1eb5d74eb49b11d4abfea24fe17d79a968fb /kernel/sched
parent     8ecf04e11283a28ca88b8b8049ac93c3a99fcd2c
sched/fair: Update util_est before updating schedutil
When a task is enqueued the estimated utilization of a CPU is updated
to better support the selection of the required frequency.
However, schedutil is (implicitly) updated by update_load_avg() which
always happens before util_est_{en,de}queue(), thus potentially
introducing a latency between estimated utilization updates and
frequency selections.
Let's update util_est at the beginning of enqueue_task_fair(),
which ensures that all subsequent schedutil updates will see the
most up-to-date estimated utilization value for a CPU.
Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Steve Muckle <smuckle@google.com>
Fixes: 7f65ea42eb00 ("sched/fair: Add util_est on top of PELT")
Link: http://lkml.kernel.org/r/20180524141023.13765-3-patrick.bellasi@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched')
 kernel/sched/fair.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 748cb054fefd..e497c05aab7f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5386,6 +5386,14 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	struct sched_entity *se = &p->se;
 
+	/*
+	 * The code below (indirectly) updates schedutil which looks at
+	 * the cfs_rq utilization to select a frequency.
+	 * Let's add the task's estimated utilization to the cfs_rq's
+	 * estimated utilization, before we update schedutil.
+	 */
+	util_est_enqueue(&rq->cfs, p);
+
 	/*
 	 * If in_iowait is set, the code below may not trigger any cpufreq
 	 * utilization updates, so do it here explicitly with the IOWAIT flag
 	 * passed.
@@ -5426,7 +5434,6 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	if (!se)
 		add_nr_running(rq, 1);
 
-	util_est_enqueue(&rq->cfs, p);
 	hrtick_update(rq);
 }
 