author    Vincent Guittot <vincent.guittot@linaro.org>    2014-01-22 02:45:34 -0500
committer Ingo Molnar <mingo@kernel.org>    2014-01-23 08:48:34 -0500
commit    9390675af0835ae1d654d33bfcf16096028550ad (patch)
tree      d2215be2959476834e652137151e3151160eb4d9 /kernel/sched
parent    15c81026204da897a05424c79263aea861a782cc (diff)
Revert "sched: Fix sleep time double accounting in enqueue entity"
This reverts commit 282cf499f03ec1754b6c8c945c9674b02631fb0f.

With the current implementation, the load average statistics of a sched entity change according to other activity on the CPU, even if this activity is done between the running windows of the sched entity and has no influence on the running duration of the task.

When a task wakes up on the same CPU, we currently update last_runnable_update with the return value of __synchronize_entity_decay without updating runnable_avg_sum and runnable_avg_period accordingly. In fact, we have to sync the load_contrib of the se with the rq's blocked_load_contrib before removing it from the latter (with __synchronize_entity_decay), but we must keep last_runnable_update unchanged so that runnable_avg_sum/period are updated correctly during the next update_entity_load_avg.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Ben Segall <bsegall@google.com>
Cc: pjt@google.com
Cc: alex.shi@linaro.org
Link: http://lkml.kernel.org/r/1390376734-6800-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched')
-rw-r--r--  kernel/sched/fair.c  8
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b24b6cfde9aa..efe6457ac5c8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2356,13 +2356,7 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
 		}
 		wakeup = 0;
 	} else {
-		/*
-		 * Task re-woke on same cpu (or else migrate_task_rq_fair()
-		 * would have made count negative); we must be careful to avoid
-		 * double-accounting blocked time after synchronizing decays.
-		 */
-		se->avg.last_runnable_update += __synchronize_entity_decay(se)
-							<< 20;
+		__synchronize_entity_decay(se);
 	}
 
 	/* migrated tasks did not contribute to our blocked load */
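
The hunk above replaces the timestamp-shifting wakeup path with a bare __synchronize_entity_decay() call. The standalone C program below is a minimal toy sketch of a decayed runnable average, contrasting the two strategies the commit message discusses: advancing the last-update timestamp by the decayed periods without touching the sums (the reverted behaviour) versus leaving the timestamp alone so the next update folds the sleep window in as non-runnable time (the behaviour after this revert). All names and constants here (toy_avg, toy_update, PERIOD_NS, DECAY_NUM) are illustrative assumptions, not the fair.c internals.

/*
 * Toy model of a decayed runnable average; illustrative only, not kernel code.
 */
#include <stdio.h>
#include <stdint.h>

#define PERIOD_NS  (1024 * 1024)  /* one decay period in ns (1 << 20, ~1ms) */
#define DECAY_NUM  978            /* per-period decay factor, scaled by 1024 */
#define DECAY_DEN  1024

struct toy_avg {
	uint64_t last_update;   /* timestamp of last accounting, in ns */
	uint64_t runnable_sum;  /* decayed time spent runnable */
	uint64_t period_sum;    /* decayed elapsed time */
};

static uint64_t decay_once(uint64_t v)
{
	return v * DECAY_NUM / DECAY_DEN;
}

/*
 * Fold the time since last_update into the sums, one full period at a
 * time; 'runnable' says whether the entity was runnable over the window.
 */
static void toy_update(struct toy_avg *a, uint64_t now, int runnable)
{
	uint64_t delta = now - a->last_update;

	while (delta >= PERIOD_NS) {
		a->period_sum += PERIOD_NS;
		if (runnable)
			a->runnable_sum += PERIOD_NS;
		a->runnable_sum = decay_once(a->runnable_sum);
		a->period_sum = decay_once(a->period_sum);
		a->last_update += PERIOD_NS;
		delta -= PERIOD_NS;
	}
	if (runnable)
		a->runnable_sum += delta;
	a->period_sum += delta;
	a->last_update += delta;
}

int main(void)
{
	struct toy_avg skewed = { 0 }, plain = { 0 };
	uint64_t sleep_start = 4ULL * PERIOD_NS;           /* ran 4 periods */
	uint64_t wakeup = sleep_start + 8ULL * PERIOD_NS;  /* slept 8 periods */
	uint64_t decays;

	toy_update(&skewed, sleep_start, 1);
	toy_update(&plain, sleep_start, 1);

	/*
	 * Reverted behaviour: push last_update forward by the fully decayed
	 * periods without decaying runnable_sum/period_sum, so the next
	 * update sees (almost) no delta and the sleep window is dropped.
	 */
	decays = (wakeup - skewed.last_update) / PERIOD_NS;
	skewed.last_update += decays * PERIOD_NS;

	/*
	 * Behaviour after the revert: last_update is left untouched and the
	 * whole sleep window is accounted as non-runnable time below.
	 */
	toy_update(&skewed, wakeup, 0);
	toy_update(&plain, wakeup, 0);

	printf("timestamp shifted: runnable=%llu / period=%llu\n",
	       (unsigned long long)skewed.runnable_sum,
	       (unsigned long long)skewed.period_sum);
	printf("timestamp kept   : runnable=%llu / period=%llu\n",
	       (unsigned long long)plain.runnable_sum,
	       (unsigned long long)plain.period_sum);
	return 0;
}

With these toy numbers, the "timestamp shifted" variant keeps its pre-sleep runnable/period ratio, while the "timestamp kept" variant lets the sleep decay the average, which is the behavioural difference the commit message describes.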