commit     05ca62c6ca17f39b88fa956d5ebc1fa6e93ad5e3
author     Paul Turner <pjt@google.com>    2011-01-21 23:45:02 -0500
committer  Ingo Molnar <mingo@elte.hu>     2011-01-26 06:31:03 -0500
tree       a56f4f47dd23ef65ebb5c579adb1fbb29ba3faed /kernel/sched_fair.c
parent     b815f1963e47b9b69bb17e0588bd5af5b1114ae0
sched: Use rq->clock_task instead of rq->clock for correctly maintaining load averages
The delta in clock_task is a fairer attribution of how much time a task group (tg)
has been contributing load to the current CPU.

While not really important, it also means we're more in sync (by magnitude)
with respect to periodic updates, since the __update_curr() deltas are
clock_task based.
Signed-off-by: Paul Turner <pjt@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20110122044852.007092349@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
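For context, rq->clock counts all wall time on the CPU, while rq->clock_task excludes time spent in interrupt context (when IRQ time accounting is enabled), so deltas taken against clock_task reflect only time a task group could actually have been running. The sketch below models that relationship; the struct and tick() helper are illustrative stand-ins, not the kernel's actual code:

```c
#include <assert.h>

/* Illustrative model of the two runqueue clocks (nanoseconds).
 * rq->clock advances with total wall time on the CPU; rq->clock_task
 * advances only by the portion not consumed by IRQ handling. */
struct rq_clocks {
	unsigned long long clock;      /* total CPU time */
	unsigned long long clock_task; /* clock minus IRQ time */
};

/* Hypothetical helper: advance both clocks by delta ns, of which
 * irq_delta ns were spent servicing interrupts. */
static void tick(struct rq_clocks *rq, unsigned long long delta,
		 unsigned long long irq_delta)
{
	rq->clock += delta;
	rq->clock_task += delta - irq_delta;
}
```

Under this model, a load_stamp delta computed from clock would charge IRQ time to the task group's load history, whereas one computed from clock_task (as the patch does) would not.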
Diffstat (limited to 'kernel/sched_fair.c')
 kernel/sched_fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 1997383ba4d6..0c26e2df450e 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -725,7 +725,7 @@ static void update_cfs_load(struct cfs_rq *cfs_rq, int global_update)
 	if (cfs_rq->tg == &root_task_group)
 		return;
 
-	now = rq_of(cfs_rq)->clock;
+	now = rq_of(cfs_rq)->clock_task;
 	delta = now - cfs_rq->load_stamp;
 
 	/* truncate load history at 4 idle periods */