author     Peter Zijlstra <a.p.zijlstra@chello.nl>  2010-04-22 15:50:19 -0400
committer  Ingo Molnar <mingo@elte.hu>              2010-04-23 05:02:02 -0400
commit     74f5187ac873042f502227701ed1727e7c5fbfa9
tree       b200960d04b0a955aaf9a101d6f0a4ed34f07bb2  /kernel/sched_idletask.c
parent     09a40af5240de02d848247ab82440ad75b31ab11
sched: Cure load average vs NO_HZ woes
Chase reported that, because we decrement calc_load_task prematurely
(before the next LOAD_FREQ sample), the load average could be skewed
by as much as the number of CPUs in the machine.
This patch, based on Chase's patch, cures the problem by keeping the
delta of a CPU going into NO_HZ idle in a separate accumulator and
folding it in on the next LOAD_FREQ update.
This restores the balance, and we get strict LOAD_FREQ-period samples.
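The mechanism can be pictured with a small user-space model. This is a
sketch only, not the kernel code: calc_load_tasks, calc_load_tasks_idle,
struct cpu_rq, fold_active(), account_idle() and load_freq_sample() are
simplified stand-ins assumed here to illustrate the idea behind the
calc_load_account_idle() call seen in the diff below.

#include <stdatomic.h>
#include <stdio.h>

static atomic_long calc_load_tasks;       /* global active-task count */
static atomic_long calc_load_tasks_idle;  /* deltas parked by CPUs going NO_HZ idle */

struct cpu_rq {
	long nr_active;        /* runnable + uninterruptible tasks on this CPU */
	long calc_load_active; /* contribution already folded into the global count */
};

/* How much did this CPU's contribution change since it was last accounted? */
static long fold_active(struct cpu_rq *rq)
{
	long delta = rq->nr_active - rq->calc_load_active;
	rq->calc_load_active = rq->nr_active;
	return delta;
}

/* CPU enters NO_HZ idle: park the delta instead of applying it immediately. */
static void account_idle(struct cpu_rq *rq)
{
	long delta = fold_active(rq);
	if (delta)
		atomic_fetch_add(&calc_load_tasks_idle, delta);
}

/* LOAD_FREQ tick: fold the parked idle deltas into the global count. */
static void load_freq_sample(void)
{
	long idle = atomic_exchange(&calc_load_tasks_idle, 0);
	atomic_fetch_add(&calc_load_tasks, idle);
	printf("sample: calc_load_tasks = %ld\n", atomic_load(&calc_load_tasks));
}

int main(void)
{
	struct cpu_rq cpu0 = { .nr_active = 3, .calc_load_active = 0 };

	/* CPU0 accounts its three active tasks normally... */
	atomic_fetch_add(&calc_load_tasks, fold_active(&cpu0));

	/* ...they block, and CPU0 enters NO_HZ idle before the next sample. */
	cpu0.nr_active = 0;
	account_idle(&cpu0);

	/* The global count only changes at the strict LOAD_FREQ boundary. */
	load_freq_sample();	/* prints: sample: calc_load_tasks = 0 */
	return 0;
}

Because the idle delta is only applied at the sample itself, the global
count is no longer decremented ahead of the LOAD_FREQ boundary, which is
the premature decrement the report describes.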
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Chase Douglas <chase.douglas@canonical.com>
LKML-Reference: <1271934490.1776.343.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/sched_idletask.c')
-rw-r--r--  kernel/sched_idletask.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/sched_idletask.c b/kernel/sched_idletask.c
index bea2b8f12024..9fa0f402c87c 100644
--- a/kernel/sched_idletask.c
+++ b/kernel/sched_idletask.c
@@ -23,8 +23,7 @@ static void check_preempt_curr_idle(struct rq *rq, struct task_struct *p, int fl
 static struct task_struct *pick_next_task_idle(struct rq *rq)
 {
 	schedstat_inc(rq, sched_goidle);
-	/* adjust the active tasks as we might go into a long sleep */
-	calc_load_account_active(rq);
+	calc_load_account_idle(rq);
 	return rq->idle;
 }
 