author     Thomas Gleixner <tglx@linutronix.de>   2009-04-11 04:43:41 -0400
committer  Thomas Gleixner <tglx@linutronix.de>   2009-05-15 09:32:45 -0400
commit     dce48a84adf1806676319f6f480e30a6daa012f9 (patch)
tree       79151f5d31d9c3dcdc723ab8877cb943b944890e /kernel/sched_idletask.c
parent     2ff799d3cff1ecb274049378b28120ee5c1c5e5f (diff)
sched, timers: move calc_load() to scheduler
Dimitri Sivanich noticed that xtime_lock is held write locked across
calc_load() which iterates over all online CPUs. That can cause long
latencies for xtime_lock readers on large SMP systems.
The load average calculation is a rough estimate anyway, so there is
no real need to protect the readers vs. the update. It's not a problem
when the avenrun array is updated while a reader copies the values.
Instead of iterating over all online CPUs let the scheduler_tick code
update the number of active tasks shortly before the avenrun update
happens. The avenrun update itself is handled by the CPU which calls
do_timer().
[ Impact: reduce xtime_lock write locked section ]
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
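
The diffstat below is limited to kernel/sched_idletask.c, so only the idle-path hook is visible in this view. For orientation, here is a rough sketch (plain C, not copied from this hunk) of how the pieces described above fit together; the globals calc_load_tasks and calc_load_update, the rq->calc_load_active field, and the calc_global_load() entry point are assumed from the rest of the patch and may differ in detail from the actual kernel/sched.c code:

/*
 * Sketch only: per-CPU accounting feeds a global atomic counter, and the
 * CPU that runs do_timer() folds that counter into avenrun[] without
 * walking all online CPUs under xtime_lock.
 */
static atomic_long_t calc_load_tasks;   /* system-wide runnable + uninterruptible tasks */
static unsigned long calc_load_update;  /* jiffies stamp of the next avenrun update */

/* Classic fixed-point load decay: load = load*exp + active*(1 - exp). */
static unsigned long
calc_load(unsigned long load, unsigned long exp, unsigned long active)
{
	load *= exp;
	load += active * (FIXED_1 - exp);
	return load >> FSHIFT;
}

/*
 * Per-CPU part, run from scheduler_tick() shortly before the avenrun
 * update and from pick_next_task_idle() (see the hunk below) before a
 * potentially long idle sleep: fold this runqueue's delta into the
 * global counter.
 */
static void calc_load_account_active(struct rq *this_rq)
{
	long nr_active, delta;

	nr_active = this_rq->nr_running;
	nr_active += (long)this_rq->nr_uninterruptible;

	if (nr_active != this_rq->calc_load_active) {
		delta = nr_active - this_rq->calc_load_active;
		this_rq->calc_load_active = nr_active;
		atomic_long_add(delta, &calc_load_tasks);
	}
}

/*
 * Global part, run by the CPU that calls do_timer(): update avenrun[]
 * from the precomputed counter instead of iterating over all online
 * CPUs inside the xtime_lock write section.
 */
void calc_global_load(void)
{
	long active;

	if (time_before(jiffies, calc_load_update))
		return;

	active = atomic_long_read(&calc_load_tasks);
	active = active > 0 ? active * FIXED_1 : 0;

	avenrun[0] = calc_load(avenrun[0], EXP_1, active);
	avenrun[1] = calc_load(avenrun[1], EXP_5, active);
	avenrun[2] = calc_load(avenrun[2], EXP_15, active);

	calc_load_update += LOAD_FREQ;
}

Because readers of avenrun[] only copy three words and the result is an estimate anyway, the counter and the array need no locking against readers, which is what allows the calculation to move out of the xtime_lock write-locked section.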
Diffstat (limited to 'kernel/sched_idletask.c')
-rw-r--r--   kernel/sched_idletask.c   3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched_idletask.c b/kernel/sched_idletask.c
index 8a21a2e28c13..499672c10cbd 100644
--- a/kernel/sched_idletask.c
+++ b/kernel/sched_idletask.c
@@ -22,7 +22,8 @@ static void check_preempt_curr_idle(struct rq *rq, struct task_struct *p, int sy
 static struct task_struct *pick_next_task_idle(struct rq *rq)
 {
 	schedstat_inc(rq, sched_goidle);
-
+	/* adjust the active tasks as we might go into a long sleep */
+	calc_load_account_active(rq);
 	return rq->idle;
 }
 