author     Mike Galbraith <efault@gmx.de>    2010-03-11 11:16:20 -0500
committer  Ingo Molnar <mingo@elte.hu>       2010-03-11 12:32:50 -0500
commit     a64692a3afd85fe048551ab89142fd5ca99a0dbd
tree       7d2800efb7fb9e3aa5c99ab883004932fdc362c6  /kernel/sched_fair.c
parent     e12f31d3e5d36328c7fbd0fce40a95e70b59152c
sched: Cleanup/optimize clock updates
Now that we no longer depend on the clock being updated prior to enqueueing
on a migratory wakeup, we can clean up a bit, placing calls to update_rq_clock()
exactly where they are needed, i.e., on enqueue, dequeue and schedule events.
In the case of a freshly enqueued task immediately preempting, we can skip the
update during preemption, as the clock was just updated by the enqueue event.
We also save an unneeded call during a migratory wakeup by not updating the
previous runqueue, where update_curr() won't be invoked.
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1268301199.6785.32.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/sched_fair.c')
-rw-r--r--  kernel/sched_fair.c | 2 --
1 file changed, 0 insertions(+), 2 deletions(-)
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index c3b69d4b5d65..69e582020ff8 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -3064,8 +3064,6 @@ static void active_load_balance(struct rq *busiest_rq, int busiest_cpu)
 
 	/* move a task from busiest_rq to target_rq */
 	double_lock_balance(busiest_rq, target_rq);
-	update_rq_clock(busiest_rq);
-	update_rq_clock(target_rq);
 
 	/* Search for an sd spanning us and the target CPU. */
 	for_each_domain(target_cpu, sd) {
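
To make the optimization described in the commit message concrete, below is a minimal,
self-contained C sketch of the idea: a flag on the runqueue marks the clock as freshly
updated by an enqueue, so the back-to-back update that would otherwise happen when the
woken task immediately preempts can be skipped. This is an illustration under invented
names (toy_rq, read_hw_clock, enqueue_task, check_preempt, schedule_next), not the
scheduler code touched by this patch.

/*
 * Illustrative sketch only (not kernel code): a toy runqueue with a
 * "skip the next clock update" flag, modelling the optimization the
 * commit message describes.  All names here are invented for the
 * example and do not appear in this patch.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct toy_rq {
	uint64_t clock;			/* per-runqueue clock, in ticks */
	bool skip_clock_update;		/* set when the clock is known to be fresh */
};

/* Stand-in for reading the hardware clock: time advances 10 ticks per read. */
static uint64_t read_hw_clock(void)
{
	static uint64_t now;
	return now += 10;
}

static void update_rq_clock(struct toy_rq *rq)
{
	/* Skip the (relatively costly) clock read if it was just done. */
	if (rq->skip_clock_update) {
		rq->skip_clock_update = false;
		return;
	}
	rq->clock = read_hw_clock();
}

/* Enqueue refreshes the clock so accounting for the new task starts at "now". */
static void enqueue_task(struct toy_rq *rq)
{
	update_rq_clock(rq);
	/* ... insert the task into the runqueue ... */
}

/*
 * If the freshly enqueued task immediately preempts the running task, the
 * clock updated by the enqueue is still current when we reschedule, so the
 * back-to-back update can be skipped.
 */
static void check_preempt(struct toy_rq *rq, bool preempts)
{
	if (preempts)
		rq->skip_clock_update = true;
}

static void schedule_next(struct toy_rq *rq)
{
	update_rq_clock(rq);	/* no-op when the enqueue just updated the clock */
	/* ... pick and switch to the next task ... */
}

int main(void)
{
	struct toy_rq rq = { 0, false };

	enqueue_task(&rq);		/* clock -> 10 */
	check_preempt(&rq, true);	/* wakeup preempts the current task  */
	schedule_next(&rq);		/* update skipped, clock stays at 10 */
	printf("after preempting wakeup:     clock = %llu\n",
	       (unsigned long long)rq.clock);

	enqueue_task(&rq);		/* clock -> 20 */
	check_preempt(&rq, false);	/* no preemption this time */
	schedule_next(&rq);		/* clock -> 30 */
	printf("after non-preempting wakeup: clock = %llu\n",
	       (unsigned long long)rq.clock);
	return 0;
}

The two update_rq_clock() calls removed from active_load_balance() in the diff above
follow the same reasoning from the commit message: the clocks are refreshed by the
enqueue, dequeue and schedule paths that actually need them, so updating them again at
this point was redundant.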