author		Max Krasnyansky <maxk@qualcomm.com>	2008-07-15 07:43:49 -0400
committer	Ingo Molnar <mingo@elte.hu>	2008-07-18 07:22:25 -0400
commit		e761b7725234276a802322549cee5255305a0930 (patch)
tree		27b351a7d5fc9a93590e0effce1c5adb1bfcebc0 /kernel/sched_rt.c
parent		7ebefa8ceefed44cc321be70afc54a585a68ac0b (diff)
cpu hotplug, sched: Introduce cpu_active_map and redo sched domain management (take 2)
This is based on Linus' idea of creating a cpu_active_map that prevents
the scheduler load balancer from migrating tasks to a cpu that is going
down.
It allows us to simplify the domain management code and avoid unnecessary
domain rebuilds during cpu hotplug event handling.
Please ignore the cpusets part for now. It needs some more work in order
to avoid crazy lock nesting. Although I did simplify and unify the domain
reinitialization logic. We now simply call partition_sched_domains() in
all cases. This means that we're using the exact same code paths as in
the cpusets case and hence the tests below cover cpusets too.
Cpuset changes to make rebuild_sched_domains() callable from various
contexts are in the separate patch (right next after this one).
This not only boots but also easily handles
while true; do make clean; make -j 8; done
and
while true; do on-off-cpu 1; done
at the same time.
(on-off-cpu 1 simply does the echo 0/1 > /sys/.../cpu1/online thing).
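The on-off-cpu helper itself is not included in this patch; a minimal sketch of what it likely does, assuming the standard sysfs CPU hotplug knob (the SYSFS_ROOT override and the /tmp mock tree are inventions of this sketch so it can run without root):

```shell
#!/bin/sh
# Hypothetical stand-in for "on-off-cpu 1": toggle a CPU offline and back
# online via the sysfs hotplug interface. For safe demonstration this
# defaults to a mock tree under /tmp; point SYSFS_ROOT at /sys (as root)
# to drive real hotplug.
sysroot="${SYSFS_ROOT:-/tmp/mock-sys}"
cpu="${1:-1}"
online="$sysroot/devices/system/cpu/cpu$cpu/online"

mkdir -p "$(dirname "$online")"   # no-op on a real sysfs tree
echo 0 > "$online"                # take the CPU down
echo 1 > "$online"                # bring it back up
cat "$online"
```

Looping this in one terminal while a parallel kernel build runs in another reproduces the stress test described above.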
Surprisingly the box (dual-core Core2) is quite usable. In fact I'm typing
this right now in gnome-terminal and things are moving along just fine.
Also this is running with most of the debug features enabled (lockdep,
mutex debugging, etc.) with no BUG_ONs or lockdep complaints so far.
I believe I addressed all of Dmitry's comments on Linus' original
version. I changed both the fair and rt balancers to mask out non-active
cpus, and replaced cpu_is_offline() with !cpu_active() in the main
scheduler code where it made sense (to me).
Signed-off-by: Max Krasnyanskiy <maxk@qualcomm.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Gregory Haskins <ghaskins@novell.com>
Cc: dmitry.adamushko@gmail.com
Cc: pj@sgi.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/sched_rt.c')
 kernel/sched_rt.c | 7 +++++++
 1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index d3d1cccb3d7b..50735bb96149 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -934,6 +934,13 @@ static int find_lowest_rq(struct task_struct *task)
 		return -1; /* No targets found */
 
 	/*
+	 * Only consider CPUs that are usable for migration.
+	 * I guess we might want to change cpupri_find() to ignore those
+	 * in the first place.
+	 */
+	cpus_and(*lowest_mask, *lowest_mask, cpu_active_map);
+
+	/*
 	 * At this point we have built a mask of cpus representing the
 	 * lowest priority tasks in the system. Now we want to elect
 	 * the best one based on our affinity and topology.