author     Suresh Siddha <suresh.b.siddha@intel.com>   2007-08-23 09:18:02 -0400
committer  Ingo Molnar <mingo@elte.hu>                 2007-08-23 09:18:02 -0400
commit     f8700df7c419781efb34696de7e7f49717f8ede7 (patch)
tree       c3b4c8e563e1caf5e144310817fc1ecdf812ae41 /kernel
parent     efe567fc8281661524ffa75477a7c4ca9b466c63 (diff)
sched: fix broken SMT/MC optimizations
On a four-package system with HT, the HT load-balancing optimizations were
broken. For example, if two tasks end up running on two logical threads
of one of the packages, the scheduler is not able to pull one of the tasks
to a completely idle package.
In this scenario, for nice-0 tasks, the imbalance calculated by the scheduler
will be 512 and find_busiest_queue() will return NULL (each CPU's load of
1024 exceeds the imbalance, and each CPU has only one task running, so every
runqueue is skipped).
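
As a worked check of these numbers, the standalone sketch below compares the
old and new conditions with the values from this message. It is not kernel
code; the exact value of SCHED_LOAD_SCALE_FUZZ varies across kernel versions,
so the fuzz used here is purely illustrative.

#include <stdio.h>

#define SCHED_LOAD_SCALE 1024UL	/* load contribution of one nice-0 task */

int main(void)
{
	unsigned long imbalance = 512;	/* value from the commit message */
	unsigned long busiest_load_per_task = SCHED_LOAD_SCALE;
	unsigned long fuzz = 128;	/* illustrative only; version-dependent */

	/* Old test: 512 + fuzz < 512 can never hold, so the small-imbalance
	 * fixup that bumps *imbalance is never entered. */
	int old_cond = imbalance + fuzz < busiest_load_per_task / 2;

	/* New test: 512 + fuzz < 1024 holds for any fuzz below 512, so the
	 * fixup can raise the imbalance enough to force one task to move. */
	int new_cond = imbalance + fuzz < busiest_load_per_task;

	printf("old condition: %d, new condition: %d\n", old_cond, new_cond);
	return 0;
}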
Similarly, the MC scheduler optimizations are also fixed by this patch.
[ mingo@elte.hu: restored fair balancing by increasing the fuzz and
adding it back to the power decision, without the /2
factor. ]
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/sched.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 5fecbbba12ac..d96030db8ff7 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2517,7 +2517,7 @@ group_next:
 	 * a think about bumping its value to force at least one task to be
 	 * moved
 	 */
-	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task/2) {
+	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task) {
 		unsigned long tmp, pwr_now, pwr_move;
 		unsigned int imbn;
 
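
To see why bumping the imbalance matters downstream, here is a minimal
user-space model of the single-task skip rule that the commit message
attributes to find_busiest_queue(). This is a sketch under the message's
assumptions, not the kernel function itself.

#include <stdio.h>
#include <stddef.h>

/* Simplified runqueue: weighted load plus number of runnable tasks. */
struct rq_model {
	unsigned long load;
	unsigned int nr_running;
};

/* A runqueue with one task is skipped when its load exceeds the imbalance,
 * since moving its only task would overshoot the requested correction. */
static struct rq_model *pick_busiest(struct rq_model *rqs, int n,
				     unsigned long imbalance)
{
	struct rq_model *busiest = NULL;
	unsigned long max_load = 0;

	for (int i = 0; i < n; i++) {
		if (rqs[i].nr_running == 1 && rqs[i].load > imbalance)
			continue;
		if (rqs[i].load > max_load) {
			max_load = rqs[i].load;
			busiest = &rqs[i];
		}
	}
	return busiest;
}

int main(void)
{
	/* Two HT siblings, each running a single nice-0 task (load 1024). */
	struct rq_model rqs[2] = { { 1024, 1 }, { 1024, 1 } };

	/* With imbalance 512, both runqueues are skipped: nothing is pulled. */
	printf("imbalance 512  -> %s\n",
	       pick_busiest(rqs, 2, 512) ? "found" : "NULL");

	/* Once the small-imbalance fixup raises the imbalance to the per-task
	 * load, one runqueue qualifies and the idle package can pull a task. */
	printf("imbalance 1024 -> %s\n",
	       pick_busiest(rqs, 2, 1024) ? "found" : "NULL");
	return 0;
}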