author		Suresh Siddha <suresh.b.siddha@intel.com>	2007-09-05 08:32:48 -0400
committer	Ingo Molnar <mingo@elte.hu>			2007-09-05 08:32:48 -0400
commit		7fd0d2dde929ead79901e389e70dbfb3c6c06986
tree		577c4626e1e6f1de79e41deaeea6699261c873aa
parent		b21010ed6498391c0f359f2a89c907533fe07fec
sched: fix MC/HT scheduler optimization, without breaking the FUZZ logic.
First, fix the check

	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task)

by replacing it with

	if (*imbalance < busiest_load_per_task)

The current check is always false for nice-0 tasks, because for them
SCHED_LOAD_SCALE_FUZZ is the same as busiest_load_per_task.
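To see the arithmetic concretely, here is a minimal standalone sketch,
assuming the definitions of this era's <linux/sched.h> (SCHED_LOAD_SCALE
is 1024 and SCHED_LOAD_SCALE_FUZZ is defined as SCHED_LOAD_SCALE; the
values below are restated only for illustration):

	#include <assert.h>

	/*
	 * Illustrative values (assumption stated above): a nice-0 task
	 * has busiest_load_per_task == SCHED_LOAD_SCALE.
	 */
	#define SCHED_LOAD_SCALE	1024UL
	#define SCHED_LOAD_SCALE_FUZZ	SCHED_LOAD_SCALE

	int main(void)
	{
		unsigned long busiest_load_per_task = SCHED_LOAD_SCALE;
		unsigned long imbalance;

		for (imbalance = 0; imbalance < 4096; imbalance++)
			/*
			 * Old check: never true, since x + 1024 < 1024
			 * is impossible for unsigned x.
			 */
			assert(!(imbalance + SCHED_LOAD_SCALE_FUZZ <
				 busiest_load_per_task));
		return 0;
	}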
With the above change alone, *imbalance was getting reset to 0 in the
corner-case condition, making the FUZZ logic fail. Fix it by not
corrupting the imbalance: change *imbalance only when the code finds
that the HT/MC optimization is actually needed.
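In outline, the small-imbalance path now touches *imbalance only on the
profitable side of the power comparison. A standalone sketch of the new
control flow (small_imbalance_path() is a made-up helper for
illustration, not a function in sched.c):

	#include <stdio.h>

	/*
	 * Hypothetical helper mirroring the fixed code: *imbalance is
	 * overwritten only when moving a task gains throughput
	 * (pwr_move > pwr_now); otherwise the caller's value survives,
	 * so the FUZZ logic downstream still sees a usable imbalance.
	 */
	static void small_imbalance_path(unsigned long *imbalance,
					 unsigned long busiest_load_per_task,
					 unsigned long pwr_now,
					 unsigned long pwr_move)
	{
		if (pwr_move > pwr_now)
			*imbalance = busiest_load_per_task;
	}

	int main(void)
	{
		unsigned long imbalance = 512;

		/*
		 * Corner case: no throughput gain.  The old code jumped
		 * to out_balanced and the imbalance ended up zeroed;
		 * now it is left alone.
		 */
		small_imbalance_path(&imbalance, 1024, 2048, 2048);
		printf("%lu\n", imbalance);	/* prints 512 */
		return 0;
	}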
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
 kernel/sched.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index b533d6db78aa..c8759ec6d8a9 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2512,7 +2512,7 @@ group_next:
 	 * a think about bumping its value to force at least one task to be
 	 * moved
 	 */
-	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task) {
+	if (*imbalance < busiest_load_per_task) {
 		unsigned long tmp, pwr_now, pwr_move;
 		unsigned int imbn;
 
@@ -2564,10 +2564,8 @@ small_imbalance:
 		pwr_move /= SCHED_LOAD_SCALE;
 
 		/* Move if we gain throughput */
-		if (pwr_move <= pwr_now)
-			goto out_balanced;
-
-		*imbalance = busiest_load_per_task;
+		if (pwr_move > pwr_now)
+			*imbalance = busiest_load_per_task;
 	}
 
 	return busiest;