author		Peter Zijlstra <a.p.zijlstra@chello.nl>	2008-10-24 05:06:12 -0400
committer	Ingo Molnar <mingo@elte.hu>			2008-10-24 06:50:59 -0400
commit		01c8c57d668d94f1036d9ab11a22aa24ca16a35d
tree		a5bad4df146982e55bdd9dba73912f6bace036df	/kernel/sched.c
parent		8c82a17e9c924c0e9f13e75e4c2f6bca19a4b516
sched: fix a find_busiest_group buglet
In one of the group load balancer patches:
commit 408ed066b11cf9ee4536573b4269ee3613bd735e
Author: Peter Zijlstra <a.p.zijlstra@chello.nl>
Date: Fri Jun 27 13:41:28 2008 +0200
Subject: sched: hierarchical load vs find_busiest_group
The following change:
- if (max_load - this_load + SCHED_LOAD_SCALE_FUZZ >=
+ if (max_load - this_load + 2*busiest_load_per_task >=
busiest_load_per_task * imbn) {
made the condition always true, because imbn is in [1,2].
Therefore, remove the 2* and give it a fair chance (a standalone sketch
below the sign-offs illustrates why the old test could never fail).
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
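
To make the changelog's arithmetic concrete: in this branch max_load >= this_load,
so the old left-hand side is at least 2*busiest_load_per_task, which already covers
the largest possible right-hand side (busiest_load_per_task * imbn with imbn <= 2).
The following is a minimal standalone userspace sketch, not kernel code; the load
values are made-up example numbers chosen only to show one case where the patched
test can reject what the old test always accepted.

#include <stdio.h>

/*
 * Standalone sketch, not kernel code.  The load values below are
 * arbitrary examples; the assumptions match the changelog: imbn is
 * either 1 or 2, and max_load >= this_load in the small_imbalance path.
 */
int main(void)
{
	unsigned long busiest_load_per_task = 1024;
	unsigned long max_load = 2560, this_load = 2048;	/* load gap of 512 */

	for (unsigned long imbn = 1; imbn <= 2; imbn++) {
		/* old test: lhs >= 2*blpt >= blpt*imbn, so it is always true */
		int old_test = max_load - this_load + 2*busiest_load_per_task >=
				busiest_load_per_task * imbn;
		/* patched test: can be false, e.g. here when imbn == 2 */
		int new_test = max_load - this_load + busiest_load_per_task >=
				busiest_load_per_task * imbn;
		printf("imbn=%lu old=%d new=%d\n", imbn, old_test, new_test);
	}
	return 0;
}

Built with any C compiler, this prints old=1 for both imbn values but new=0 for
imbn == 2, matching the changelog's point that the pre-patch condition could not fail.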
Diffstat (limited to 'kernel/sched.c')
 kernel/sched.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 6625c3c4b10d..12bc367d9241 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3344,7 +3344,7 @@ small_imbalance:
 		} else
 			this_load_per_task = cpu_avg_load_per_task(this_cpu);
 
-		if (max_load - this_load + 2*busiest_load_per_task >=
+		if (max_load - this_load + busiest_load_per_task >=
 					busiest_load_per_task * imbn) {
 			*imbalance = busiest_load_per_task;
 			return busiest;