| field | value | date |
|---|---|---|
| author | Peter Zijlstra <a.p.zijlstra@chello.nl> | 2008-06-27 07:41:25 -0400 |
| committer | Ingo Molnar <mingo@elte.hu> | 2008-06-27 08:31:38 -0400 |
| commit | 039a1c41b3a489e34593ea1e1687f6fdad6b13ab (patch) | |
| tree | 162e9b0b7a2472be292089bf4f84623bdc6f34ab /kernel/sched.c | |
| parent | 3e5459b4bea3ca2618cc02d56d12639f2cba531d (diff) | |
sched: fix sched_balance_self() smp group balancing
Finding the least idle cpu is more accurate when done with updated shares.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/sched.c')
-rw-r--r--  kernel/sched.c  |  3 +++
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index cdd09462fc9..39d5495540d 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2128,6 +2128,9 @@ static int sched_balance_self(int cpu, int flag)
 		sd = tmp;
 	}
 
+	if (sd)
+		update_shares(sd);
+
 	while (sd) {
 		cpumask_t span, tmpmask;
 		struct sched_group *group;
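The patch refreshes the share numbers for the chosen sched domain before the idlest-group walk. As a rough illustration of why that ordering matters, here is a self-contained toy program: it is not kernel code, and the struct, the numbers, and the simplified update_shares()/find_idlest_cpu() helpers below are invented for the example. It shows how comparing stale share values can pick a CPU that is no longer the least loaded, while refreshing the values first gives the right answer.

```c
/*
 * Toy illustration (not kernel code) of why shares are refreshed
 * before searching for the idlest CPU: stale cached weights can
 * make the wrong CPU look least busy.
 */
#include <stdio.h>

#define NCPUS 4

struct cpu_state {
	int nr_tasks;	/* tasks currently queued on this CPU */
	int share;	/* cached share/weight value, possibly stale */
};

/* recompute the cached share from the live task count */
static void update_shares(struct cpu_state *cpus, int n)
{
	for (int i = 0; i < n; i++)
		cpus[i].share = cpus[i].nr_tasks * 1024;
}

/* pick the CPU with the smallest cached share */
static int find_idlest_cpu(const struct cpu_state *cpus, int n)
{
	int best = 0;
	for (int i = 1; i < n; i++)
		if (cpus[i].share < cpus[best].share)
			best = i;
	return best;
}

int main(void)
{
	/* cached shares still reflect an older task distribution */
	struct cpu_state cpus[NCPUS] = {
		{ .nr_tasks = 1, .share = 3 * 1024 },	/* was busy, now nearly idle */
		{ .nr_tasks = 3, .share = 1 * 1024 },	/* was idle, now busy */
		{ .nr_tasks = 2, .share = 2 * 1024 },
		{ .nr_tasks = 2, .share = 2 * 1024 },
	};

	/* stale data: picks CPU 1, which actually has the most tasks */
	printf("stale shares -> idlest cpu = %d\n", find_idlest_cpu(cpus, NCPUS));

	/* what the patch does conceptually before the search */
	update_shares(cpus, NCPUS);
	printf("fresh shares -> idlest cpu = %d\n", find_idlest_cpu(cpus, NCPUS));
	return 0;
}
```

Broadly, that is the role of the added update_shares(sd) call: it refreshes the group share accounting over the domain's span so the while (sd) loop that follows compares current rather than stale load figures.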
