author	Rik van Riel <riel@redhat.com>	2014-06-08 16:55:57 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2014-06-08 17:35:05 -0400
commit	1662867a9b2574bfdb9d4e97186aa131218d7210
tree	6c8408360b9dceecad772dee5b9152a1515cf401 /kernel/sched
parent	b738d764652dc5aab1c8939f637112981fce9e0e
numa,sched: fix load_too_imbalanced logic inversion
This function is supposed to return true if the new load imbalance is
worse than the old one. It didn't. I can only hope brown paper bags
are in style.
Now things converge much better on both the 4-node and 8-node systems.
I am not sure why this did not seem to impact specjbb performance on the
4-node system, which is the system I have full-time access to.
This bug was introduced recently, with commit e63da03639cc ("sched/numa:
Allow task switch if load imbalance improves").
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'kernel/sched')
-rw-r--r--	kernel/sched/fair.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 17de1956ddad..9855e87d671a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1120,7 +1120,7 @@ static bool load_too_imbalanced(long orig_src_load, long orig_dst_load,
 	old_imb = orig_dst_load * 100 - orig_src_load * env->imbalance_pct;
 
 	/* Would this change make things worse? */
-	return (old_imb > imb);
+	return (imb > old_imb);
 }
 
 /*
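For readers tracing the logic, a hypothetical, simplified C sketch of the comparison the patch corrects follows. The function name, signature, and standalone form are illustrative only; this is not the kernel's load_too_imbalanced(), which works on a struct task_numa_env and performs additional normalization and threshold checks before this final comparison.

#include <stdbool.h>

/*
 * Simplified, hypothetical stand-in for the fixed check.
 * Not the kernel's load_too_imbalanced(); see the note above
 * for what is omitted.
 */
static bool move_makes_imbalance_worse(long orig_src_load, long orig_dst_load,
				       long src_load, long dst_load,
				       int imbalance_pct)
{
	/* Imbalance after the proposed task move. */
	long imb = dst_load * 100 - src_load * imbalance_pct;

	/* Imbalance as things stand before the move. */
	long old_imb = orig_dst_load * 100 - orig_src_load * imbalance_pct;

	/*
	 * True only when the move would leave the loads MORE imbalanced
	 * than they already are.  The pre-fix code compared the operands
	 * the other way around (old_imb > imb), so improving moves were
	 * rejected and worsening moves were allowed through.
	 */
	return imb > old_imb;
}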