author		Siddha, Suresh B <suresh.b.siddha@intel.com>	2006-12-10 05:20:33 -0500
committer	Linus Torvalds <torvalds@woody.osdl.org>	2006-12-10 12:55:43 -0500
commit		783609c6cb4eaa23f2ac5c968a44483584ec133f (patch)
tree		678704bab2c69f5115ad84452e931adf4c11f3f4 /include
parent		b18ec80396834497933d77b81ec0918519f4e2a7 (diff)
[PATCH] sched: decrease number of load balances
Currently, at a particular domain, each cpu in the sched group will do a
load balance at the frequency of balance_interval.  The more cores and
threads there are, the more cpus there will be in each sched group at the
SMP and NUMA domains, and we end up spending quite a bit of time doing
load balancing in those domains.

Fix this by making only one cpu (the first idle cpu, or the first cpu in
the group if all the cpus are busy) in the sched group do the load balance
at that particular sched domain, and let this load slowly percolate down
to the other cpus within that group (when they do load balancing at lower
domains).

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Christoph Lameter <clameter@engr.sgi.com>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
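The selection scheme is easy to state in isolation: within each sched
group, elect one balancing cpu (the first idle one, or the first cpu if
all are busy), and have every other cpu skip the balance pass at this
domain.  Below is a minimal user-space sketch of that election, assuming
a group modeled as a plain array of cpu ids and a hypothetical
cpu_is_idle() stand-in for the kernel's idle_cpu(); the real logic lives
in kernel/sched.c (not shown in this 'include'-limited diffstat) and
differs in detail.

/*
 * Minimal sketch of the balancing-cpu election described above --
 * NOT the kernel's implementation.  cpu_is_idle() is a hypothetical
 * predicate, driven here by a fixed demo table.
 */
#include <stdbool.h>
#include <stdio.h>

static bool cpu_is_idle(int cpu)
{
	static const bool idle[] = { false, false, true, false };
	return cpu >= 0 && cpu < 4 && idle[cpu];
}

/*
 * Pick the one cpu in the group that should load balance at this
 * domain: the first idle cpu, or the first cpu if all are busy.
 */
static int pick_balance_cpu(const int *group, int n)
{
	for (int i = 0; i < n; i++)
		if (cpu_is_idle(group[i]))
			return group[i];	/* first idle cpu wins */
	return group[0];			/* all busy: first cpu */
}

int main(void)
{
	int group[] = { 0, 1, 2, 3 };	/* one SMP-level sched group */

	/*
	 * Only the elected cpu runs the balance pass at this domain;
	 * the other three skip it, which is the saving the patch is
	 * after.  Here cpu2 is the first idle cpu, so it is elected.
	 */
	for (int cpu = 0; cpu < 4; cpu++)
		printf("cpu%d: %s\n", cpu,
		       cpu == pick_balance_cpu(group, 4)
			       ? "do load balance" : "skip");
	return 0;
}

The saving grows with group size: in a group of N cpus, N-1 balance
passes per balance_interval are skipped at that domain.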
Diffstat (limited to 'include')
-rw-r--r--	include/linux/sched.h	| 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index ea92e5c89089..72d6927d29ed 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -707,6 +707,7 @@ struct sched_domain {
 	unsigned long lb_hot_gained[MAX_IDLE_TYPES];
 	unsigned long lb_nobusyg[MAX_IDLE_TYPES];
 	unsigned long lb_nobusyq[MAX_IDLE_TYPES];
+	unsigned long lb_stopbalance[MAX_IDLE_TYPES];
 
 	/* Active load balancing */
 	unsigned long alb_cnt;
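Judging by its name and the commit message, the new lb_stopbalance
counter (one slot per idle type, alongside the other schedstats fields
in struct sched_domain) presumably records how often a cpu stopped a
balance pass at this domain because it was not the group's elected
balancing cpu.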