author | Mike Galbraith <efault@gmx.de> | 2010-03-11 11:15:51 -0500
---|---|---
committer | Ingo Molnar <mingo@elte.hu> | 2010-03-11 12:32:50 -0500
commit | e12f31d3e5d36328c7fbd0fce40a95e70b59152c (patch) |
tree | 3eaee7fede5ba830395d2e527fdfe60f1aba73f4 /include/linux |
parent | b42e0c41a422a212ddea0666d5a3a0e3c35206db (diff) |
sched: Remove avg_overlap
Both avg_overlap and avg_wakeup had an inherent problem: their accuracy was degraded by cross-cpu wakeups, because the necessary call to update_curr() is missing on that path. This can't be fixed without adding overhead to our already too fat fastpath.
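To make the dependency on update_curr() concrete, below is a minimal, self-contained sketch of the kind of exponentially-weighted average these fields maintained. The type and helper names (overlap_stats, update_avg(), sample_overlap()) and the 1/8 weighting are illustrative assumptions; the removed kernel code itself does not appear in this diff.

```c
/* Illustrative sketch only -- not the removed kernel code. */
typedef unsigned long long u64;
typedef long long s64;

struct overlap_stats {			/* stand-in for the removed fields */
	u64 last_wakeup;
	u64 avg_overlap;
	u64 sum_exec_runtime;
};

/* EWMA helper of the kind such averages rely on (1/8 weight assumed) */
static void update_avg(u64 *avg, u64 sample)
{
	s64 diff = (s64)(sample - *avg);

	*avg += diff >> 3;
}

/*
 * The sample is only meaningful if sum_exec_runtime was just refreshed,
 * i.e. update_curr() ran.  On a cross-cpu wakeup that refresh is skipped,
 * so the sample -- and therefore the average -- drifts.
 */
static void sample_overlap(struct overlap_stats *st)
{
	update_avg(&st->avg_overlap, st->sum_exec_runtime - st->last_wakeup);
}
```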
Additionally, with recent load-balancing changes making us prefer to place tasks in an idle cache domain (which is good for compute-bound loads), communicating tasks suffer when a sync wakeup, which would enable affine placement, is turned into a non-sync wakeup by SYNC_LESS. With one task on the runqueue, wake_affine() rejects the affine wakeup request, leaving the unfortunate wakee where it was placed, taking frequent cache misses.
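The sketch below is only a schematic of the placement decision described in the previous paragraph; the names (rq_sketch, affine_wakeup_allowed()) are illustrative stand-ins, not the scheduler's real wake_affine()/SYNC_LESS code.

```c
/* Schematic of the placement decision described above (simplified). */
struct rq_sketch {
	unsigned int nr_running;	/* tasks currently on this runqueue */
};

static int affine_wakeup_allowed(const struct rq_sketch *waker_rq, int sync)
{
	/*
	 * Once a SYNC_LESS-style heuristic strips the sync hint, the waker
	 * is no longer discounted from its runqueue's load, so even a
	 * single runnable task is enough to reject affine placement and
	 * leave the wakee where it was, far from the waker's cache.
	 */
	if (!sync && waker_rq->nr_running >= 1)
		return 0;		/* wakee stays put */

	return 1;			/* pull wakee next to the waker */
}
```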
Remove avg_overlap, and recover some fastpath cycles.
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1268301121.6785.30.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'include/linux')
-rw-r--r-- | include/linux/sched.h | 3
1 file changed, 0 insertions(+), 3 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 70c560f5ada0..8604884cee87 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1180,9 +1180,6 @@ struct sched_entity {
 	u64			vruntime;
 	u64			prev_sum_exec_runtime;
 
-	u64			last_wakeup;
-	u64			avg_overlap;
-
 	u64			nr_migrations;
 
 #ifdef CONFIG_SCHEDSTATS
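For reference, the affected region of struct sched_entity after this patch, read off the right-hand side of the hunk above (members outside the hunk are elided):

```c
struct sched_entity {
	/* ... fields before the hunk elided ... */
	u64			vruntime;
	u64			prev_sum_exec_runtime;

	u64			nr_migrations;

#ifdef CONFIG_SCHEDSTATS
	/* ... schedstats fields elided ... */
#endif
	/* ... remaining fields elided ... */
};
```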