| author | Mike Galbraith <efault@gmx.de> | 2010-03-11 11:15:38 -0500 |
|---|---|---|
| committer | Ingo Molnar <mingo@elte.hu> | 2010-03-11 12:32:50 -0500 |
| commit | b42e0c41a422a212ddea0666d5a3a0e3c35206db (patch) | |
| tree | 443cf5918548cab86c3f9f3f34a1b700d809070b /kernel/sched_debug.c | |
| parent | 39c0cbe2150cbd848a25ba6cdb271d1ad46818ad (diff) | |
sched: Remove avg_wakeup
Testing the load which led to this heuristic (nfs4 kbuild) shows that it has
outlived its usefulness. With intervening load balancing changes, I cannot
see any difference with/without, so recover those fastpath cycles.
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1268301062.6785.29.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/sched_debug.c')
-rw-r--r-- | kernel/sched_debug.c | 1 - |
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/kernel/sched_debug.c b/kernel/sched_debug.c
index ad9df4422763..20b95a420fec 100644
--- a/kernel/sched_debug.c
+++ b/kernel/sched_debug.c
@@ -408,7 +408,6 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
 	PN(se.vruntime);
 	PN(se.sum_exec_runtime);
 	PN(se.avg_overlap);
-	PN(se.avg_wakeup);
 
 	nr_switches = p->nvcsw + p->nivcsw;
 