author	Peter Zijlstra <a.p.zijlstra@chello.nl>	2008-05-03 12:29:28 -0400
committer	Ingo Molnar <mingo@elte.hu>	2008-05-05 17:56:18 -0400
commit	3e51f33fcc7f55e6df25d15b55ed10c8b4da84cd (patch)
tree	3752f9ea8e014ec40e95a1b197b0a3d18e1056a8 /kernel/sched_fair.c
parent	a5574cf65b5f03ce9ade3918764fe22e5e2371e3 (diff)
sched: add optional support for CONFIG_HAVE_UNSTABLE_SCHED_CLOCK
this replaces the rq->clock stuff (and possibly cpu_clock()).
- architectures that have an 'imperfect' hardware clock can set
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK
- the 'jiffie' window might be superfluous when we update tick_gtod
  before the __update_sched_clock() call in sched_clock_tick()
- cpu_clock() might be implemented as:
sched_clock_cpu(smp_processor_id())
  if the accuracy proves good enough - how far can TSC drift in a
  single jiffy when considering the filtering and idle hooks?
[ mingo@elte.hu: various fixes and cleanups ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/sched_fair.c')
-rw-r--r--	kernel/sched_fair.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index d99e01f6929a..c863663d204d 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -959,7 +959,7 @@ static void yield_task_fair(struct rq *rq)
 		return;
 
 	if (likely(!sysctl_sched_compat_yield) && curr->policy != SCHED_BATCH) {
-		__update_rq_clock(rq);
+		update_rq_clock(rq);
 		/*
 		 * Update run-time statistics of the 'current'.
 		 */