author | Paulo Marques <pmarques@grupopie.com> | 2005-05-05 19:15:49 -0400
---|---|---
committer | Linus Torvalds <torvalds@ppc970.osdl.org> | 2005-05-05 19:36:41 -0400
commit | b7e4e85337060354f8b860cc38066725559313a4 (patch) |
tree | bdb958c6002fee2d73ed51e78d71dc663d2ad297 /arch/alpha/lib/srm_printk.c |
parent | f0fbd5fc09b20f7ba7bc8c80be33e39925bb38e1 (diff) |
[PATCH] setitimer timer expires too early
It seems that the code responsible for this is in kernel/itimer.c:126:
p->signal->real_timer.expires = jiffies + interval;
add_timer(&p->signal->real_timer);
If you request an interval of, let's say, 900 usecs, the interval given by
timeval_to_jiffies() will be 1 jiffy.
If you request this when we are half-way between two timer ticks, the
actual interval will only be about 400 usecs.
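To make the numbers concrete, here is a rough user-space sketch of that
arithmetic (my own illustration, assuming HZ=1000 so one tick is 1000 usecs
and that timeval_to_jiffies() rounds up; the constants are not taken from
the patch):

    #include <stdio.h>

    #define HZ          1000
    #define TICK_USECS  (1000000 / HZ)

    int main(void)
    {
        long requested_usecs = 900;
        /* timeval_to_jiffies() rounds up, so 900 usecs becomes 1 jiffy */
        long interval = (requested_usecs + TICK_USECS - 1) / TICK_USECS;

        /* armed 600 usecs into the current tick, the timer fires at the
           next tick edge, i.e. only 400 usecs later */
        long usecs_into_tick = 600;
        long actual_usecs = interval * TICK_USECS - usecs_into_tick;

        printf("requested %ld usecs, actually waited %ld usecs\n",
               requested_usecs, actual_usecs);   /* 900 vs. 400 */
        return 0;
    }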
If we want to guarantee that we never ever give intervals less than
requested, the simple solution would be to change that to:
p->signal->real_timer.expires = jiffies + interval + 1;
This, however, produces pathological cases: on an idle system, requesting
1 ms timeouts will systematically give 2 ms timeouts, whereas currently it
simply gives a few usecs less than 1 ms.
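To spell out that case with the same illustrative constants (again assuming
HZ=1000, so a 1 ms request maps to 1 jiffy):

    #include <stdio.h>

    #define HZ          1000
    #define TICK_USECS  (1000000 / HZ)

    int main(void)
    {
        long interval = 1;          /* 1 ms request == 1 jiffy */
        long usecs_into_tick = 5;   /* idle box: re-armed just after a tick */

        /* today: expires = jiffies + interval, a few usecs short of 1 ms */
        long current_usecs = interval * TICK_USECS - usecs_into_tick;
        /* with "+ 1": expires = jiffies + interval + 1, almost 2 ms */
        long padded_usecs = (interval + 1) * TICK_USECS - usecs_into_tick;

        printf("current: %ld usecs, with the +1 fix: %ld usecs\n",
               current_usecs, padded_usecs);     /* 995 vs. 1995 */
        return 0;
    }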
The complex (and more computationally expensive) solution would be to
check the gettimeofday() time and compute the correct number of jiffies
from the sub-tick offset.  This way, if we request a 300 usec timer when
we are 200 usecs into the current tick, we can wait just one tick, but not
if we are already 800 usecs into the tick.  This would also mean that we
would have to disable preemption during these computations to avoid races,
etc.
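For reference, the complex variant would boil down to a ceiling computation
along these lines (a hypothetical sketch, not part of the attached patch;
usecs_into_tick would have to come from gettimeofday() or similar, sampled
with preemption disabled so it stays consistent with jiffies):

    #define HZ          1000
    #define TICK_USECS  (1000000 / HZ)

    /*
     * Hypothetical helper: smallest number of ticks whose edge is at least
     * requested_usecs away, given how far we already are into the current
     * tick.  The upcoming tick edges lie at TICK_USECS - usecs_into_tick,
     * 2*TICK_USECS - usecs_into_tick, ... from now.
     */
    static long jiffies_for_interval(long requested_usecs, long usecs_into_tick)
    {
        return (requested_usecs + usecs_into_tick + TICK_USECS - 1) / TICK_USECS;
    }

    /* e.g. jiffies_for_interval(300, 200) == 1 and
       jiffies_for_interval(300, 800) == 2, matching the two cases above */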
I've searched the archives but couldn't find this particular issue being
discussed before.
Attached is a patch implementing the simple solution, in case anybody
thinks it should be used.
Signed-off-by: Paulo Marques <pmarques@grupopie.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'arch/alpha/lib/srm_printk.c')
0 files changed, 0 insertions, 0 deletions