| Commit message | Author | Age |
|
If a timer happens to go off at the wrong time on the wrong CPU, the old
code could deadlock. This is avoided in the new version by dropping the
release_lock before using the hrtimer API.
|
This fixes a bug observed under G-EDF:
A task with an extremely low WCET budget could get scheduled
and re-added to its own release heap while that heap was still
in use by the very timer callback that had released the task.
Havoc, predictably, ensued.
|
Don't just blindly overwrite the timers if they could
still be in use. The use-after-free bug was observed
under QEMU.
|
If the timeout was really close and the timing unfortunate, the
timer callback got invoked twice, with oopsing consequences.
|
A release master is a CPU that takes all release timer interrupts
for a given rt_domain_t. The feature is off by default.
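Conceptually, the release path first checks whether the current CPU is the
designated release master before arming the timer locally. The sketch below is
illustrative only: the field name release_master, the NO_CPU sentinel, and the
hand-off mechanism are assumptions, not the actual interface.

    /* Sketch: arm the release timer only on the release master CPU. */
    static void arm_release_timer_on_master(rt_domain_t *rt, ktime_t when)
    {
            int master = rt->release_master;    /* assumed field name */

            if (master != NO_CPU && master != smp_processor_id()) {
                    /* hand off to the release master (e.g., via an IPI);
                     * the mechanism is elided in this sketch */
                    return;
            }
            hrtimer_start(&rt->release_timer, when, HRTIMER_MODE_ABS);
    }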
|
Don't multiplex one timer among all release heaps.
The hrtimer subsystem can handle many timers and is
heavily optimized; make use of this fact.
This also greatly simplifies the actual callback, which
should help to bring down release overheads.
This also saves memory as we do not need to maintain a separate
heap of (release) heaps.
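Roughly, the new shape looks like the sketch below; the struct and field names
are illustrative, not the exact LITMUS^RT definitions. The point is that each
group of jobs sharing a release time owns its timer, so nothing is multiplexed.

    /* Illustrative sketch: one hrtimer embedded in each release heap. */
    struct release_heap_sketch {
            ktime_t        release_time;  /* common release time of these jobs */
            struct bheap   jobs;          /* the to-be-released jobs */
            struct hrtimer timer;         /* fires at release_time */
    };

    static enum hrtimer_restart on_release(struct hrtimer *timer)
    {
            struct release_heap_sketch *rh =
                    container_of(timer, struct release_heap_sketch, timer);

            /* hand rh->jobs to the plugin in a single call (elided here) */
            return HRTIMER_NORESTART;
    }

    static void arm_release(struct release_heap_sketch *rh)
    {
            hrtimer_init(&rh->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
            rh->timer.function = on_release;
            hrtimer_start(&rh->timer, rh->release_time, HRTIMER_MODE_ABS);
    }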
|
This fixes a bunch of bugs:
- a memory leak (some heap nodes were never reclaimed)
- a locking rules violation (memory allocation can wake up kswapd,
  which causes a circular locking dependency between the release queue lock
  and the run queue locks)
Also, allocating memory in a hot path was never particularly clever.
This patch pre-allocates memory at task-creation time, but does not change
how the release queue works overall.
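The resulting allocation pattern looks roughly like the sketch below; tsk_rt()
and rel_node stand in for the per-task real-time state and are assumed names.
The node is allocated with GFP_KERNEL during task admission, where sleeping is
harmless, and only linked into the release queue later.

    /* Sketch: pre-allocate the per-task release-queue node up front. */
    static long admit_task_sketch(struct task_struct *t)
    {
            /* GFP_KERNEL is safe here: no run-queue or release-queue
             * locks are held during task admission */
            tsk_rt(t)->rel_node =
                    kmalloc(sizeof(*tsk_rt(t)->rel_node), GFP_KERNEL);
            if (!tsk_rt(t)->rel_node)
                    return -ENOMEM;
            return 0;
    }

    static void exit_task_sketch(struct task_struct *t)
    {
            kfree(tsk_rt(t)->rel_node);
            tsk_rt(t)->rel_node = NULL;
    }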
|
In the infrequent case that arm_release_timer() is called
while the timer callback is executing on the same CPU
(e.g., because a timer tick occurred), hrtimer_cancel() will
get stuck in an infinite loop. This can be avoided by using
hrtimer_try_to_cancel() instead.
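The difference, sketched with illustrative names (release_timer, and the policy
chosen for the -1 case): hrtimer_cancel() spins until the callback has
completed, which can never happen if we are the CPU running that callback,
while hrtimer_try_to_cancel() merely reports the situation.

    static void rearm_release_timer_sketch(rt_domain_t *rt, ktime_t next)
    {
            int ret;

            /* hrtimer_try_to_cancel() returns 0 if the timer was not queued,
             * 1 if it was queued and is now cancelled, and -1 if its callback
             * is running right now (possibly on this very CPU). */
            ret = hrtimer_try_to_cancel(&rt->release_timer);
            if (ret != -1)
                    hrtimer_start(&rt->release_timer, next, HRTIMER_MODE_ABS);
            /* if ret == -1, leave the already-running callback to pick up
             * the newly queued releases (one possible policy) */
    }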
|
    struct x* p = kmalloc(sizeof(struct x), ...)
becomes
    struct x* p = kmalloc(sizeof(*p), ...)
for example.
|
This provides and hooks up a new made-from-scratch sched_trace()
implementation based on Feather-Trace and ftdev.
|
This patch fixes some minor issues that inadvertently crept in during
development. Found in John's review.
|
Instead of having hrtimers for each task, we now have only
one hrtimer per rt_domain. To-be-released tasks are grouped into
mergeable heaps and presented to the plugin as a batch on each release.
This approach should allow us to reduce the worst-case overhead
at hyperperiod boundaries significantly.
1) less hrtimer overhead
2) fewer calls to the active plugin
3) only one CPU handles releases at a time
4) (2) & (3) should bring down lock contention significantly
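Sketched in code with illustrative identifiers (release_timer, release_lock,
release_jobs), the single per-domain timer hands a whole heap of jobs to the
plugin per expiry:

    static enum hrtimer_restart release_timer_sketch(struct hrtimer *timer)
    {
            rt_domain_t *rt = container_of(timer, rt_domain_t, release_timer);
            struct bheap due;       /* jobs whose release time has passed */
            unsigned long flags;

            spin_lock_irqsave(&rt->release_lock, flags);
            /* initialize 'due' and merge every release heap that is
             * due into it (elided) */
            spin_unlock_irqrestore(&rt->release_lock, flags);

            /* one call into the active plugin for the whole batch */
            rt->release_jobs(rt, &due);

            return HRTIMER_NORESTART;
    }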
|
- TICK
- SCHED1
- SCHED2
- RELEASE
- CXS
|
The list-based priority queues did not perform well on the Niagara T2000.
This heap-based implementation should perform much faster when queues
are long.
|
As discovered by John, the old way of initializing the spin locks
breaks when rt_domains are allocated dynamically.
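The usual fix for this class of problem, sketched here as an assumption about
what the patch does rather than a quote of it, is to initialize the embedded
locks at run time with spin_lock_init(), since static initializers only work
for locks with static storage:

    /* Sketch: set up the locks when the rt_domain itself is initialized. */
    void rt_domain_init_locks_sketch(rt_domain_t *rt)
    {
            spin_lock_init(&rt->ready_lock);
            spin_lock_init(&rt->release_lock);
            /* ... remaining rt_domain initialization elided ... */
    }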
|
On the T2000, GSN-EDF would eventually deadlock. The cause was a lock-ordering
cycle involving the release and ready locks of its rt_domain.
Usually, the dependency in GSN-EDF is:
function                held      -> required
=============================================
enqueue:                ready     -> release
gsnedf_job_release:               -> ready
arm_release_timer:                -> release
So far so good. The problem arose when hrtimer detected that the timer being
armed had already expired (it must have been by just a few nanoseconds) and
decided to call the callback directly WHILE HOLDING THE RELEASE LOCK, which
led to the following result:
function                held      -> required    CPU
=====================================================
enqueue:                ready     -> release     5
gsnedf_job_release:     release   -> ready       6
We avoid this problem now by dropping the release lock in arm_release_timer()
before calling the job release callback.
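In rough pseudocode (identifiers and bookkeeping are illustrative), the fixed
ordering inside arm_release_timer() is:

    static void arm_release_timer_fixed_sketch(rt_domain_t *rt)
    {
            struct bheap due;
            unsigned long flags;

            spin_lock_irqsave(&rt->release_lock, flags);
            /* detect that the release time already lies in the past and
             * collect the affected jobs into 'due' (elided) */
            spin_unlock_irqrestore(&rt->release_lock, flags);

            /* the job release callback runs only after release_lock has
             * been dropped, so its ready-lock acquisition can no longer
             * nest inside release_lock and invert enqueue's ordering */
            rt->release_jobs(rt, &due);
    }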
|
Added one message and improved another.
|
We can't use tasklets from within the scheduler.
Use no_rqlock_work instead.
|
John's proposal for how to release jobs with hrtimers.
|
This introduces the core changes ported from LITMUS 2007.
The kernel seems to work under QEMU, but many bugs probably remain.
|