Commit log
This fixes a bunch of bugs:
- a memory leak (some heap nodes were never reclaimed)
- a locking rules violation (memory allocation can wake up kswapd,
  which causes a circular locking dependency between the release queue
  lock and the run queue locks)
Also, allocating memory in a hot path was never particularly clever.
This patch pre-allocates memory at task-creation time, but does not change
how the release queue works overall.
Prepare for centralized allocation of release heaps.
Avoid duplication.
Comparing a task against itself shouldn't
happen if everything works correctly.
Support for Global Quantum-Driven EDF
In the infrequent case that arm_release_timer() is called
while the timer callback is executing on the same CPU
(e.g., because a timer tick occurred), hrtimer_cancel() will
get stuck in an infinite loop. This can be avoided by using
hrtimer_try_to_cancel() instead.
Don't confuse the plugins by calling task_block()/wake_up()
when they wouldn't have been called without the race.
Other quantum-based plugins also require the present flag. Hence,
move it to the core data structure.
A simple noop system call to record kernel entry and exit
times.
struct x* p = kmalloc(sizeof(struct x), ...);
becomes
struct x* p = kmalloc(sizeof(*p), ...);
for example.
- better tracing
- cleaner release code
- add FIXME
- fix end of tick preempt logic
- make scheduler resilient against missed quanta
Quiets a warning.
1000 was limiting, since the synchronous release code
wants a one-second delay by default.
Much of the tracing is not needed anymore.
Used to correct for staggering in time->quanta computations.
The list traversal code was horribly broken.
The previous version did not get the release
time right in most cases.
-1 doesn't make sense for an unsigned field and screws up
the log viewer.
Give plugins a chance to set up state and clean up.
This is required to visualize the schedule.
The arguments were not printed correctly.
The reschedule test in scheduler_tick() was missing the case
when the CPU has to preempt a real-time task in favor of background
work.
This bug created unkillable tasks if the first event did not
match the event that was being disabled.
Not-yet-released jobs were not properly queued because of an overly complicated
and wrong requeue implementation. Found by visualizing sched_traces.
Using both sched_clock() and litmus_clock() leads to garbled traces.
Due to badly placed parentheses, the priority function
was not correct. This should fix it.
If we can't allocate heap nodes then we can't admit RT tasks.
This provides and hooks up a new made-from-scratch sched_trace()
implementation based on Feather-Trace and ftdev.
This is useful to deny opening per-cpu buffers if a CPU is not online.
Otherwise required buffers may not be present anymore.
This patch also fixes some minor initialization issues.
Much cleaner code now.
This will help to reduce code duplication in the long run.
We don't always need the file in the kernel.
This patch fixes some minor issues that inadvertently crept in during
development. Found in John's review.
We already have a heap_node slab in litmus.c. We can reuse it for the
other allocations.
This patch also fixes a misnaming of heap_node_alloc/free.
This should be faster on high CPU counts.
The list-based implementation was O(1) + O(N) for a position update.
The heap-based implementation is O(log N) + O(log N).
If the first argument to edf_higher_prio is NULL, then just
return 0, since it can't possibly have a higher priority than the
second argument.
Instead of having hrtimers for each task, we now have only
one hrtimer per rt_domain. To-be-released tasks are grouped in
mergeable heaps and presented as a batch on each release.
This approach should allow us to reduce the worst-case overhead
at hyperperiod boundaries significantly:
1) less hrtimer overhead
2) fewer calls to the active plugin
3) only one CPU handles releases at a time
4) (2) & (3) should bring down lock contention significantly
Avoid division by zero.