| Commit message | Author | Age |
|
move activate/deactivate hack to central place
remove runqueue knowledge from scheduler plugins
separate rt_param.h from accessor macros
|
Always cleanup stale real-time flags, job numbers, etc.
|
- restore old best-effort priority
- allow non-rt tasks to register a np-flag
|
This will be the basis for the new BE->RT->BE system call.
|
This approach should both make the scheduler signals list more flexible
and fix the locking dependency detected by lockdep.
|
use it to send weight change notifications
|
This concept is redundant with the per-service-level weights. Also fix
the get_sl() macro.
|
Not yet complete.
|
getting 64-bit to work is too much of a pain right now
|
Adds slope and intercept to adaptive tasks and setup code.
|
No use in supporting something that isn't used.
|
Introduces the fixed-point math header and the start of predictor support
in sched_adaptive.c.
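As a rough illustration of what such a fixed-point header provides (the type
name fp_t, FP_SHIFT, and the choice of 16 fractional bits are assumptions
made for this sketch, not necessarily the identifiers used in the actual
header):

    #include <stdint.h>

    /* Illustrative 32-bit fixed-point value with 16 fractional bits. */
    typedef int32_t fp_t;
    #define FP_SHIFT 16

    static inline fp_t int_to_fp(int32_t x) { return x << FP_SHIFT; }
    static inline int32_t fp_to_int(fp_t x) { return x >> FP_SHIFT; }

    static inline fp_t fp_mul(fp_t a, fp_t b)
    {
            /* widen only the intermediate product to 64 bits */
            return (fp_t)(((int64_t)a * b) >> FP_SHIFT);
    }

    static inline fp_t fp_div(fp_t a, fp_t b)
    {
            return (fp_t)(((int64_t)a << FP_SHIFT) / b);
    }

A predictor can then keep quantities such as its slope and intercept
estimates in this representation, since floating point is not available in
kernel code.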
|
Track error of allocation, prepare service level changes.
|
This patch adds fields for service levels to the rt_param struct.
|
This commit introduces the infrastructure for flag-based np sections. It
also features an overhauled GSN-EDF scheduler that respects the flags (and
has fewer bugs).
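The general idea behind flag-based non-preemptive (np) sections can be
sketched in user-space terms; the names np_flags, np_enter, and np_exit,
the struct layout, and the use of sched_yield() below are invented for this
illustration and are not the actual LITMUS interface:

    #include <sched.h>
    #include <stdatomic.h>

    /* Hypothetical per-task flag word shared with the scheduler
     * (e.g. registered with the kernel through a system call). */
    struct np_flags {
            atomic_int np;       /* set while the task is non-preemptive */
            atomic_int delayed;  /* set by the scheduler if it deferred a
                                  * preemption while np was set */
    };

    static void np_enter(struct np_flags *f)
    {
            atomic_store(&f->np, 1);
    }

    static void np_exit(struct np_flags *f)
    {
            atomic_store(&f->np, 0);
            /* if a preemption was deferred, give up the CPU now */
            if (atomic_exchange(&f->delayed, 0))
                    sched_yield();
    }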
|
Adds the notion of a job sequence number to the LITMUS per-task state.
This will allow userspace to refer to specific jobs.
|
Some minor cleanups and some additional comments.
|
- move struct pi_semaphore to a place where it makes more sense (so that it
  is not included everywhere)
- change semantics of scheduler plugin callbacks and add comments
- remove old unneeded code
- compile fixes
Note: The plugins don't actually work yet, since the semantics of the
callbacks have changed. That will be fixed in the next patch.
|
This change should fix the long-standing problem that certain I/O-intensive
workloads would "crash" the kernel. What really happened was that real-time
tasks caught an interrupt right before they would have called schedule()
as part of a suspension anyway. As they were already in a state other than
TASK_RUNNING, they were not requeued and got lost. If they were holding
important locks (such as a TTY lock or the BKL), then "bad things"
happened...
|
which needs to be expanded in the LSO and by adding system call numbers in
kernel space.
|
semaphore.
|
PI semaphores now use scheduler callbacks to update priority on a down or
up call. Issues with calling down/up without calling __down_failed or
__up_wakeup for PI semaphores have been noted (since priority-related work
occurred in those calls); up has been fixed (down is in progress).
Callbacks are now responsible for the priority checks and updates, and some
subtle cases are handled that were missed before. Still need to finish
handling GSN-EDF implementation issues related to suspensions.
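Conceptually, the semaphore code now delegates priority handling to a
callback table provided by the active scheduler plugin; the sketch below
only illustrates that shape, and the struct name, callback names, and
signatures are invented for the example rather than taken from the plugin
interface:

    struct task_struct;  /* kernel task type, declaration only */

    /* Invented callback table: on down(), a blocked waiter may force the
     * holder to inherit its priority; on up(), the holder's original
     * priority is restored and the next waiter is accounted for. */
    struct pi_sem_ops {
            void (*pi_block)(struct task_struct *waiter,
                             struct task_struct *holder);
            void (*pi_unblock)(struct task_struct *holder,
                               struct task_struct *next_waiter);
    };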
|
Bjoern's changes + mine. Still need to fill in some stubs and ensure that
nothing broke.
|
semaphores accessible through system calls, so that they can be used with
the LSO (for partitioned FMLP).
|
A user now makes a system call that specifies a semaphore ID, and the
semaphore implementation is hidden away in the kernel. There are a finite
number of semaphores, and the system call returns an error if it cannot
claim one; however, the user-space code in libso does not yet handle this
error.
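The kernel-side behavior described above amounts to handing out IDs from a
fixed table; in this sketch, MAX_SEMS, the table layout, and
claim_semaphore() are illustrative stand-ins rather than the actual
implementation:

    #include <errno.h>

    #define MAX_SEMS 256  /* illustrative limit, not the real one */

    static struct {
            int in_use;
            /* ... semaphore state ... */
    } sem_table[MAX_SEMS];

    /* Hand out a free semaphore ID, or fail if the fixed table is
     * exhausted; a caller such as libso must check for the error. */
    static int claim_semaphore(void)
    {
            int i;
            for (i = 0; i < MAX_SEMS; i++) {
                    if (!sem_table[i].in_use) {
                            sem_table[i].in_use = 1;
                            return i;
                    }
            }
            return -ENOMEM;
    }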
|
inheritance.
|
priority inheritance. Also fixed a few bugs. Many files were modified, as
the PI semaphores we are implementing replicate much of the original Linux
semaphore implementation with minor changes, which often caused a cascade
of changes as functions were chased down and changed in several files.
|
sys_wait_for_zone_exit waits on a flag which is cleared during the local
timer interrupt. Yet more race conditions have been avoided by performing
zone checks before waiting for the flag, and by setting the flag *before*
performing the zone check, so that if we enter the loop immediately after
leaving the blocking zone, we are still okay.
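The ordering that closes the race can be sketched as follows;
zone_exit_pending, in_blocking_zone(), and wait_a_bit() are placeholder
names for this illustration, not the kernel's actual symbols:

    #include <stdatomic.h>

    static atomic_int zone_exit_pending;

    extern int in_blocking_zone(void);  /* the zone check            */
    extern void wait_a_bit(void);       /* placeholder for blocking  */

    static void wait_for_zone_exit(void)
    {
            /* arm the flag first; the local timer interrupt clears it
             * once the blocking zone has been exited */
            atomic_store(&zone_exit_pending, 1);

            /* only then check the zone: if the zone was exited in the
             * meantime, the interrupt has already cleared the flag, so
             * the loop below cannot hang */
            if (!in_blocking_zone()) {
                    atomic_store(&zone_exit_pending, 0);
                    return;
            }

            while (atomic_load(&zone_exit_pending))
                    wait_a_bit();
    }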
|
* Our old clone flag is already taken in 2.6.20.
* Fix wrong is_running() macro.
* Remove double ->finish_switch() call.
* Move sched_trace_scheduled to a non-preemptible section.
* Allow next = idle task in RT mode.