| Commit message | Author | Age |
|
|
|
| |
We can disable it to improve performance when it is not needed.
|
| |
|
| |
|
|
|
|
| |
NP-Flag support appears to be broken at the moment on SPARC64.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This adds the PFAIR plugin. The implementation is based on an earlier version
developed by John Calandrino.
Features:
- supports aligned and staggered quanta
- supports early releasing
- uses mergeable heaps to limit overheads
This version still has a couple of limitations:
- no support for sporadic tasks; all tasks are assumed to be periodic
- tasks are limited to a maximum period of 1000 quanta
- overload (especially in the tick) is not handled gracefully
|
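The staggered-quanta feature mentioned above can be illustrated with a small sketch. This is an assumption about how staggering might be computed, not the actual PFAIR code: each CPU's quantum boundaries are shifted by a per-CPU offset so that not all CPUs process their scheduling tick at the same instant.

```c
#include <assert.h>

/* Illustrative sketch (not the actual PFAIR code): shift each CPU's
 * quantum boundaries by a per-CPU stagger offset so tick processing is
 * spread out across the quantum. Assumes now >= the stagger offset. */
long long next_quantum_boundary(long long now, int cpu, int ncpus,
                                long long quantum_len)
{
    long long stagger = (quantum_len * cpu) / ncpus; /* per-CPU shift */
    return ((now - stagger) / quantum_len + 1) * quantum_len + stagger;
}
```

With aligned quanta (cpu 0) the next boundary after time 300 is 1000; a staggered CPU sees its boundary shifted by its offset instead.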
|
|
|
|
| |
We probably won't get such a big buffer, but the allocation
code scales the request down if such a large memory chunk is not available.
|
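The scale-down behavior described above can be sketched as follows; the helper name and parameters are hypothetical, not the actual allocation code:

```c
#include <stdlib.h>
#include <assert.h>

/* Hypothetical sketch of the scale-down pattern: ask for a large buffer
 * and halve the request until the allocator can satisfy it (or until a
 * minimum useful size is reached). */
void *alloc_scaled_buffer(size_t want, size_t min, size_t *got)
{
    void *buf = NULL;
    while (want >= min && !(buf = malloc(want)))
        want /= 2;              /* scale the request down and retry */
    *got = buf ? want : 0;
    return buf;
}
```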
| |
|
|
|
|
|
| |
The default implementation contained a bug in a macro.
Also, add a cast to silence the compiler.
|
| |
|
| |
|
|\ |
|
| |
| |
| |
| |
| | |
- move platform dependent bits into arch/
- provide a default implementation (used for sparc64)
|
| |
| |
| |
| |
| |
| |
| |
| | |
- TICK
- SCHED1
- SCHED2
- RELEASE
- CXS
|
| | |
|
| | |
|
|/
|
|
| |
Without this change, a BUG_ON() in schedule() triggers.
|
|
|
|
|
|
| |
The list-based priority queues did not perform well on the Niagara T2000.
This heap-based implementation should be much faster when queues
are long.
|
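A minimal pairing-heap sketch (illustrative only, not the actual LITMUS^RT heap) shows why a mergeable heap beats a sorted list for long queues: merge, and therefore insert, is O(1), with the expensive restructuring deferred until extraction.

```c
#include <stddef.h>
#include <assert.h>

/* Minimal pairing-heap sketch; names are illustrative. */
struct hnode {
    int prio;                   /* smaller value = higher priority */
    struct hnode *child, *next;
};

static struct hnode *heap_merge(struct hnode *a, struct hnode *b)
{
    if (!a) return b;
    if (!b) return a;
    if (b->prio < a->prio) {    /* keep the smaller root on top */
        struct hnode *t = a; a = b; b = t;
    }
    b->next = a->child;         /* loser becomes first child of winner */
    a->child = b;
    return a;                   /* O(1): no traversal at all */
}

static struct hnode *heap_insert(struct hnode *h, struct hnode *n)
{
    n->child = n->next = NULL;
    return heap_merge(h, n);    /* O(1), unlike a sorted-list insert */
}
```

By contrast, inserting into a sorted list is O(n) per operation, which is exactly what hurts when queues grow long.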
| |
|
|\ |
|
| |
| |
| |
| |
| | |
PFAIR needs to do things a bit differently. This callback
will allow plugins to handle synchronous releases in their own way.
|
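Such a per-plugin hook might take the following shape; all names here are hypothetical stand-ins, not the actual plugin interface:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical shape of a per-plugin callback: a plugin that needs
 * special synchronous-release handling (such as PFAIR) supplies its own
 * function; other plugins can keep a default. */
struct task_struct;             /* opaque for this sketch */

struct sched_plugin {
    const char *plugin_name;
    void (*release_at)(struct task_struct *t, long long start);
};

static long long last_release;

static void pfair_release_at(struct task_struct *t, long long start)
{
    (void)t;
    last_release = start;       /* e.g. align the release to a quantum */
}

static struct sched_plugin pfair_plugin = {
    .plugin_name = "PFAIR",
    .release_at  = pfair_release_at,
};
```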
|/
|
|
| |
Make code more readable and shorter.
|
| |
|
|
|
|
|
|
|
| |
- don't deadlock on a runqueue lock in case of a bug inside
  the scheduler
- write printk() messages to TRACE()
- tell the user about the TRACE() buffer
|
|
|
|
|
|
| |
Also:
- add some memory barriers, to be on the safe side
- fix some line breaks
|
|
|
|
|
| |
As discovered by John, the old way of initializing the spin locks
breaks when rt_domains are allocated dynamically.
|
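The bug pattern is easy to reproduce in userspace with pthreads (the kernel code uses spinlocks and spin_lock_init() rather than mutexes): a static initializer can only cover locks whose storage exists at compile time, so a dynamically allocated rt_domain must initialize its locks at runtime. The struct layout below is a simplified assumption.

```c
#include <pthread.h>
#include <stdlib.h>
#include <assert.h>

/* Userspace analogue of the fix. A static initializer such as
 * PTHREAD_MUTEX_INITIALIZER cannot reach heap storage; dynamically
 * allocated domains must call the runtime init function instead. */
struct rt_domain {
    pthread_mutex_t ready_lock;
    pthread_mutex_t release_lock;
};

struct rt_domain *rt_domain_alloc(void)
{
    struct rt_domain *rt = malloc(sizeof(*rt));
    if (!rt)
        return NULL;
    pthread_mutex_init(&rt->ready_lock, NULL);   /* runtime init */
    pthread_mutex_init(&rt->release_lock, NULL);
    return rt;
}
```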
|
|
|
|
| |
- give TRACE() messages sequence numbers
- remove some old, unused cruft
|
|
|
|
|
|
| |
- remove outdated comment
- reorder stack_in_use marker to be in front of finish_lock_switch()
- add TRACE()s to try_to_wake_up()
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
On the T2000, GSN-EDF would eventually deadlock. This was caused by a deadlock
involving the release and ready locks of its rt_domain.
Usually, the dependency in GSN-EDF is:

function            held    -> required
=========================================
enqueue:            ready   -> release
gsnedf_job_release:         -> ready
arm_release_timer:          -> release

So far so good. The problem arose when hrtimer detected that the timer being
armed had already expired (must have been just a few nanoseconds) and decided
to call the callback directly WHILE HOLDING THE RELEASE LOCK. This led to the
following result:

function            held     -> required   CPU
====================================================
enqueue:            ready    -> release    5
gsnedf_job_release: release  -> ready      6

We avoid this problem now by dropping the release lock in arm_release_timer()
before calling the job release callback.
|
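The fix can be sketched in userspace with pthreads (the kernel code uses spinlocks, and all names below are illustrative): drop the release lock before invoking the job-release callback, so the callback may take the ready lock without closing the ready->release / release->ready cycle described above.

```c
#include <pthread.h>
#include <assert.h>
#include <stddef.h>

/* Sketch of the lock-ordering fix; illustrative names only. */
static pthread_mutex_t release_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t ready_lock   = PTHREAD_MUTEX_INITIALIZER;
static int released;

static void job_release_callback(void)
{
    /* Safe: release_lock is NOT held here, so taking ready_lock
     * cannot complete a ready->release / release->ready cycle. */
    pthread_mutex_lock(&ready_lock);
    released = 1;
    pthread_mutex_unlock(&ready_lock);
}

static void arm_release_timer(void (*expired_cb)(void))
{
    pthread_mutex_lock(&release_lock);
    /* ... arm the timer under the release lock ... */
    pthread_mutex_unlock(&release_lock);
    if (expired_cb)
        expired_cb();   /* already-expired case: call WITHOUT the lock */
}
```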
|
|
|
|
| |
This is modeled after tasklets. Just writing a 0 does not have
the desired effect on SPARC64.
|
| |
|
|
|
|
| |
For debugging, make relinking optimizations available in the TRACE() log.
|
|
|
|
| |
The plugins don't care about best-effort tasks. Don't bother them.
|
|
|
|
|
| |
This caused LITMUS^RT real-time tasks to be missed, which
in turn caused all kinds of inconsistent state.
|
|
|
|
| |
Make CPU state available to gdb.
|
|
|
|
| |
Blocking and preemptions must take precedence over forced job completions.
|
|
|
|
|
| |
This change fixes a race where a job could be executed on more than one
CPU, which led to random crashes.
|
|
|
|
| |
The forked task will be a best-effort task.
|
|
|
|
| |
Added one message and improved another.
|
|
|
|
| |
Don't place events in __init functions.
|
|
|
|
|
| |
We can't use tasklets from within the scheduler.
Use no_rqlock_work instead.
|
|
|
|
|
| |
Many things can't be done from within the scheduler.
This framework should make it easier to defer them.
|
|\ |
|
| | |
|
| |
| |
| |
| |
| | |
The old version had a significant window for races with
interrupt handlers executing on other CPUs.
|
| | |
|
|/
|
|
|
|
|
|
| |
A proper real-time migration works as follows:
1) transition to best-effort task
2) migrate to target CPU
3) update real-time parameters
4) transition to real-time task
|
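The four steps above can be sketched as a userspace sequence. The function names are hypothetical stand-ins (stubbed here so the ordering can be checked), not a real API; the point is that real-time parameters may only change while the task is best-effort.

```c
#include <assert.h>
#include <string.h>

/* Stubbed sketch of the migration protocol; each stub records a letter
 * so the required ordering is visible. All names are hypothetical. */
static char step_log[8];

static void become_best_effort(void)          { strcat(step_log, "B"); }
static void migrate_to_cpu(int cpu)           { (void)cpu; strcat(step_log, "M"); }
static void set_rt_params(int wcet, int period)
{
    (void)wcet; (void)period;
    strcat(step_log, "P");
}
static void become_real_time(void)            { strcat(step_log, "R"); }

static void rt_migrate(int cpu, int wcet, int period)
{
    become_best_effort();        /* 1) transition to best-effort     */
    migrate_to_cpu(cpu);         /* 2) migrate to the target CPU     */
    set_rt_params(wcet, period); /* 3) update real-time parameters   */
    become_real_time();          /* 4) transition back to real-time  */
}
```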
|
|
|
|
| |
sched_clock() is offset from litmus_clock(), so
jobs were being delayed for long times.
|
|
|
|
|
|
|
|
|
|
|
|
| |
This replicates 1a229e67e4cf08a539fc558d017a95ff03705ac5 for sparc64.
This is what happens:
smp_send_reschedule()
-> smp_receive_signal() (smp.c)
~~> IPI
~~> xcall_receive_signal (ultra.S)
~~> soft interrupt (ttable.S)
-> smp_receive_signal_client (smp.c)
-> set_tsk_need_resched()
|
|
|
| |
Add LITMUS^RT syscalls to the syscall jump table and unistd.h.
|
| |
|