Adapt to new schema for spinlock:
(tglx 20091217)
spinlock - the weakest one, which might sleep in RT
raw_spinlock - spinlock which always spins even on RT
arch_spinlock - the hardware level architecture dependent implementation
----
Most probably, all the spinlocks changed by this commit will be true
spinning locks (raw_spinlock) in PreemptRT (so hopefully we will need few
changes when porting LITMUS^RT to PreemptRT).
There are a couple of spinlocks that the kernel still defines as
spinlock_t (therefore no changes are reported in this commit) and that
might cause us trouble:
- wait_queue_t lock is defined as spinlock_t; it is used in:
* fmlp.c -- sem->wait.lock
* sync.c -- ts_release.wait.lock
- rwlock_t used in the FIFO implementation in sched_trace.c
 * this probably needs to be changed to something that always spins in RT,
   at the expense of increased locking time.
----
This commit also fixes warnings and errors due to the need to include
slab.h when using kmalloc() and friends.
----
This commit does not compile.
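As context for the schema above, a minimal sketch of how the two lock types
are used after this change; the lock names and the function body are
illustrative, not taken from the patch:

    #include <linux/spinlock.h>

    /* Under the new schema, scheduler-internal state that must never sleep is
     * protected by raw_spinlock_t, which keeps spinning even on PreemptRT,
     * while spinlock_t may become a sleeping lock there.
     */
    static DEFINE_RAW_SPINLOCK(release_lock);   /* always spins */
    static DEFINE_SPINLOCK(misc_lock);          /* may sleep on PreemptRT */

    static void touch_release_state(void)
    {
        unsigned long flags;

        raw_spin_lock_irqsave(&release_lock, flags);
        /* ... manipulate release-queue state ... */
        raw_spin_unlock_irqrestore(&release_lock, flags);
    }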
----
Improved C-EDF plugin. C-EDF now supports different cluster sizes (based
on L2 and L3 cache sharing) and supports dynamic changes of cluster size
(this requires reloading the plugin).
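As a rough illustration of the idea (not the plugin's actual code), clusters
can be derived from which CPUs share a cache at a given level; the helper
get_shared_cpu_map() is an assumption here, not an existing kernel API:

    #include <linux/cpumask.h>
    #include <linux/gfp.h>

    /* Assumed helper: fills 'mask' with all CPUs that share the cache at
     * level 'index' (e.g. 2 for L2, 3 for L3) with CPU 'cpu'.
     */
    extern void get_shared_cpu_map(cpumask_var_t mask, unsigned int cpu,
                                   unsigned int index);

    static void build_clusters(unsigned int cache_index)
    {
        cpumask_var_t shared;
        int cpu;

        if (!zalloc_cpumask_var(&shared, GFP_KERNEL))
            return;

        for_each_online_cpu(cpu) {
            get_shared_cpu_map(shared, cpu, cache_index);
            /* every CPU in 'shared' joins the same C-EDF cluster as 'cpu' */
        }

        free_cpumask_var(shared);
    }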
----
1) The high-priority task tied to an FMLP semaphore under P-EDF scheduling
is tracked incorrectly when a task acquires the lock without contention
(the HP task is always recorded for CPU 0 instead of the proper CPU).
2) A race in a print statement in P-EDF's pi_block() causes a NULL
pointer dereference.
----
The logic for dealing with preemptions across CPUs in the presence of
non-preemptive sections is tricky and should not be replicated across
(event-driven) plugins.
This patch introduces a generic preemption function that handles
non-preemptive sections (hopefully) correctly.
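A minimal sketch of what such a generic helper could look like; the function
name and the is_np()/request_exit_np() helpers are assumptions, not
necessarily the interface this patch introduces:

    #include <linux/sched.h>
    #include <linux/smp.h>

    /* Assumed LITMUS^RT helpers: is_np() tests whether a task is inside a
     * non-preemptive section, request_exit_np() asks it to yield on exit.
     */
    extern int is_np(struct task_struct *t);
    extern void request_exit_np(struct task_struct *t);

    /* Generic preemption: delay if the scheduled task is non-preemptive,
     * otherwise reschedule the local or remote CPU immediately.
     */
    static void preempt_if_preemptable(struct task_struct *scheduled, int cpu)
    {
        if (scheduled && is_np(scheduled))
            request_exit_np(scheduled);     /* preempt when the NP section ends */
        else if (cpu == smp_processor_id())
            set_tsk_need_resched(current);  /* local CPU: mark need_resched */
        else
            smp_send_reschedule(cpu);       /* remote CPU: reschedule IPI */
    }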
----
Re-introduce NP sections in the configuration and in litmus.h. Remove the old
np_flag from rt_param.
If CONFIG_NP_SECTION is disabled, then all non-preemptive section checks are
constant expressions, which should be removed by dead-code elimination during
optimization.
Instead of re-implementing sys_exit_np(), we simply repurpose sched_yield()
to call into the scheduler and trigger delayed preemptions.
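A sketch of how the checks can reduce to constants when the option is off;
this is illustrative (the real definitions live in litmus.h), and the
accessor name below is assumed:

    #include <linux/sched.h>

    /* Assumed accessor: reads the task's non-preemptive-section flag. */
    extern int litmus_np_flag_of(struct task_struct *t);

    #ifdef CONFIG_NP_SECTION
    static inline int is_np(struct task_struct *t)
    {
        return litmus_np_flag_of(t);
    }
    #else
    static inline int is_np(struct task_struct *t)
    {
        return 0;   /* constant expression: callers' NP branches are dead code */
    }
    #endif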
----
This device only supports mmap()'ing a single page.
This page is shared RW between the kernel and userspace.
It is intended to allow near-zero-overhead communication
between the kernel and userspace. Its first use will be a
proper implementation of user-signaled
non-preemptable section support.
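A userspace-side sketch of how such a page might be used; the device path and
the control_page layout are assumptions, only the "single shared RW page via
mmap()" part comes from the message above:

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct control_page {            /* hypothetical layout */
        uint32_t np_flag;            /* set by userspace, read by the scheduler */
    };

    int main(void)
    {
        int fd = open("/dev/litmus/ctrl", O_RDWR);   /* assumed device node */
        struct control_page *cp;

        if (fd < 0)
            return 1;
        cp = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ | PROT_WRITE,
                  MAP_SHARED, fd, 0);
        if (cp == MAP_FAILED)
            return 1;

        cp->np_flag = 1;             /* enter a non-preemptable section */
        /* ... critical work ... */
        cp->np_flag = 0;             /* leave it */
        return 0;
    }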
----
- [ported from 2008.3] Add x86_32 architecture dependent code.
- Add the infrastructure for x86_32 - x86_64 integration.
----
- Add syscall on x86_64
- Refactor __NR_sleep_next_period -> __NR_complete_job
for both x86_32 and x86_64
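A userspace sketch of the renamed call; the wrapper name and header location
are illustrative, and the value of __NR_complete_job differs between x86_32
and x86_64:

    #include <unistd.h>
    #include <sys/syscall.h>
    #include <litmus/unistd.h>   /* assumed location of __NR_complete_job */

    /* Illustrative wrapper: block until the task's next job/period boundary. */
    static inline int complete_job(void)
    {
        /* __NR_complete_job replaces the old __NR_sleep_next_period */
        return syscall(__NR_complete_job);
    }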
----
- The binomial heap's "heap" names conflicted with the priority heap
  used by cgroups in the kernel.
- This patch renames the binomial heap's "heap" names to "bheap".
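The naming pattern implied by the rename, shown as an inferred sketch (not
copied from the patch):

    /* struct heap       ->  struct bheap
     * struct heap_node  ->  struct bheap_node
     * heap_init()       ->  bheap_init()
     * heap_insert()     ->  bheap_insert()
     * heap_peek()       ->  bheap_peek()
     */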
----
infrastructure
----
Still to be merged:
- arm_release_timer() with no rq locking
----
- fix requesting more than 2^11 pages (MAX_ORDER) from the system
  allocator
Still to be merged:
- feather-trace generic implementation
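For context, a sketch of the allocator limit being worked around; the
fallback strategy shown here is illustrative and not necessarily what the
commit itself does:

    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    /* The buddy allocator rejects allocations of MAX_ORDER (default 11) or
     * more pages, so large buffers must be capped or allocated differently.
     */
    static void *alloc_big_buffer(size_t size)
    {
        unsigned int order = get_order(size);

        if (order < MAX_ORDER)
            return (void *)__get_free_pages(GFP_KERNEL, order);

        /* too large for the page allocator: fall back to vmalloc() */
        return vmalloc(size);
    }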
----
Port 2008.3 Core LITMUS^RT infrastructure to Linux 2.6.32
litmus_sched_class implements 4 new methods:
- prio_changed: void
- switched_to: void
- get_rr_interval: return infinity (i.e., 0)
- select_task_rq: return current cpu
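A sketch of what those four stubs could look like against 2.6.32; the exact
prototypes vary across kernel versions and this code would live next to the
scheduler core (where struct rq is visible), so treat it as illustrative:

    static void litmus_prio_changed(struct rq *rq, struct task_struct *p,
                                    int oldprio, int running)
    {
        /* intentionally empty */
    }

    static void litmus_switched_to(struct rq *rq, struct task_struct *p,
                                   int running)
    {
        /* intentionally empty */
    }

    static unsigned int litmus_get_rr_interval(struct rq *rq,
                                               struct task_struct *p)
    {
        return 0;   /* 0 is treated as an infinite round-robin interval */
    }

    static int litmus_select_task_rq(struct task_struct *p, int sd_flag,
                                     int flags)
    {
        return task_cpu(p);   /* stay on the current CPU */
    }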