| Commit message | Author | Age |
| |
|
|
|
|
|
|
| |
This patch updates the R2DGLP implementation to use
the sbinheap data structure in cases where the max
heap size is known.
|
| |
|
|
|
|
|
| |
This patch updates C-EDF to use sbinheap instead of
binheap to track the m-highest priority released jobs.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch introduces the 'sbinheap' binary heap data structure.
Litmus's binheap data structure is inefficient when we can put
a reasonable upper bound on the maximum heap size (e.g., when the
max size is equal to NR_CPUS). With such a bound, we can statically
allocate (or at least kmalloc) the needed memory for the heap at
compile/initialization time. With this contiguous buffer, we can
replace the left/right/parent pointers with standard binary-tree
indexing, which also speeds up heap insert/delete operations.
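For illustration, a minimal sketch of array-backed heap indexing
(hypothetical names, not the actual sbinheap API):

    /* With one contiguous buffer, parent/child links are computed
     * from array indices instead of being stored as pointers. */
    #define HEAP_PARENT(i) (((i) - 1) / 2)
    #define HEAP_LEFT(i)   (2 * (i) + 1)
    #define HEAP_RIGHT(i)  (2 * (i) + 2)

    struct sheap_node {             /* hypothetical node type */
        void *data;
    };

    struct sheap {
        unsigned int size;          /* current number of nodes */
        unsigned int max_size;      /* fixed upper bound, e.g., NR_CPUS */
        struct sheap_node *buf;     /* one allocation of max_size nodes */
    };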
|
|
|
|
|
| |
This patch updates various parts of Litmus that
utilize binheap and bheap to use const operands.
|
|
|
|
|
|
| |
This patch changes bheap's compare function pointer
to take const parameters. Compare functions should never
modify the contents of their operands.
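The comparator type changes roughly as follows (a sketch based on
bheap.h's bheap_prio_t; exact declarations assumed):

    struct bheap_node;

    /* Before (sketch): comparators took mutable pointers:
     *   typedef int (*bheap_prio_t)(struct bheap_node *a,
     *                               struct bheap_node *b);
     * After: const operands make the read-only contract explicit. */
    typedef int (*bheap_prio_t)(const struct bheap_node *a,
                                const struct bheap_node *b);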
|
|
|
|
| |
This patch makes slight optimizations to binheap.
|
| |
|
|
|
|
| |
Add mb() to recursive spinlock on the already-held code path.
|
|
|
|
|
|
|
|
| |
Division in fpmath.h is implemented using kernel-only
functions. This causes compilation errors when fpmath.h
is used from user code. This patch modifies fpmath.h to
use standard division (i.e., the '/' operator) when it
is included in non-kernel (i.e., userspace) code.
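The shape of the fix is roughly as follows (a sketch; the macro name
is hypothetical, and sign handling is omitted):

    #ifdef __KERNEL__
    #include <asm/div64.h>
    /* do_div() divides a u64 in place and is kernel-only. */
    #define _FP_DIV(a, b) ({ u64 _q = (a); do_div(_q, (b)); _q; })
    #else
    /* Userspace can rely on the compiler's native 64-bit division. */
    #define _FP_DIV(a, b) ((a) / (b))
    #endif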
|
|
|
|
|
|
| |
This patch cleans up the sync-release code used by
GPUSync. The updated code leverages Litmus's
new per-plugin wait_for_release_at().
|
|
|
|
|
| |
This patch fixes a compilation bug that occurs
when nested locking is disabled.
|
| |
|
|
|
|
|
|
|
|
| |
The I-KGLP was a pre-publication name for the R^2DGLP locking protocol
developed by B. Ward et al. ("Replica-Request Priority Donation:
A Real-Time Progress Mechanism for Global Locking Protocols",
presented at RTCSA 2012). This patch renames ikglp-named identifiers
to r2dglp-named identifiers.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The changes to C-EDF to support GPUSync are numerous and
interdependent. Unfortunately, things are a bit too complex
(given time constraints) to break the GPUSync integration
into C-EDF into smaller parts. This patch is just a dump
of prior development in one big chunk. Changes include:
1) Support for three different budget tracking/enforcement policies
2) Nested priority inheritance
3) Support for klmirqd
4) Support for auxiliary tasks
5) Several locking protocols: FIFO, PRIOQ, IKGLP, and KFMLP
6) GPU-to-CPU-cluster mapping
|
|
|
|
|
| |
This patch adds calls to trace tasklet/workqueue
releases, begin-execution, and end-execution events.
|
|
|
|
|
| |
Adds job backlog information to Litmus's litmus_task_completion
KernelShark trace event.
|
|
|
|
|
| |
Adds a Litmus KernelShark trace event to record changes in
task scheduling priority.
|
|
|
|
|
|
| |
Extends Litmus's litmus_task_block KernelShark
tracing event to include information about blocking
for I/O (versus blocking for a Litmus lock).
|
|
|
|
|
|
|
|
|
| |
This patch modifies lockdep to support nested locking
protocols; specifically, it allows lockdep keys to be
allocated in dynamic memory. We need to allow spinlocks
to be acquired in a nested fashion without triggering
complaints from lockdep. This is needed to support
chained/transitive priority inheritance.
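For context, a sketch of the kind of same-class nested acquisition
involved (the struct below is hypothetical; raw_spin_lock_nested()
and SINGLE_DEPTH_NESTING are stock lockdep annotations):

    #include <linux/spinlock.h>

    struct lock_holder {            /* hypothetical example struct */
        raw_spinlock_t lock;
    };

    /* Take two locks of the same lock class in a fixed order. Without
     * the annotation, lockdep would flag the second acquisition as a
     * potential self-deadlock on the lock class. */
    static void lock_pair(struct lock_holder *a, struct lock_holder *b)
    {
        raw_spin_lock(&a->lock);
        raw_spin_lock_nested(&b->lock, SINGLE_DEPTH_NESTING);
    }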
|
|
|
|
|
| |
Add macros for period comparisons. Useful for
implementing rate-monotonic schedulers.
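A sketch of what such a macro might look like (the name is
hypothetical; get_rt_period() is Litmus's existing accessor):

    /* Under rate-monotonic scheduling, a shorter period implies a
     * higher priority. Hypothetical macro name. */
    #define shorter_period(a, b) (get_rt_period(a) < get_rt_period(b))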
|
|
|
|
|
|
|
| |
This patch allows (at compile time) schedulers to use
recursive locks on the ready queue. Recursive locks are useful
for implementing complicated budget enforcement policies where
locking-protocol priority inheritance is concerned.
|
|
|
|
|
| |
A few definitions were missing or misplaced. This patch
puts them in the right places.
|
| |
|
|
|
|
|
|
|
|
|
| |
This patch forces auxiliary and klmirqd tasks to run at
a priority statically below that of any normal real-time task
whenever the auxiliary or klmirqd thread is not inheriting
a priority. This allows auxiliary and klmirqd threads to
still do work in the background when no real-time threads
are waiting on them to finish (better throughput).
|
|
|
|
|
|
|
|
| |
This patch updates litmus.c to integrate prior GPUSync
patches. Fixes include:
1) klmirqd-aware plugin switching
2) sched_trace event injection from user space
3) proper (re-)initialization of GPUSync data structures in task_struct
|
|
|
|
|
| |
Earlier patches forgot to update the arch-specific syscall
tables. That is done here.
|
|
|
|
|
|
|
|
| |
This patch adds richer budget tracking/enforcement
support. Budget tracking is now controlled by a state machine,
and each task can elect to use a different budget policy. Hooks
are in place to implement bandwidth inheritance (BWI) and
virtually exclusive resources.
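For illustration, per-task policy selection might look roughly like
this (a sketch; the enum values mirror Litmus's existing
budget_policy_t, while the tracker struct is hypothetical):

    enum budget_policy {
        NO_ENFORCEMENT,      /* track budget, but never enforce */
        QUANTUM_ENFORCEMENT, /* check for overrun at quantum edges */
        PRECISE_ENFORCEMENT  /* arm a timer for the exact expiry */
    };

    struct budget_tracker {          /* hypothetical */
        enum budget_policy policy;
        lt_t consumed;               /* execution time used this job */
    };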
|
|
|
|
|
|
|
|
|
|
| |
Auxiliary tasks are helper threads to real-time tasks. These helper
tasks may inherit the priority of a real-time task in the same
process, but only while that real-time task is blocked/suspended (and
not suspended because its job has completed). Otherwise,
these threads are scheduled with a default priority greater than
that of normal Linux threads, but lower than that of any other
real-time task, including klmirqd threads.
|
|
|
|
|
|
|
|
|
|
| |
This patch adds klmirqd, which can be used to implement
real-time threaded interrupt handling and workqueue
handling. The patch includes hooks into Linux's
interrupt/tasklet/workqueue implementations, as well as
interrupt and workqueue handling for GPU tasklets and
work_structs.
|
|
|
|
|
|
| |
Adds GPU affinity tracking/prediction routines. Tracking
uses a process-chart method whereby observations more than two
standard deviations above or below the average are discarded
as outliers.
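A sketch of the filtering idea (the names and plain integer math are
assumptions; the actual routines differ in detail):

    struct gpu_stats {               /* hypothetical */
        s64 avg;                     /* running average of observations */
        s64 stdev;                   /* running standard deviation */
    };

    /* Discard observations more than two standard deviations from
     * the running average; only inliers update the statistics. */
    static int is_outlier(s64 obs, const struct gpu_stats *st)
    {
        s64 dev = obs - st->avg;
        if (dev < 0)
            dev = -dev;
        return dev > 2 * st->stdev;
    }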
|
|
|
|
|
|
|
| |
Adds two major components to GPU management (for NVIDIA
GPUs): (1) Routines to extract device ID from NVIDIA tasklet
private data. (2) Routines for tracking ownership/assignment
of tasks to GPUs.
|
|
|
|
|
| |
Add the fdso identifiers needed for userspace to create
instances of the fifo, prioq, kfmlp, and ikglp semaphores.
|
|
|
|
|
|
| |
Adds an implementation of the ikglp. Supports both affinity and
nesting of other locks within the ikglp (the ikglp itself must
be an outermost lock).
|
|
|
|
|
| |
Adds an implementation of the kfmlp. Supports affinity, but
not nesting.
|
|
|
|
|
|
| |
Adds interfaces for k-exclusion affinity
observers. Observers are created in the
same manner as locks (via FDSO operations).
|
|
|
|
|
| |
This patch adds FIFO and PRIOQ mutexes. These locks
support nesting and DGLs (dynamic group locks).
|
|
|
|
|
|
|
| |
Extends the core parts of locking protocols in Litmus.
* Adds a dynamic group lock interface (internal calls and syscalls).
* Adds helper functions for waking tasks.
* Adds tracing for these new features.
|
|
|
|
|
|
|
|
| |
This patch adds new interfaces to the sched_plugin struct/API.
The new interfaces expose a scheduler's task prioritization and
inheritance routines to external components. For example,
a single locking protocol implementation can support both FIFO
and EDF prioritization via this API ("litmus->compare(a, b)").
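A sketch of the hook's shape (the exact signature is assumed;
'litmus' is the global plugin pointer):

    struct sched_plugin {
        /* ... existing plugin methods elided ... */

        /* Returns nonzero if 'a' has higher priority than 'b'. */
        int (*compare)(struct task_struct *a, struct task_struct *b);
    };

    /* A locking protocol can then order waiters generically: */
    static int should_inherit(struct task_struct *waiter,
                              struct task_struct *owner)
    {
        return litmus->compare(waiter, owner);
    }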
|
|
|
|
|
|
|
|
|
|
|
|
| |
Add an implementation of recursive spinlocks, which allow
the same CPU to take, and retake, the same spinlock.
The recursive spinlock still counts the number of times
it has been taken, so every lock() must still be matched
with an unlock().
Although ugly, recursive spinlocks make nested priority
inheritance propagation (and especially nested 'un-inheritance'
under bandwidth inheritance) much easier to implement.
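A minimal sketch of the idea (hypothetical names, not the actual
implementation; assumes preemption is disabled so the owner check
is stable):

    struct rspinlock {
        raw_spinlock_t lock;
        int owner;               /* CPU holding the lock, or -1 */
        unsigned int count;      /* recursion depth */
    };

    static inline void rspin_lock(struct rspinlock *l)
    {
        if (l->owner == smp_processor_id()) {
            /* Already held by this CPU: just bump the depth.
             * (The mb() patch above adds a barrier on this path.) */
            l->count++;
        } else {
            raw_spin_lock(&l->lock);
            l->owner = smp_processor_id();
            l->count = 1;
        }
    }

    static inline void rspin_unlock(struct rspinlock *l)
    {
        if (--l->count == 0) {
            l->owner = -1;
            raw_spin_unlock(&l->lock);
        }
    }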
|
|
|
|
|
|
|
|
| |
Add a method to apply a function to every node in
a binary heap. Beware that the implementation is recursive,
so it should only be called when the heap
is known to be limited in size. Otherwise, there is a
danger of running out of stack space.
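A sketch of such a traversal over binheap's pointer-linked nodes
(the function and typedef names are assumptions):

    typedef void (*binheap_visit_t)(struct binheap_node *node, void *args);

    /* Recursively visit every node. Each call consumes a stack frame,
     * hence the heap-size caveat above. */
    static void binheap_apply(struct binheap_node *node,
                              binheap_visit_t fn, void *args)
    {
        if (!node)
            return;
        fn(node, args);
        binheap_apply(node->left, fn, args);
        binheap_apply(node->right, fn, args);
    }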
|
|
|
|
|
|
|
| |
Extend bheap to apply a generic function to all nodes in a bheap.
Beware that the implementation is recursive, so this should only be
called in cases where the bheap is known to be limited in size.
Otherwise, the stack could overflow.
|
|
|
|
|
|
| |
This patch adds the plugin interface "get_domain_proc_info()".
Plugins use this function to report their cpu/clustering configuration
to the user via /proc/litmus/domains and /proc/litmus/cpus.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch adds a framework by which plugins can
export information about CPU <-> cluster (a.k.a.
scheduling domain) mappings.
/proc/litmus/domains/<domain#>: This file contains a CPU
mask describing the CPUs scheduled/managed by that domain.
Files are named by index; for example, the first scheduling
domain would be '0'.
/proc/litmus/cpus/<cpu#>: This file contains a domain
mask describing which domains manage this CPU. Normally,
only one bit will be set in this mask, but overlapping clusters
can also be expressed by setting multiple bits.
|
|
|
|
|
|
| |
The flag was a hack that never really worked. It shortened the race
window, but didn't close it. With the new sporadic release support, it
should now be superfluous.
|
|
|
|
|
|
|
| |
The way the synchronous release code used to set up sporadic releases
was racy. This patch changes the implementation to postpone
fiddling with the ->release parameter until prepare_for_next_period(),
which is called with the appropriate plugin-specific locks held.
|
|
|
|
|
|
| |
This reduces the number of ways in which a real-time task can exit to
one (via sched_setscheduler), which makes it easier to avoid nasty
corner cases.
|
| |
|