Commit messages
|
Kludge the expected graph end-to-end response time
into record data to make data analysis easier.
|
This patch adds a C-FL scheduler plugin. Original work
by Jeremy Erikson, ported to the latest Litmus by Namhoon Kim,
and cleaned up and committed by Glenn Elliott.
|
This patch boosts the priority of PGM producers while
they are sending tokens instead of boosting the priority
of consumers while they are waiting for tokens. This improves
schedulability analysis.
|
This patch adds tracing of job release/deadline adjustments
of PGM tasks. Tracing is separate from regular job tracing
so that we may observe the magnitude of adjustments/slippage.
|
Adds code that adjusts a job's release and deadline
according to when the job receives the necessary
PGM tokens.
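
(A minimal sketch of the idea follows; the helper and field names are
illustrative assumptions, not the actual LITMUS^RT data structures.)

    /* Sketch: slip a job's release to the time at which its last PGM
     * token arrived and recompute the absolute deadline accordingly.
     * All names here are hypothetical. */
    typedef unsigned long long lt_t; /* time in nanoseconds */

    struct job_window {
        lt_t release;   /* earliest permissible start time */
        lt_t deadline;  /* absolute deadline */
    };

    static void adjust_for_pgm_tokens(struct job_window *job,
                                      lt_t tokens_satisfied_at,
                                      lt_t relative_deadline)
    {
        /* Only adjust if the tokens arrived after the nominal release;
         * otherwise the job's original window already holds. */
        if (tokens_satisfied_at > job->release) {
            job->release  = tokens_satisfied_at;
            job->deadline = job->release + relative_deadline;
        }
    }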
|
Currently, PGM token satisfaction is tracked in
userspace. However, Litmus needs to be aware of
when a PGM task is waiting for tokens in order to
avoid unbounded priority inversions. The state of
a PGM task is communicated to Litmus via the
control page.
We should be able to remove control page variables
for PGM if/when token tracking is moved from
userspace into the kernel.
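
(For illustration only: a rough sketch of how the waiting state might be
signalled through the control page. The field and function names below are
hypothetical, not the actual LITMUS^RT control page layout.)

    /* Hypothetical control-page fields for PGM state. */
    struct control_page {
        /* ... existing fields ... */
        unsigned int pgm_waiting;  /* set while blocked waiting for tokens */
        unsigned int pgm_sending;  /* set while producing/sending tokens */
    };

    /* Userspace side, before blocking on incoming PGM edges: */
    static void pgm_begin_wait(volatile struct control_page *cp)
    {
        cp->pgm_waiting = 1;  /* let the kernel bound the priority inversion */
    }

    static void pgm_end_wait(volatile struct control_page *cp)
    {
        cp->pgm_waiting = 0;  /* tokens satisfied; normal policy resumes */
    }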
|
This patch adds the plugin interface "get_domain_proc_info()".
Plugins use this function to report their cpu/clustering configuration
to the user via /proc/litmus/domains and /proc/litmus/cpus.
|
This patch adds a framework by which plugins can
export information about CPU <-> cluster (aka
scheduling domain) mappings.

/proc/litmus/domains/<domain#>: This file contains a CPU
mask describing the CPUs scheduled/managed by that domain.
Files are named by index; for example, the first scheduling
domain would be '0'.

/proc/litmus/cpus/<cpu#>: This file contains a domain
mask describing which domains manage this CPU. Normally,
only one bit will be set in this mask, but overlapping clusters
can also be expressed by setting multiple bits.
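
(Hypothetical usage sketch: a small userspace reader for the CPU mask of
scheduling domain 0. The exact formatting of the mask inside the file is
not specified here and may differ.)

    #include <stdio.h>

    int main(void)
    {
        char mask[64];
        FILE *f = fopen("/proc/litmus/domains/0", "r");

        if (!f) {
            perror("fopen /proc/litmus/domains/0");
            return 1;
        }
        /* The file holds the CPU mask of domain 0, i.e. the CPUs of the
         * first cluster; /proc/litmus/cpus/<cpu#> gives the inverse map. */
        if (fgets(mask, sizeof(mask), f))
            printf("domain 0 CPU mask: %s", mask);
        fclose(f);
        return 0;
    }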
|
Signed-off-by: Roy Spliet <rspliet@mpi-sws.org>
|
This patch adds the core of LITMUS^RT:
- library functionality (heaps, rt_domain, prioritization, etc.)
- budget enforcement logic
- job management
- system call backends
- virtual devices (control page, etc.)
- scheduler plugin API (and dummy plugin)
This code compiles, but is not yet integrated with the rest of Linux.
|
This patch integrates LITMUS^RT's sched_trace_XXX() macros with
Linux's notion of tracepoints. This is useful to visualize schedules
in kernel shark and similar tools. Historically, LITMUS^RT's
sched_trace predates Linux's tracepoint infrastructure.
|
This patch introduces the sched_trace infrastructure, which in
principle allows tracing the generated schedule. However, this patch
does not yet integrate the callbacks with the kernel.
|
This patch adds support for tracing release latency in LITMUS^RT.
|
This patch adds a basic litmus/litmus.h, which is required for basic
LITMUS^RT infrastructure to compile.
|
This patch adds the PCB extensions required for LITMUS^RT.
|
This patch adds the infrastructure for the TRACE() debug macro.
|
This patch hooks up Feather-Trace's ft_irq_fired() handler with
Linux's interrupt handling infrastructure.
|
This patch adds the main infrastructure for tracing overheads in
LITMUS^RT. It does not yet introduce any tracepoints into the kernel.
|
This patch adds the ftdev device driver, which is used to export
samples collected with Feather-Trace to userspace.
|
This patch adds the simple fallback implementation and creates dummy
hooks in the x86 and ARM Kconfig files.