Conflicts:
include/litmus/sched_trace.h
include/trace/events/litmus.h

Conflicts:
include/litmus/rt_param.h
litmus/sched_color.c

Conflicts:
include/litmus/rt_param.h
litmus/jobs.c
litmus/sched_color.c
litmus/sched_task_trace.c

Conflicts:
include/litmus/rt_param.h
litmus/sched_color.c

These methods haven't been fully tested, but they compile and pass a few
simple tests.

Apply bitwise operations to quickly check whether the set of requested
resources is contained within a single group.

Each fdso in a resource group now points to a single litmus_lock object
which will arbitrate access to each of the fdsos.

Tasks can now be PERIODIC or SPORADIC.
PERIODIC tasks do not have their job number incremented
when they wake up and are tardy. PERIODIC jobs must
end with a call to sys_complete_job() to set up their next
release. (Not currently supported by pfair.)
SPORADIC tasks _do_ have their job number incremented when
they wake up and are tardy. SPORADIC is the default,
carrying forward Litmus's current behavior.

This patch allows a task to request early releasing
via rt_task parameters to sys_set_task_rt_param().
Note that early releasing can easily peg your CPUs
since early-releasing tasks never suspend to wait for
their next job. As such, early releasing is really
only useful in the context of implementing bandwidth
servers, interrupt handling threads (or any thread that
spends most of its time waiting for an event), or
short-lived computations. If early releasing pegs your
CPUs, then you probably shouldn't be using it.

cedf_admit_task() is too restrictive in that the task
to be admitted must be executing on the same CPU as
is set in the task's task_params. This patch relaxes
the check: the task to be admitted only needs to be
executing on the same cluster as the CPU set in the
task's task_params.

This patch creates a new character device, uncachedev.
Pages of RAM allocated by this device are not cached by
CPUs.
Uses for such pages:
1) Determining *very* pessimistic empirical worst-case
execution times.
2) Comparing against performance with caches (to quantify
the average-case benefit).
3) Deterministic memory accesses (an access cannot cause a
cache eviction).
4) Theoretically, increased performance can be achieved
by storing infrequently accessed data in uncached pages.
uncachedev allocates pages with the pgprot_noncached() page
attribute for user applications. Since pages allocated by
uncachedev are not locked in memory by default, applications
with any access level may mmap pages with uncachedev.
Limitations:
1) Uncached pages must be mapped MAP_PRIVATE.
2) Remapping is not supported.
Usage (user level):
    int size = NR_PAGES * PAGE_SIZE;
    int fd = open("/dev/litmus/uncache", O_RDWR);
    char *data = mmap(NULL, size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE, fd, 0);
    <...do stuff...>
    munmap(data, size);

Nesting of locks was never supported in LITMUS^RT since
the required analysis does not exist anyway. That is, as
defined in the literature, the protocols implemented
in LITMUS^RT have not been studied in conjunction with
nested critical sections.
In LITMUS^RT, attempting to nest locks could lead to
silent or not-so-silent bugs. This patch makes this
restriction explicit and returns EBUSY when a process
attempts to nest resources.
This is enforced on a protocol-by-protocol basis,
which means that adding protocols with support for
nesting in future versions is not affected by this
change.
Exception: PCP and SRP resources may be nested,
but not within global critical sections.

When the config parameter is NULL, just default to the local CPU,
which is what we want in 99% of the cases anyway.

Linux tracepoints for real-time tasks were causing
compilation errors. For the tracing data structures to be
properly created, it's necessary to add a #define.

In some cases, the PCP priority inheritance code triggered a
(defensive) BUG_ON() in fp_common.c. This shuffles the order of
operations a bit to be compliant with the restriction that tasks are
never compared against themselves.

Prevents out-of-bounds lookups.

stop_machine() does exactly what we want (avoid all concurrent
scheduling activity) and is much simpler than rolling our own (buggy)
implementation.

The list is concurrently being modified by the waking processes. This
requires the use of the list_for_each_safe() iterator.
Reported by Glenn Elliott.

This patch makes CONFIG_PREEMPT_STATE_TRACE depend on
CONFIG_DEBUG_KERNEL. Prior to this patch, selecting PREEMPT_STATE_TRACE
resulted in linker errors (see below), because sched_state_name is not
built unless DEBUG_KERNEL is selected.
    kernel/built-in.o: In function `schedule':
    (.sched.text+0x3d2): undefined reference to `sched_state_name'

This patch causes reboot notifications to be sent
to Litmus. With this patch, Litmus attempts to
switch back to the Linux plugin before the reboot
proceeds. Any failure to switch back is reported
via printk() (the reboot is not halted).

This patch removes the RT_F_EXIT_SEM flag. All code paths
depending on it being true are assumed to be unreachable
and removed.
The 'flags' field in struct rt_params is left as-is for
use by specific schedulers. For example, sched_pfair
defines a custom flag RT_F_REQUEUE within the 'flags'
field.
Signed-off-by: Manohar Vanga <mvanga@mpi-sws.org>

Signed-off-by: Manohar Vanga <mvanga@mpi-sws.org>

This patch removes the flags RT_F_SLEEP and RT_F_RUNNING
as their names are misleading. This patch replaces them with
a 'completed' field in struct rt_param.
Signed-off-by: Manohar Vanga <mvanga@mpi-sws.org>

This patch fixes a warning about an unused label in sched_pfp.c
when CONFIG_LITMUS_LOCKING is not set.
Signed-off-by: Manohar Vanga <mvanga@mpi-sws.org>

No suspended task should ever be queued in this plugin.

Crash and burn if an expected preemption didn't happen. This is useful
to flag any bugs in the queue management code...

It's CONFIG_LITMUS_LOCKING, not just CONFIG_LOCKING...

Make the priority comparison easier to read. Also, remove the "equal
PID" clause and insert a corresponding BUG_ON() instead; this should
really never happen.

boost_priority() is only applied to already-scheduled tasks. Remove
the (untested and unneeded) case handling unscheduled tasks, which was
likely not correct anyway.

When a job was tardy, the plugin failed to invoke sched_trace. This
caused ugly "holes" in the visualized schedule.

The SEND_RESCHED event is really only interesting if the IPI was
generated by LITMUS^RT. Therefore, we don't need to trace in Linux's
architecture-specific code. Instead, we hook into the preemption state
machine, which is informed about incoming IPIs anyway.

Linux's post_schedule() scheduling class hook more closely matches
what SCHED2 is supposed to trace, namely any scheduling overhead after
the context switch. The prior trace points caught timers being armed
from finish_switch(), which is already included in the context switch
cost CXS.
(This patch essentially reverts 8fe2fb8bb1c1cd0194608bc783d0ce7029e8d869.)

SEND_RESCHED_END is necessarily preceded by an interrupt. We don't
want to filter events based on this expected interrupt, but we still
want to detect samples disturbed by other interrupts. Hence, subtract
one from the interrupt count.

If the tracing code is interrupted or preempted in between the time that
a sequence number is drawn and the time that the trace record is
allocated, then the trace file will contain "out of order" events.
These are difficult to detect during post-processing and can create
artificial "outliers". This patch briefly disables local interrupts to
avoid this.
While at it, de-duplicate the timestamp recording code.

To detect interrupts that interfered after the initial time stamp was
recorded, this patch changes sched_trace to also record the IRQ count
as observed by userspace.

Also add some compile-time checks to detect unexpected offsets.

Support recording timestamps that allow tracing the entry and exit
costs of locking-related system calls.

Reassign locking timestamps and prepare support for tracing
system call overheads.