Fixes:
1) Deadlock in litmus_task_exit() -- added litmus_pre_task_exit(),
which is called without the Linux runqueue lock held.
2) Prioritization of base-priority klmirqd/aux threads vs. normal
real-time tasks.
3) Initialization of the GPU owner binheap node moved to *after*
the memset(0) of rt_params.
4) Exit path of klmirqd threads.
|
Note that num_online_gpus() merely reports the
statically configured maximum number of available
GPUs. This will be made dynamic in the future.
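A minimal sketch of the reporting behavior described above; CONFIG_MAX_GPUS is a hypothetical stand-in for the static configuration value:

    /* Reports only the statically configured maximum, not a dynamic
     * census of usable GPUs; CONFIG_MAX_GPUS is a stand-in name. */
    static inline int num_online_gpus(void)
    {
            return CONFIG_MAX_GPUS;
    }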
|
This code is untested!
|
Fixes three bugs with nested locks:
1) The list of aux threads could become corrupted.
-- moved modifications to be within the scheduler lock.
2) Fixed a bad EDF comparison ordering that could lead
to scheduling thrashing in an infinite loop.
3) Prevent aux threads from inheriting a priority from
a task that is blocked on a real-time Litmus lock.
(Since the aux threads can't possibly hold these locks,
we don't have to worry about inheritance.)
|
Auxiliary task features were previously enabled by CONFIG_LITMUS_LOCKING.
Made auxiliary tasks a separate feature that depends on
CONFIG_LITMUS_LOCKING.
|
Conflicts:
include/litmus/unistd_32.h
include/litmus/unistd_64.h
litmus/litmus.c
|
Conflicts:
litmus/sched_gsn_edf.c
|
Added signals to Litmus. Specifically, SIG_BUDGET signals
are delivered (when requested by real-time tasks) when
a budget is exceeded.
Note: Pfair is not currently supported (but it probably could be).
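A hedged userspace sketch of catching such a signal; SIG_BUDGET itself comes from the Litmus headers, while the handler-installation shown here is ordinary POSIX and the include path is a guess:

    /* Assumes SIG_BUDGET is defined by the Litmus userspace headers. */
    #include <signal.h>
    #include <unistd.h>

    static void on_budget_exceeded(int sig)
    {
            /* Only async-signal-safe work in the handler. */
            static const char msg[] = "budget exceeded\n";
            write(STDERR_FILENO, msg, sizeof(msg) - 1);
    }

    static void install_budget_handler(void)
    {
            struct sigaction sa = { .sa_handler = on_budget_exceeded };
            sigemptyset(&sa.sa_mask);
            sigaction(SIG_BUDGET, &sa, NULL);  /* SIG_BUDGET: Litmus-defined */
    }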
|
Conflicts:
include/litmus/binheap.h
include/litmus/fdso.h
include/litmus/litmus.h
litmus/Makefile
litmus/binheap.c
litmus/edf_common.c
litmus/fdso.c
litmus/jobs.c
litmus/locking.c
|
Instead of tie-breaking by PID (which is a static
priority tie-break), we can tie-break by other
job-level-unique parameters. This is desirable
because static priority tie-breaks cause tasks
with greater PID values to experience the most
tardiness, whereas job-level tie-breaks spread
tardiness equally across tasks.
There are four tie-break methods:
1) Lateness. If two jobs, J_{1,i} and J_{2,j} of
tasks T_1 and T_2, respectively, have equal
deadlines, we favor the job of the task that
had the worse lateness for jobs J_{1,i-1} and
J_{2,j-1}.
Note: Unlike tardiness, lateness may be less than
zero. This occurs when a job finishes before its
deadline.
2) Normalized Lateness. The same as #1, except
lateness is first normalized by each task's
relative deadline. This prevents tasks with short
relative deadlines and small execution requirements
from always losing tie-breaks.
3) Hash. The job tuple (PID, Job#) is used to
generate a hash. Hash values are then compared.
A job has a ~50% chance of winning a tie-break
with respect to another job.
Note: Empirical testing shows that some jobs
can have a +/- ~1.5% advantage in tie-breaks.
Linux's built-in hash function is not a perfectly
uniform hash.
4) PID. The PID-based tie-break used in prior
versions of Litmus.
A sketch of the lateness comparison (method #1) follows below.
Conflicts:
litmus/edf_common.c
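A minimal sketch of method #1, with illustrative field names (deadline, last_lateness); the real comparison lives in litmus/edf_common.c and differs in detail:

    #include <stdbool.h>

    /* Illustrative job record; the real fields live in struct rt_param. */
    struct job {
            long long deadline;       /* absolute deadline */
            long long last_lateness;  /* (completion - deadline) of prior job */
            int pid;                  /* final, deterministic fallback */
    };

    /* EDF with lateness tie-breaking, falling back to PID. */
    static bool edf_higher_prio(const struct job *a, const struct job *b)
    {
            if (a->deadline != b->deadline)
                    return a->deadline < b->deadline;       /* plain EDF */
            if (a->last_lateness != b->last_lateness)
                    return a->last_lateness > b->last_lateness; /* worse lateness wins */
            return a->pid < b->pid;
    }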
|
Added support for arbitrary deadlines.
Constraint: The relative deadline must be >= the execution cost.
Use: Set the relative deadline in rt_task::rdeadline. Set the value to 0
to default to implicit deadlines.
Limitations: PFAIR is not supported by this patch. PFAIR was updated to
reject tasks that do not have implicit deadlines.
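A hedged sketch of configuring such a task; the rt_task fields shown are assumptions based on the message (the real definition lives in litmus/rt_param.h):

    struct rt_task {
            unsigned long long exec_cost;  /* worst-case execution time (ns) */
            unsigned long long period;     /* period (ns) */
            unsigned long long rdeadline;  /* relative deadline; 0 = implicit */
    };

    static void setup_constrained_task(struct rt_task *p)
    {
            p->exec_cost = 5000000ULL;    /*  5 ms */
            p->period    = 20000000ULL;   /* 20 ms */
            p->rdeadline = 10000000ULL;   /* 10 ms; must be >= exec_cost */
    }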
|
Add a comment to explain how priorities are
interpreted, and provide some useful macros for
userspace.
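A sketch of the kind of userspace macros meant here; the names and values are assumptions modeled on Litmus's rt_param.h and may not match the actual patch:

    /* Smaller numeric value = higher priority. */
    #define LITMUS_MAX_PRIORITY     512
    #define LITMUS_HIGHEST_PRIORITY 1
    #define LITMUS_LOWEST_PRIORITY  (LITMUS_MAX_PRIORITY - 1)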
|
Prior to this, it was only used internally for the DPCP.
|
dissertation (branch bbb-diss) to the current
version of Litmus.
This is needed for ongoing projects.
I took the code unchanged but removed some leftovers
of the OMLP, which is not implemented.
|
This patch changes how preemptions of jobs without
budget work. Instead of unconditionally requeuing them, they
are now requeued only if they are not subject to budget enforcement
or if they have non-zero budget. This allows us to process
job completions that race with preemptions.
This appears to fix a BUG in budget.c:65 reported by Giovani Gracioli.
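The rule above as a hedged sketch; budget_enforced(), budget_exhausted(), and requeue() are stand-ins for the Litmus helpers with these roles:

    static void handle_preemption(struct task_struct *t)
    {
            if (!budget_enforced(t) || !budget_exhausted(t))
                    requeue(t);
            /* else: leave the job dequeued so a racing completion is
             * processed first and the next job gets fresh budget. */
    }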
|
litmus.h is accumulating too many things. Since
we already have budget.h, let's stick all budget-related
inline functions there as well.
This patch is merely cosmetic; it does not change
how budget enforcement works.
|
An efficient binary heap implementation coded in the
style of Linux's list. This binary heap should be able
to replace any partially sorted priority queue based
upon Linux's list.
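A hedged usage sketch in the intrusive style the message describes; binheap_node and binheap_entry are modeled on include/litmus/binheap.h, and the operation names in the trailing comment are assumptions:

    /* Intrusive node embedded in the user's struct, like list_head. */
    struct job {
            unsigned long long deadline;
            struct binheap_node node;
    };

    /* Order predicate: earlier deadline sorts toward the root. */
    static bool job_earlier(struct binheap_node *a, struct binheap_node *b)
    {
            return binheap_entry(a, struct job, node)->deadline <
                   binheap_entry(b, struct job, node)->deadline;
    }

    /* Typical usage (names are assumptions):
     *   binheap_add(&j->node, &heap, struct job, node);
     *   j = binheap_top_entry(&heap, struct job, node);
     *   binheap_delete_root(&heap, struct job, node);
     */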
|
Enable kernel-style events (tracepoints) for Litmus. Litmus events
trace the same functions as the sched_trace_XXX() calls, but can be
enabled independently.
So why another tracing infrastructure?
- Litmus tracepoints can be recorded and analyzed together (single
time reference) with all other kernel tracing events (e.g.,
sched:sched_switch, etc.). It's easier to correlate the effects
of kernel events on Litmus tasks.
- It enables a quick way to visualize and process schedule traces
using the trace-cmd utility and the kernelshark visualizer.
Kernelshark lacks unit-trace's schedule-correctness checks, but
it enables a fast view of schedule traces and it has several
filtering options (for all kernel events, not only Litmus').
|
Increment a processor-local counter whenever an interrupt is handled.
This allows Feather-Trace to include a (truncated) counter and a flag
to report interference from interrupts. This could be used to filter
samples that were disturbed by interrupts.
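A minimal kernel-style sketch of the mechanism; the counter name and hook are assumptions (the actual hook lives in the Feather-Trace patches):

    #include <linux/percpu.h>

    DEFINE_PER_CPU(unsigned int, ft_irq_count);

    /* Called from the interrupt handling path. */
    static inline void ft_irq_fired(void)
    {
            this_cpu_inc(ft_irq_count);
    }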
|
Use a 32-bit word for all non-preemptive section flags.
Set the "please yield soon" flag atomically when
accessing it on remotely-scheduled tasks.
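A hedged sketch of the atomic update, with illustrative flag names packed into one 32-bit word:

    #include <linux/atomic.h>   /* for cmpxchg() */
    #include <linux/types.h>

    #define NP_FLAG_ACTIVE      0x00000001u  /* inside a non-preemptive section */
    #define NP_FLAG_YIELD_SOON  0x00000002u  /* "please yield soon" request */

    /* Set the yield request atomically; the target task may be running
     * on another CPU and updating the same 32-bit word concurrently. */
    static inline void request_early_yield(u32 *np_flags)
    {
            u32 old;
            do {
                    old = *np_flags;
            } while (cmpxchg(np_flags, old, old | NP_FLAG_YIELD_SOON) != old);
    }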
|
This allows us to splice information into the logs from events
that were recorded in userspace.
|
Preemption state tracing is only useful when debugging preemption-
and IPI-related races. Since it creates a lot of clutter in the logs,
this patch turns it off unless explicitly requested.
|
Needed to update C-EDF to handle release master. Also
updated get_nearest_available_cpu() to take NO_CPU instead
of -1 to indicate that there is no release master. While
NO_CPU is 0xffffffff (-1 in two's complement), we still
translate this value to -1 in case NO_CPU changes.
Signed-off-by: Andrea Bastoni <bastoni@cs.unc.edu>
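The translation described above, as a small sketch; NO_CPU's value follows the message, and the wrapper function is illustrative:

    #define NO_CPU 0xffffffffu

    /* Translate explicitly instead of relying on NO_CPU == (u32)-1,
     * so the code keeps working even if NO_CPU's value changes. */
    static inline int release_master_arg(unsigned int release_master)
    {
            return (release_master == NO_CPU) ? -1 : (int)release_master;
    }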
|
Original comment said that this feature wasn't supported,
though it has been since around October 2010.
|
Given a choice between several available CPUs (unlinked) on which
to schedule a task, let the scheduler select the CPU closest to
where that task was previously scheduled. Hopefully, this will
reduce cache migration penalties.
Notes: SCHED_CPU_AFFINITY is dependent upon x86 (only x86 is
supported at this time). Also PFair/PD^2 does not make use of
this feature.
Signed-off-by: Andrea Bastoni <bastoni@cs.unc.edu>
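A hedged sketch of the selection policy; cpu_distance() is a hypothetical stand-in for the cache-topology walk the real get_nearest_available_cpu() performs:

    #include <linux/cpumask.h>

    static int nearest_available_cpu(int last_cpu, const struct cpumask *avail)
    {
            int cpu, best = -1;
            unsigned int best_dist = ~0u;

            for_each_cpu(cpu, avail) {
                    unsigned int d = cpu_distance(last_cpu, cpu);
                    if (d < best_dist) {
                            best_dist = d;
                            best = cpu;
                    }
            }
            return best;  /* -1 if no CPU is available */
    }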
|
Some notes:
* The Litmus^RT scheduling class is the topmost scheduling class
(above stop_sched_class).
* The scheduler_ipi() function (e.g., in smp_reschedule_interrupt())
may increase IPI latencies.
* Added a path into schedule() to quickly re-evaluate the scheduling
decision without becoming preemptible again. This used to be
a standard path before the removal of the BKL.
Conflicts:
Makefile
arch/arm/kernel/calls.S
arch/arm/kernel/smp.c
arch/x86/include/asm/unistd_32.h
arch/x86/kernel/smp.c
arch/x86/kernel/syscall_table_32.S
include/linux/hrtimer.h
kernel/printk.c
kernel/sched.c
kernel/sched_fair.c
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
signal: align __lock_task_sighand() irq disabling and RCU
softirq,rcu: Inform RCU of irq_exit() activity
sched: Add irq_{enter,exit}() to scheduler_ipi()
rcu: protect __rcu_read_unlock() against scheduler-using irq handlers
rcu: Streamline code produced by __rcu_read_unlock()
rcu: Fix RCU_BOOST race handling current->rcu_read_unlock_special
rcu: decrease rcu_report_exp_rnp coupling with scheduler
|
git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-2.6-rcu into core/urgent
|
The RCU_BOOST commits for TREE_PREEMPT_RCU introduced an other-task
write to a new RCU_READ_UNLOCK_BOOSTED bit in the task_struct structure's
->rcu_read_unlock_special field, but, as noted by Steven Rostedt, without
correctly synchronizing all accesses to ->rcu_read_unlock_special.
This could result in bits in ->rcu_read_unlock_special being spuriously
set and cleared due to conflicting accesses, which in turn could result
in deadlocks between the rcu_node structure's ->lock and the scheduler's
rq and pi locks. These deadlocks would result from RCU incorrectly
believing that the just-ended RCU read-side critical section had been
preempted and/or boosted. If that RCU read-side critical section was
executed with either rq or pi locks held, RCU's ensuing (incorrect)
calls to the scheduler would cause the scheduler to attempt to once
again acquire the rq and pi locks, resulting in deadlock. More complex
deadlock cycles are also possible, involving multiple rq and pi locks
as well as locks from multiple rcu_node structures.
This commit fixes synchronization by creating ->rcu_boosted field in
task_struct that is accessed and modified only when holding the ->lock
in the rcu_node structure on which the task is queued (on that rcu_node
structure's ->blkd_tasks list). This results in tasks accessing only
their own current->rcu_read_unlock_special fields, making unsynchronized
access once again legal, and keeping the rcu_read_unlock() fastpath free
of atomic instructions and memory barriers.
The reason that the rcu_read_unlock() fastpath does not need to access
the new current->rcu_boosted field is that this new field cannot
be non-zero unless the RCU_READ_UNLOCK_BLOCKED bit is set in the
current->rcu_read_unlock_special field. Therefore, rcu_read_unlock()
need only test current->rcu_read_unlock_special: if that is zero, then
current->rcu_boosted must also be zero.
This bug does not affect TINY_PREEMPT_RCU because this implementation
of RCU accesses current->rcu_read_unlock_special with irqs disabled,
thus preventing races on the !SMP systems that TINY_PREEMPT_RCU runs on.
Maybe-reported-by: Dave Jones <davej@redhat.com>
Maybe-reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
|
Allow for sched_domain spans that overlap by giving such domains their
own sched_group list instead of sharing the sched_groups amongst
each other.
This is needed for machines with more than 16 nodes, because
sched_domain_node_span() will generate a node mask from the
16 nearest nodes without regard to whether these masks overlap.
Currently sched_domains have a sched_group that maps to their child
sched_domain span, and since there is no overlap we share the
sched_group between the sched_domains of the various CPUs. If, however,
there is overlap, we would need to link the sched_group list in
different ways for each CPU, and hence sharing isn't possible.
In order to solve this, allocate private sched_groups for each CPU's
sched_domain but have the sched_groups share a sched_group_power
structure such that we can uniquely track the power.
Reported-and-tested-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/n/tip-08bxqw9wis3qti9u5inifh3y@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
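A trimmed sketch of the sharing arrangement described above; the real structures in kernel/sched.c carry much more state:

    /* Each CPU gets private sched_group links, but overlapping groups
     * point at one shared power-accounting object. */
    struct sched_group_power {
            atomic_t ref;          /* number of sched_groups sharing this */
            unsigned int power;    /* capacity, tracked exactly once */
    };

    struct sched_group {
            struct sched_group *next;       /* per-domain circular list */
            struct sched_group_power *sgp;  /* shared among overlapping groups */
    };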
|
In order to prepare for non-unique sched_groups per domain, we need to
carry the cpu_power elsewhere, so put a level of indirection in.
Reported-and-tested-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/n/tip-qkho2byuhe4482fuknss40ad@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
`make headers_check` complains that
linux-2.6/usr/include/linux/sdla.h:116: userspace cannot reference
function or variable defined in the kernel
This is because there is no such kernel function:
void sdla(void *cfg_info, char *dev, struct frad_conf *conf, int quiet);
I don't know why we have it in a kernel header, so remove it.
Signed-off-by: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6:
Bluetooth: Fix crash with incoming L2CAP connections
Bluetooth: Fix regression in L2CAP connection procedure
gianfar: rx parser
r6040: only disable RX interrupt if napi_schedule_prep is successful
net: remove NETIF_F_ALL_TX_OFFLOADS
net: sctp: fix checksum marking for outgoing packets
|