path: root/kernel
Commit message (Author, Date)
* Merge remote-tracking branch 'litmus-rt/wip-mc' into odroidx-color (Christopher Kenna, 2012-10-23)
|\
| * Output additional data in MC completion records. (Jonathan Herman, 2012-10-23)
| |
* | Merge Jonathan's wip-mc into odroidx-color. (Christopher Kenna, 2012-09-30)
|\|
| |     Pull in the cache coloring stuff with the latest changes.
| |
| |     Conflicts:
| |         litmus/sched_task_trace.c
| * Fixed sched_color run issues. (Jonathan Herman, 2012-09-30)
| |\
| * | Resolved merge bugs. (Jonathan Herman, 2012-09-29)
| | |
| * | Merge branch 'wip-color' into wip-mc (Jonathan Herman, 2012-09-29)
| |\ \
| | | |     Conflicts:
| | | |         include/litmus/budget.h
| | | |         include/litmus/litmus.h
| | | |         include/litmus/rt_param.h
| | | |         include/litmus/sched_trace.h
| | | |         include/litmus/trace.h
| | | |         include/trace/events/litmus.h
| | | |         litmus/Makefile
| | | |         litmus/budget.c
| | | |         litmus/ftdev.c
| | | |         litmus/jobs.c
| | | |         litmus/litmus.c
| | | |         litmus/locking.c
| | | |         litmus/preempt.c
| | | |         litmus/rt_domain.c
| | | |         litmus/sched_gsn_edf.c
| | | |         litmus/trace.c
| | * | Added color schedule (Jonathan Herman, 2012-05-03)
| | | |
| * | | Add kernel-style events for sched_trace_XXX() functions (Andrea Bastoni, 2012-03-13)
| | | |     Enable kernel-style events (tracepoints) for Litmus. Litmus events
| | | |     trace the same functions as sched_trace_XXX(), but can be enabled
| | | |     independently.
| | | |
| | | |     So, why another tracing infrastructure:
| | | |     - Litmus tracepoints can be recorded and analyzed together (single
| | | |       time reference) with all other kernel tracing events (e.g.,
| | | |       sched:sched_switch). It's easier to correlate the effects of
| | | |       kernel events on litmus tasks.
| | | |     - It enables a quick way to visualize and process schedule traces
| | | |       using the trace-cmd utility and the kernelshark visualizer.
| | | |       Kernelshark lacks unit-trace's schedule-correctness checks, but
| | | |       it enables a fast view of schedule traces and has several
| | | |       filtering options (for all kernel events, not only Litmus').
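For illustration, a hedged sketch of what one such kernel-style event looks like in the TRACE_EVENT() style of include/trace/events/litmus.h; the event name and fields below are assumptions, not the exact ones added by this commit, and the usual define_trace.h boilerplate is omitted:

```c
#undef TRACE_SYSTEM
#define TRACE_SYSTEM litmus

#include <linux/tracepoint.h>

/* Illustrative event: fired when a LITMUS^RT job is released. */
TRACE_EVENT(litmus_task_release,

	TP_PROTO(struct task_struct *t),

	TP_ARGS(t),

	TP_STRUCT__entry(
		__field(pid_t,        pid)
		__field(unsigned int, job)
	),

	TP_fast_assign(
		__entry->pid = t ? t->pid : 0;
		__entry->job = t ? t->rt_param.job_params.job_no : 0;
	),

	TP_printk("release(job(%u, %d))", __entry->job, __entry->pid)
);
```

Such events can then be captured together with stock scheduler events, e.g. via trace-cmd record -e litmus -e sched:sched_switch, and browsed in kernelshark.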
| * | | Separated b and c overheads (Jonathan Herman, 2011-12-27)
| | | |
| * | | Merge branch 'wip-mc' of ssh://cvs.cs.unc.edu/cvs/proj/litmus/repo/litmus2010 into wip-mc (Jonathan Herman, 2011-10-13)
| |\ \ \
| | * | | add level-A scheduling event (Christopher Kenna, 2011-10-11)
| | | | |
| * | | | Fixed very obscure race condition (Jonathan Herman, 2011-10-13)
| |/ / /
| * | | Adding events is now accomplished in O(1) time (Jonathan Herman, 2011-10-10)
| | | |
| * | | Fixed a timer bug and implemented finish switch. (Jonathan Herman, 2011-10-09)
| | | |     The logic for cancelling a remote timer allowed timers to be
| | | |     enqueued twice in the pull list. A method hrtimer_pull_cancel()
| | | |     was added to hrtimer.c to cancel the timers properly.
| | | |
| | | |     Finish-switch was added to fix a stack bug. This required a change
| | | |     to the global_preempt check in update_crit_levels.
| * | | Allow scheduler to 'logically' remove crit_entry's from level-C heap. (Jonathan Herman, 2011-10-08)
| | | |     This required fixes to the hrtimer_start_on code so that events
| | | |     can be cancelled or re-armed while an hrtimer pull is in progress.
| * | | Fixed timer issue and atomic remove issue in level A domain. (Jonathan Herman, 2011-10-08)
| | | |     Timers had an issue where they couldn't be cancelled before they
| | | |     migrated. Now when you set the start_on_info to inactive, it will
| | | |     prevent a timer from being armed.
| | | |
| | | |     When a task is being blocked and preempted concurrently, the
| | | |     blocking code needs to be able to prevent the task from being
| | | |     scheduled on another CPU. This did not work for CE domains. Added
| | | |     a per-domain remove function which, for CE domains, will prevent a
| | | |     task from being returned by the domain.
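A hedged sketch of the per-domain remove hook described here; the struct layout, the data and scheduled fields, and ce_remove() are assumptions about the wip-mc code, not its exact API:

```c
#include <linux/sched.h>

/* Assumed shape of a scheduling domain with a pluggable remove hook. */
struct domain {
	void *data;			/* plugin-private state */
	void (*remove)(struct domain *dom, struct task_struct *t);
	/* ... ready queue, locks, other ops ... */
};

struct ce_dom_data {			/* hypothetical CE-domain state */
	struct task_struct *scheduled;	/* task the CE table would return */
};

/*
 * For cyclic-executive (CE) domains: forget the task so the domain can
 * never hand it back to the scheduler, even if a preemption races with
 * the blocking code.
 */
static void ce_remove(struct domain *dom, struct task_struct *t)
{
	struct ce_dom_data *ce = dom->data;

	if (ce->scheduled == t)
		ce->scheduled = NULL;	/* never return t from this domain */
}
```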
* | | | Merge LITMUS^RT staging (as of the time of this commit). (Christopher Kenna, 2012-09-29)
|\ \ \ \
| | |_|/
| |/| |     Conflicts:
| | | |         Makefile
| | | |         include/linux/sched.h
| * | | Do processor state transitions in schedule_tail(). (Glenn Elliott, 2012-09-21)
| | | |     Fixes a bug in Litmus where processor scheduling states could
| | | |     become corrupted. Corruption can occur when a just-forked thread
| | | |     is externally forced to be scheduled by SCHED_LITMUS before this
| | | |     just-forked thread can complete post-fork processing;
| | | |     specifically, before schedule_tail() has completed.
| * | | Add kernel-style events for sched_trace_XXX() functions (Andrea Bastoni, 2012-03-30)
| | |/
| |/|     Enable kernel-style events (tracepoints) for Litmus. Litmus events
| | |     trace the same functions as sched_trace_XXX(), but can be enabled
| | |     independently.
| | |
| | |     So, why another tracing infrastructure:
| | |     - Litmus tracepoints can be recorded and analyzed together (single
| | |       time reference) with all other kernel tracing events (e.g.,
| | |       sched:sched_switch). It's easier to correlate the effects of
| | |       kernel events on litmus tasks.
| | |     - It enables a quick way to visualize and process schedule traces
| | |       using the trace-cmd utility and the kernelshark visualizer.
| | |       Kernelshark lacks unit-trace's schedule-correctness checks, but
| | |       it enables a fast view of schedule traces and has several
| | |       filtering options (for all kernel events, not only Litmus').
| * | Prevent Linux from sending IPIs and queueing tasks on remote CPUs. (Andrea Bastoni, 2011-08-27)
| | |     Whether to send IPIs and enqueue tasks on remote runqueues is
| | |     plugin-specific. The recent ttwu_queue() mechanism (by calling
| | |     ttwu_queue_remote()) interferes with Litmus plugin decisions.
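A hedged sketch of the idea against the v3.0 ttwu_queue() in kernel/sched.c; the is_realtime() guard is an assumption about how the bypass is expressed, not the verbatim patch:

```c
static void ttwu_queue(struct task_struct *p, int cpu)
{
	struct rq *rq = cpu_rq(cpu);

#if defined(CONFIG_SMP)
	/*
	 * LITMUS^RT tasks never take the remote-queueing IPI shortcut;
	 * the active plugin decides where and when they are enqueued.
	 */
	if (!is_realtime(p) &&
	    sched_feat(TTWU_QUEUE) && cpu != smp_processor_id()) {
		ttwu_queue_remote(p, cpu);
		return;
	}
#endif

	raw_spin_lock(&rq->lock);
	ttwu_do_activate(rq, p, 0);
	raw_spin_unlock(&rq->lock);
}
```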
| * | Merge 'Linux v3.0' into Litmus (Andrea Bastoni, 2011-08-27)
| |\ \
| | |/
| |/|     Some notes:
| | |     * The Litmus^RT scheduling class is the topmost scheduling class
| | |       (above stop_sched_class).
| | |     * The scheduler_ipi() function (e.g., in smp_reschedule_interrupt())
| | |       may increase IPI latencies.
| | |     * Added a path into schedule() to quickly re-evaluate the
| | |       scheduling decision without becoming preemptive again. This used
| | |       to be a standard path before the removal of the BKL.
| | |
| | |     Conflicts:
| | |         Makefile
| | |         arch/arm/kernel/calls.S
| | |         arch/arm/kernel/smp.c
| | |         arch/x86/include/asm/unistd_32.h
| | |         arch/x86/kernel/smp.c
| | |         arch/x86/kernel/syscall_table_32.S
| | |         include/linux/hrtimer.h
| | |         kernel/printk.c
| | |         kernel/sched.c
| | |         kernel/sched_fair.c
| * | Litmus core: replace FMLP & SRP system calls with generic syscalls (Bjoern B. Brandenburg, 2011-02-01)
| | |     This renders the FMLP and SRP non-functional until they are ported
| | |     to the new locking API.
| * | bugfix: don't let children stay Litmus real-time tasks (Bjoern B. Brandenburg, 2011-01-30)
| | |     It has always been LITMUS^RT policy that children of real-time
| | |     tasks may not skip the admissions test, etc. This used to be
| | |     enforced, but was apparently dropped during some port. This commit
| | |     re-introduces this policy. It fixes a kernel panic that occurred
| | |     when "real-time children" exited without proper initialization.
| * | Workaround: do not set rq->skip_clock_update (Bjoern B. Brandenburg, 2010-11-11)
| | |     Disabling the clock update seems to be causing problems even in
| | |     normal Linux, and causes major bugs under LITMUS^RT. As a
| | |     workaround, just disable this "optimization" for now.
| | |
| | |     Details: the idle load balancer causes tasks that suspend to be
| | |     marked with set_tsk_need_resched(). When such a task resumes, it
| | |     may wrongly trigger the setting of skip_clock_update. However, a
| | |     corresponding rescheduling event may not happen immediately, such
| | |     that the currently-scheduled task is no longer charged for its
| | |     execution time.
| * | Implement proper remote preemption support (Bjoern B. Brandenburg, 2010-11-11)
| | |     To date, Litmus has just hooked into the smp_send_reschedule() IPI
| | |     handler and marked tasks as having to reschedule to implement
| | |     remote preemptions. This was never particularly clean, but so far
| | |     we got away with it. However, changes in the underlying Linux, and
| | |     peculiarities of the ARM code (interrupts enabled before context
| | |     switch), break this naive approach.
| | |
| | |     This patch introduces new state-machine-based remote preemption
| | |     support. By examining the local state before calling
| | |     set_tsk_need_resched(), we avoid confusing the underlying Linux
| | |     scheduler. Further, this patch avoids sending unnecessary IPIs.
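A hedged sketch of such a per-CPU preemption state machine (litmus/preempt.c); the state names follow the LITMUS^RT sources, but sched_state_transition_on() is an assumed helper and details are elided:

```c
/* Possible scheduler states of a CPU, encoded as bits. */
enum sched_state {
	TASK_SCHEDULED    = (1 << 0),	/* no preemption pending */
	SHOULD_SCHEDULE   = (1 << 1),	/* preemption requested */
	WILL_SCHEDULE     = (1 << 2),	/* CPU acknowledged, will reschedule */
	TASK_PICKED       = (1 << 3),	/* plugin picked the next task */
	PICKED_WRONG_TASK = (1 << 4),	/* pick went stale; schedule again */
};

/*
 * Request a reschedule on a (possibly remote) CPU. Only the transition
 * TASK_SCHEDULED -> SHOULD_SCHEDULE triggers work; in any other state a
 * reschedule is already in flight, so no redundant IPI is sent and
 * set_tsk_need_resched() is never called behind Linux's back.
 */
void litmus_reschedule(int cpu)
{
	if (sched_state_transition_on(cpu, TASK_SCHEDULED, SHOULD_SCHEDULE)) {
		if (cpu == smp_processor_id())
			set_tsk_need_resched(current);
		else
			smp_send_reschedule(cpu);
	}
}
```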
| * | hook litmus tick function into hrtimer-driven ticks (Bjoern B. Brandenburg, 2010-11-11)
| | |     Litmus plugins should also be activated if ticks are triggered by
| | |     hrtimer.
| * | Merge commit 'v2.6.36' into wip-merge-2.6.36 (Andrea Bastoni, 2010-10-23)
| |\ \
| | | |     Conflicts:
| | | |         Makefile
| | | |         arch/x86/include/asm/unistd_32.h
| | | |         arch/x86/kernel/syscall_table_32.S
| | | |         kernel/sched.c
| | | |         kernel/time/tick-sched.c
| | | |
| | | |     Relevant API and function changes (resolved in this commit):
| | | |     - (API) .enqueue_task() (enqueue_task_litmus), .dequeue_task()
| | | |       (dequeue_task_litmus) [litmus/sched_litmus.c]
| | | |     - (API) .select_task_rq() (select_task_rq_litmus)
| | | |       [litmus/sched_litmus.c]
| | | |     - (API) sysrq_dump_trace_buffer() and sysrq_handle_kill_rt_tasks()
| | | |       [litmus/sched_trace.c]
| | | |     - struct kfifo internal buffer name changed (buffer -> buf)
| | | |       [litmus/sched_trace.c]
| | | |     - add_wait_queue_exclusive_locked ->
| | | |       __add_wait_queue_tail_exclusive [litmus/fmlp.c]
| | | |     - syscall numbers for both x86_32 and x86_64
| * | | hrtimer: add init function to properly set hrtimer_start_on_info params (Andrea Bastoni, 2010-10-19)
| | | |     This helper function is also useful to remind us that if we use
| | | |     hrtimer_pull outside the scope of triggering remote releases, we
| | | |     need to take care to properly set the "state" field of the
| | | |     hrtimer_start_on_info structure.
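A minimal sketch of what such an init helper plausibly looks like, assuming the state values of the LITMUS^RT hrtimer_start_on() patch (HRTIMER_START_ON_INACTIVE); not necessarily the verbatim helper:

```c
#include <linux/hrtimer.h>
#include <linux/string.h>

static inline void
hrtimer_start_on_info_init(struct hrtimer_start_on_info *info)
{
	/*
	 * Zero everything, then mark the info explicitly inactive so a
	 * concurrent hrtimer pull cannot arm a stale timer.
	 */
	memset(info, 0, sizeof(struct hrtimer_start_on_info));
	atomic_set(&info->state, HRTIMER_START_ON_INACTIVE);
}
```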
| * | | Make smp_send_pull_timers() optional. (Bjoern B. Brandenburg, 2010-06-01)
| | | |     There is currently no need to implement this on ARM, so let's
| | | |     make it optional instead.
| * | | Change most LITMUS^RT spinlock_t into raw_spinlock_t (Andrea Bastoni, 2010-05-29)
| | | |     Adapt to the new schema for spinlocks (tglx, 2009-12-17):
| | | |
| | | |       spinlock      - the weakest one, which might sleep in RT
| | | |       raw_spinlock  - spinlock which always spins, even on RT
| | | |       arch_spinlock - the hardware-level, architecture-dependent
| | | |                       implementation
| | | |
| | | |     Most probably, all the spinlocks changed by this commit will be
| | | |     true spinning locks (raw_spinlock) in PreemptRT, so hopefully
| | | |     we'll need few changes when porting Litmus to PreemptRT.
| | | |
| | | |     There are a couple of spinlocks that the kernel still defines as
| | | |     spinlock_t (therefore no changes reported in this commit) that
| | | |     might cause us trouble:
| | | |
| | | |     - wait_queue_t lock is defined as spinlock_t; it is used in:
| | | |       * fmlp.c -- sem->wait.lock
| | | |       * sync.c -- ts_release.wait.lock
| | | |     - rwlock_t used in the FIFO implementation in sched_trace.c
| | | |       * this probably needs to be changed to something that always
| | | |         spins in RT, at the expense of increased locking time
| | | |
| | | |     This commit also fixes warnings and errors due to the need to
| | | |     include slab.h when using kmalloc() and friends.
| | | |
| | | |     This commit does not compile.
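For illustration, the mechanical shape of the conversion; release_lock is a made-up name, not a specific lock touched by this commit:

```c
#include <linux/spinlock.h>

/* before: static DEFINE_SPINLOCK(release_lock); -- may sleep under RT */
static DEFINE_RAW_SPINLOCK(release_lock);	/* always spins, even in RT */

static void queue_release_example(void)
{
	unsigned long flags;

	/* before: spin_lock_irqsave(&release_lock, flags); */
	raw_spin_lock_irqsave(&release_lock, flags);
	/* ... scheduler-path critical section ... */
	raw_spin_unlock_irqrestore(&release_lock, flags);
}
```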
| * | | Merge branch 'master' into wip-merge-2.6.34 (Andrea Bastoni, 2010-05-29)
| |\ \ \
| | | | |   Simple merge between master and 2.6.34, with conflicts resolved.
| | | | |
| | | | |   This commit does not compile; the following main problems are
| | | | |   still unresolved:
| | | | |   - spinlock -> raw_spinlock API changes
| | | | |   - kfifo API changes
| | | | |   - sched_class API changes
| | | | |
| | | | |   Conflicts:
| | | | |       Makefile
| | | | |       arch/x86/include/asm/hw_irq.h
| | | | |       arch/x86/include/asm/unistd_32.h
| | | | |       arch/x86/kernel/syscall_table_32.S
| | | | |       include/linux/hrtimer.h
| | | | |       kernel/sched.c
| | | | |       kernel/sched_fair.c
| | * | | Measure timer re-arming in the proper location (Andrea Bastoni, 2010-05-29)
| | | | |   hrtimers are properly re-armed during arm_release_timer() and no
| | | | |   longer after rescheduling (with the norqlock mechanism of
| | | | |   2008.3). This commit accordingly updates the locations where
| | | | |   measurements are taken.
| | * | | Bugfix: clear LITMUS^RT state on fork completely (Bjoern B. Brandenburg, 2010-05-29)
| | | | |   When a real-time task forks, its LITMUS^RT-specific fields should
| | | | |   be cleared, because we don't want real-time tasks to spawn new
| | | | |   real-time tasks that bypass the plugin's admission control (if
| | | | |   any). This was broken in three ways:
| | | | |
| | | | |   1) kernel/fork.c did not erase all of tsk->rt_param, only the
| | | | |      first few bytes, due to a wrong size argument to memset().
| | | | |   2) It should have been calling litmus_fork() instead anyway.
| | | | |   3) litmus_fork() was _also_ not clearing all of tsk->rt_param,
| | | | |      due to another size-argument bug.
| | | | |
| | | | |   Interestingly, 1) and 2) can be traced back to the 2007->2008
| | | | |   port, whereas 3) was added by Mitchell much later on (to dead
| | | | |   code, no less). I'm really surprised that this never blew up
| | | | |   before.
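A sketch of the class of bug in 1) and 3): passing the size of a single member to memset() leaves most of rt_param holding stale data. The flags member name is a placeholder:

```c
#include <linux/string.h>

static void clear_rt_state(struct task_struct *tsk)
{
	/* buggy: clears only the first member of rt_param */
	/* memset(&tsk->rt_param, 0, sizeof(tsk->rt_param.flags)); */

	/* fixed: clear the entire structure */
	memset(&tsk->rt_param, 0, sizeof(tsk->rt_param));
}
```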
| | * | | Better explanation of jump-to-CFS optimization removal (Bjoern B. Brandenburg, 2010-05-29)
| | | | |   GSN-EDF and friends rely on being called even if there is
| | | | |   currently no runnable real-time task on the runqueue, for (at
| | | | |   least) two reasons:
| | | | |
| | | | |   1) To initiate migrations. LITMUS^RT pulls tasks for migrations;
| | | | |      this requires plugins to be called even if no task is
| | | | |      currently present.
| | | | |   2) To maintain invariants when jobs block.
| | * | | Integrate litmus_tick() in task_tick_litmus() (Andrea Bastoni, 2010-05-29)
| | | | |   - Remove the call to litmus_tick() from scheduler_tick() just
| | | | |     after having performed the class task_tick(), and integrate
| | | | |     litmus_tick() in task_tick_litmus().
| | | | |   - task_tick_litmus() is the handler for the litmus class
| | | | |     task_tick() method. It is called in non-queued mode from
| | | | |     scheduler_tick().
| | * | | [ported from 2008.3] Add release-master support (Andrea Bastoni, 2010-05-29)
| | | | |
| | * | | [ported from 2008.3] Add hrtimer_start_on() API (Andrea Bastoni, 2010-05-29)
| | | | |
| | * | | [ported from 2008.3] Add Stack Resource Policy (SRP) support (Andrea Bastoni, 2010-05-29)
| | | | |
| | * | | [ported from 2008.3] Add File Descriptor Attached Shared Objects (FDSO) infrastructure (Andrea Bastoni, 2010-05-29)
| | | | |
| | * | | [ported from 2008.3] Add support for quantum alignment (Andrea Bastoni, 2010-05-29)
| | | | |
| | * | | [ported from 2008.3] Add complete_n() call (Andrea Bastoni, 2010-05-29)
| | | | |
| | * | | [ported from 2008.3] Add tracing support and hook up Litmus KConfig for x86 (Andrea Bastoni, 2010-05-29)
| | | | |   - Fix requesting more than 2^11 pages (MAX_ORDER) from the
| | | | |     system allocator.
| | | | |
| | | | |   Still to be merged:
| | | | |   - feather-trace generic implementation
| | * | | [ported from 2008.3] Core LITMUS^RT infrastructure (Andrea Bastoni, 2010-05-29)
| | | | |   Port the 2008.3 core LITMUS^RT infrastructure to Linux 2.6.32.
| | | | |
| | | | |   litmus_sched_class implements 4 new methods:
| | | | |   - prio_changed: void
| | | | |   - switched_to: void
| | | | |   - get_rr_interval: return infinity (i.e., 0)
| | | | |   - select_task_rq: return current cpu
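A hedged sketch of those four stubs in the style of a 2.6.32-era struct sched_class; the function bodies are assumptions consistent with the summary above, and surrounding members are omitted:

```c
static void prio_changed_litmus(struct rq *rq, struct task_struct *p,
				int oldprio, int running)
{
	/* priority changes are handled by the active plugin: nothing to do */
}

static void switched_to_litmus(struct rq *rq, struct task_struct *p,
			       int running)
{
	/* admission happens through LITMUS^RT system calls: nothing to do */
}

static unsigned int get_rr_interval_litmus(struct rq *rq,
					   struct task_struct *p)
{
	return 0;	/* "infinity": LITMUS^RT tasks are not round-robin */
}

static int select_task_rq_litmus(struct task_struct *p, int sd_flag,
				 int flags)
{
	return task_cpu(p);	/* stay put; the plugin handles migration */
}

static const struct sched_class litmus_sched_class = {
	/* ... enqueue_task, dequeue_task, pick_next_task, ... */
	.prio_changed		= prio_changed_litmus,
	.switched_to		= switched_to_litmus,
	.get_rr_interval	= get_rr_interval_litmus,
	.select_task_rq		= select_task_rq_litmus,
};
```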
* | | | | Apply k4412 kernel from HardKernel for ODROID-X. (Christopher Kenna, 2012-09-28)
| | | | |
* | | | | random: remove rand_initialize_irq() (Theodore Ts'o, 2012-08-15)
| | | | |   commit c5857ccf293968348e5eb4ebedc68074de3dcda6 upstream.
| | | | |
| | | | |   With the new interrupt sampling system, we are no longer using
| | | | |   the timer_rand_state structure in the irq descriptor, so we can
| | | | |   stop initializing it now.
| | | | |
| | | | |   [ Merged in fixes from Sedat to find some last missing references
| | | | |     to rand_initialize_irq() ]
| | | | |
| | | | |   Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
| | | | |   Signed-off-by: Sedat Dilek <sedat.dilek@gmail.com>
| | | | |   Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* | | | | random: make 'add_interrupt_randomness()' do something sane (Theodore Ts'o, 2012-08-15)
| | | | |   commit 775f4b297b780601e61787b766f306ed3e1d23eb upstream.
| | | | |
| | | | |   We've been moving away from add_interrupt_randomness() for
| | | | |   various reasons: it's too expensive to do on every interrupt, and
| | | | |   flooding the CPU with interrupts could theoretically cause bogus
| | | | |   floods of entropy from a somewhat externally controllable source.
| | | | |
| | | | |   This solves both problems by limiting the actual randomness
| | | | |   addition to just once a second or after 64 interrupts, whichever
| | | | |   comes first. During that time, the interrupt cycle data is
| | | | |   buffered up in a per-cpu pool.
| | | | |
| | | | |   Also, we make sure that the nonblocking pool used by urandom is
| | | | |   initialized before we start feeding the normal input pool. This
| | | | |   assures that /dev/urandom is returning unpredictable data as soon
| | | | |   as possible.
| | | | |
| | | | |   (Based on an original patch by Linus, but significantly modified
| | | | |   by tytso.)
| | | | |
| | | | |   Tested-by: Eric Wustrow <ewust@umich.edu>
| | | | |   Reported-by: Eric Wustrow <ewust@umich.edu>
| | | | |   Reported-by: Nadia Heninger <nadiah@cs.ucsd.edu>
| | | | |   Reported-by: Zakir Durumeric <zakir@umich.edu>
| | | | |   Reported-by: J. Alex Halderman <jhalderm@umich.edu>
| | | | |   Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| | | | |   Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
| | | | |   Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
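A hedged sketch of the buffering scheme described above; the real drivers/char/random.c code differs in structure and detail (e.g., how the cycle counter is read and how entropy is credited):

```c
struct fast_pool {			/* per-CPU staging buffer */
	__u32		pool[4];
	unsigned long	last;		/* jiffies of last pool extraction */
	unsigned short	count;		/* interrupts since last extraction */
};

static DEFINE_PER_CPU(struct fast_pool, irq_randomness);

void add_interrupt_randomness(int irq, int irq_flags)
{
	struct fast_pool *fast_pool = &__get_cpu_var(irq_randomness);
	__u32 cycles = (__u32)get_cycles();	/* cheap per-interrupt sample */

	fast_pool->pool[fast_pool->count++ & 3] ^= cycles ^ irq;

	/* Buffer until 64 interrupts have passed or a second has elapsed. */
	if (fast_pool->count < 64 &&
	    !time_after(jiffies, fast_pool->last + HZ))
		return;

	fast_pool->last = jiffies;
	fast_pool->count = 0;
	/* ... mix fast_pool->pool into the input pool, credit entropy ... */
}
```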
* | | | | futex: Forbid uaddr == uaddr2 in futex_wait_requeue_pi() (Darren Hart, 2012-08-09)
| | | | |   commit 6f7b0a2a5c0fb03be7c25bd1745baa50582348ef upstream.
| | | | |
| | | | |   If uaddr == uaddr2, then we have broken the rule of only
| | | | |   requeueing from a non-pi futex to a pi futex with this call. If
| | | | |   we attempt this, as the trinity test suite manages to do, we miss
| | | | |   early wakeups as q.key is equal to key2 (because they are the
| | | | |   same uaddr). We will then attempt to dereference the pi_mutex
| | | | |   (which would exist had the futex_q been properly requeued to a
| | | | |   pi futex) and trigger a NULL pointer dereference.
| | | | |
| | | | |   Signed-off-by: Darren Hart <dvhart@linux.intel.com>
| | | | |   Cc: Dave Jones <davej@redhat.com>
| | | | |   Link: http://lkml.kernel.org/r/ad82bfe7f7d130247fbe2b5b4275654807774227.1342809673.git.dvhart@linux.intel.com
| | | | |   Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| | | | |   Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
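The guard this adds near the top of futex_wait_requeue_pi() in kernel/futex.c, sketched with surrounding code elided:

```c
static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
				 u32 val, ktime_t *abs_time, u32 bitset,
				 u32 __user *uaddr2)
{
	/* Requeueing a futex to itself would mix pi and non-pi semantics. */
	if (uaddr == uaddr2)
		return -EINVAL;
	/* ... */
}
```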
* | | | | futex: Fix bug in WARN_ON for NULL q.pi_state (Darren Hart, 2012-08-09)
| | | | |   commit f27071cb7fe3e1d37a9dbe6c0dfc5395cd40fa43 upstream.
| | | | |
| | | | |   The WARN_ON in futex_wait_requeue_pi() for a NULL q.pi_state was
| | | | |   testing the address (&q.pi_state) of the pointer instead of the
| | | | |   value (q.pi_state) of the pointer. Correct it accordingly.
| | | | |
| | | | |   Signed-off-by: Darren Hart <dvhart@linux.intel.com>
| | | | |   Cc: Dave Jones <davej@redhat.com>
| | | | |   Link: http://lkml.kernel.org/r/1c85d97f6e5f79ec389a4ead3e367363c74bd09a.1342809673.git.dvhart@linux.intel.com
| | | | |   Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| | | | |   Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
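The one-character nature of the bug, sketched: the address of a stack field is never NULL, so the old assertion could never fire.

```c
/* before: &q.pi_state is an always-valid address, so this never warns */
WARN_ON(!&q.pi_state);

/* after: tests the pointer's value, as intended */
WARN_ON(!q.pi_state);
```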
* | | | | futex: Test for pi_mutex on fault in futex_wait_requeue_pi() (Darren Hart, 2012-08-09)
| | | | |   commit b6070a8d9853eda010a549fa9a09eb8d7269b929 upstream.
| | | | |
| | | | |   If fixup_pi_state_owner() faults, pi_mutex may be NULL. Test for
| | | | |   pi_mutex != NULL before testing the owner against current and
| | | | |   possibly unlocking it.
| | | | |
| | | | |   Signed-off-by: Darren Hart <dvhart@linux.intel.com>
| | | | |   Cc: Dave Jones <davej@redhat.com>
| | | | |   Cc: Dan Carpenter <dan.carpenter@oracle.com>
| | | | |   Link: http://lkml.kernel.org/r/dc59890338fc413606f04e5c5b131530734dae3d.1342809673.git.dvhart@linux.intel.com
| | | | |   Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| | | | |   Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
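A sketch of the fixed fault path in futex_wait_requeue_pi(), with the surrounding context elided:

```c
	/*
	 * If fixup_pi_state_owner() faulted and was unable to handle the
	 * fault, unlock the rt_mutex and return the fault to userspace.
	 */
	if (ret == -EFAULT) {
		if (pi_mutex && rt_mutex_owner(pi_mutex) == current)
			rt_mutex_unlock(pi_mutex);
	}
```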
* | | | | workqueue: perform cpu down operations from low priority cpu_notifier() (Tejun Heo, 2012-08-09)
| | | | |   commit 6575820221f7a4dd6eadecf7bf83cdd154335eda upstream.
| | | | |
| | | | |   Currently, all workqueue cpu hotplug operations run off
| | | | |   CPU_PRI_WORKQUEUE which is higher than normal notifiers. This is
| | | | |   to ensure that workqueue is up and running while bringing up a
| | | | |   CPU before other notifiers try to use workqueue on the CPU.
| | | | |
| | | | |   Per-cpu workqueues are supposed to remain working and bound to
| | | | |   the CPU for normal CPU_DOWN_PREPARE notifiers. This holds mostly
| | | | |   true even with workqueue offlining running with higher priority
| | | | |   because workqueue CPU_DOWN_PREPARE only creates a bound trustee
| | | | |   thread which runs the per-cpu workqueue without concurrency
| | | | |   management without explicitly detaching the existing workers.
| | | | |
| | | | |   However, if the trustee needs to create new workers, it creates
| | | | |   unbound workers which may wander off to other CPUs while
| | | | |   CPU_DOWN_PREPARE notifiers are in progress. Furthermore, if the
| | | | |   CPU down is cancelled, the per-CPU workqueue may end up with
| | | | |   workers which aren't bound to the CPU.
| | | | |
| | | | |   While reliably reproducible with a convoluted artificial
| | | | |   test-case involving scheduling and flushing CPU burning work
| | | | |   items from CPU down notifiers, this isn't very likely to happen
| | | | |   in the wild, and, even when it happens, the effects are likely
| | | | |   to be hidden by the following successful CPU down.
| | | | |
| | | | |   Fix it by using different priorities for up and down notifiers -
| | | | |   high priority for up operations and low priority for down
| | | | |   operations. Workqueue cpu hotplug operations will soon go
| | | | |   through further cleanup.
| | | | |
| | | | |   Signed-off-by: Tejun Heo <tj@kernel.org>
| | | | |   Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
| | | | |   Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
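A hedged sketch of the priority split (include/linux/cpu.h and kernel/workqueue.c); the callback names are assumptions consistent with the commit summary:

```c
/* include/linux/cpu.h: run workqueue bring-up early, tear-down late */
enum {
	/* ... other CPU_PRI_* values ... */
	CPU_PRI_WORKQUEUE_UP	= 5,	/* before normal (0) notifiers */
	CPU_PRI_WORKQUEUE_DOWN	= -5,	/* after normal notifiers */
};

/* kernel/workqueue.c: register separate up and down callbacks */
static int __init init_workqueues(void)
{
	cpu_notifier(workqueue_cpu_up_callback, CPU_PRI_WORKQUEUE_UP);
	hotcpu_notifier(workqueue_cpu_down_callback, CPU_PRI_WORKQUEUE_DOWN);
	/* ... */
	return 0;
}
```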