path: root/kernel/sched.c
* use the new percpu interface for shared data  (Fenghua Yu, 2007-07-19)
    Currently most of the per cpu data, which is accessed by different cpus, has a
    ____cacheline_aligned_in_smp attribute. Move all this data to the new per cpu shared data
    section: .data.percpu.shared_aligned. This will separate the percpu data which is referenced
    frequently by other cpus from the local-only percpu data.
    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
    Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Christoph Lameter <clameter@sgi.com>
    Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Andi Kleen <ak@suse.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
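    A minimal sketch of the kind of conversion this implies; struct rq here is only a stand-in
    for the real runqueue type in kernel/sched.c, so the snippet is self-contained:

        #include <linux/percpu.h>
        #include <linux/cache.h>

        struct rq { unsigned long nr_running; /* ... */ };	/* stand-in for the real runqueue */

        /* before: shared per-cpu data, cacheline-aligned by hand:
         *   static DEFINE_PER_CPU(struct rq, runqueues) ____cacheline_aligned_in_smp;
         * after: the new helper places it in .data.percpu.shared_aligned: */
        static DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);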
* Freezer: make kernel threads nonfreezable by default  (Rafael J. Wysocki, 2007-07-17)
    Currently, the freezer treats all tasks as freezable, except for the kernel threads that
    explicitly set the PF_NOFREEZE flag for themselves. This approach is problematic, since it
    requires every kernel thread to either set PF_NOFREEZE explicitly, or call try_to_freeze(),
    even if it doesn't care for the freezing of tasks at all.
    It seems better to only require the kernel threads that want to or need to be frozen to use
    some freezer-related code and to remove any freezer-related code from the other
    (nonfreezable) kernel threads, which is done in this patch.
    The patch causes all kernel threads to be nonfreezable by default (ie. to have PF_NOFREEZE
    set by default) and introduces the set_freezable() function that should be called by the
    freezable kernel threads in order to unset PF_NOFREEZE. It also makes all of the currently
    freezable kernel threads call set_freezable(), so it shouldn't cause any (intentional) change
    of behaviour to appear. Additionally, it updates documentation to describe the freezing of
    tasks more accurately.
    [akpm@linux-foundation.org: build fixes]
    Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Acked-by: Nigel Cunningham <nigel@nigel.suspend2.net>
    Cc: Pavel Machek <pavel@ucw.cz> Cc: Oleg Nesterov <oleg@tv-sign.ru> Cc: Gautham R Shenoy <ego@in.ibm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
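    A hedged sketch of what a kernel thread that still wants to be frozen looks like after this
    change; the thread body and its sleep interval are made up for illustration, only
    set_freezable() and try_to_freeze() are the interfaces the patch talks about:

        #include <linux/kthread.h>
        #include <linux/freezer.h>
        #include <linux/sched.h>

        static int example_thread_fn(void *unused)
        {
            set_freezable();                /* opt back in: threads are now PF_NOFREEZE by default */

            while (!kthread_should_stop()) {
                try_to_freeze();            /* park here while the system suspends */
                /* ... the thread's real work would go here ... */
                schedule_timeout_interruptible(HZ);
            }
            return 0;
        }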
* [PATCH] sched: prettify prio_to_wmult[]  (Ingo Molnar, 2007-07-16)
    prettify the prio_to_wmult[] array. (this could have saved us from the typos)
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* [PATCH] sched: document prio_to_wmult[]  (Ingo Molnar, 2007-07-16)
    document prio_to_wmult[].
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* [PATCH] sched: improve weight-array comments  (Ingo Molnar, 2007-07-16)
    improve the comments around the wmult array (which controls the weight of niced tasks).
    Clarify that to achieve a 10% difference in CPU utilization, a weight multiplier of 1.25 has
    to be used.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
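    The arithmetic behind that 1.25 figure, as a small stand-alone calculation (not kernel
    code): two tasks one nice level apart split the CPU roughly 55%/45%, i.e. about a 10% shift
    in utilization per nice level.

        #include <stdio.h>

        int main(void)
        {
            double w_hi = 1.25, w_lo = 1.0;   /* relative weights, one nice level apart */
            double total = w_hi + w_lo;

            printf("heavier task: %.1f%% of the CPU\n", 100.0 * w_hi / total); /* ~55.6% */
            printf("lighter task: %.1f%% of the CPU\n", 100.0 * w_lo / total); /* ~44.4% */
            return 0;
        }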
* CFS: Fix missing digit off in wmult table  (Thomas Gleixner, 2007-07-13)
    Roman Zippel noticed another inconsistency of the wmult table. wmult[16] has a missing digit.
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [PATCH] sched: fix show_task()/show_tasks() output  (Ingo Molnar, 2007-07-13)
    fix show_task()/show_tasks() output:
    - there's no sibling info anymore
    - the fields were not aligned properly with the description
    - get rid of the lazy-TLB output: it's been quite some time since we last had a bug there,
      and when we had a bug it wasn't helped a bit by this debug output.
    Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [PATCH] sched: allow larger granularity  (Ingo Molnar, 2007-07-13)
    Allow granularity up to 100 msecs, instead of 10 msecs. (needed on larger boxes)
    Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [PATCH] sched: fix prio_to_wmult[] for nice 1  (Mike Galbraith, 2007-07-13)
    There's a typo in the values in prio_to_wmult[] for nice level 1. While it did not cause bad
    CPU distribution, it caused more rescheduling between nice-0 and nice-1 tasks than necessary.
    Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* sched: add CFS credits  (Ingo Molnar, 2007-07-09)
    add credits for recent major scheduler contributions:
    - Con Kolivas, for pioneering the fair-scheduling approach
    - Peter Williams, for smpnice
    - Mike Galbraith, for interactivity tuning of CFS
    - Srivatsa Vaddagiri, for group scheduling enhancements
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: clean up sleep_on() APIs  (Ingo Molnar, 2007-07-09)
    clean up the sleep_on() APIs:
    - do not use fastcall
    - replace fragile macro magic with proper inline functions
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: style cleanups  (Ingo Molnar, 2007-07-09)
    4 small style cleanups to sched.c: checkpatch.pl is now happy about the totality of sched.c
    [ignoring false positives] - yay! ;-)
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: remove unused rq types from sched.c  (Ingo Molnar, 2007-07-09)
    remove unused rq types from sched.c, now that we switched over to CFS.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: remove interactivity types  (Ingo Molnar, 2007-07-09)
    remove the now unused interactivity-heuristics related defines and types of the old
    scheduler.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: clean up include files in sched.c  (Ingo Molnar, 2007-07-09)
    clean up include files in sched.c, they were still old-style <asm/>.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: turn on the use of unstable events  (Ingo Molnar, 2007-07-09)
    make use of sched-clock-unstable events.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: x86, track TSC-unstable events  (Ingo Molnar, 2007-07-09)
    track TSC-unstable events and propagate them to the scheduler code. Also allow sched_clock()
    to be used when the TSC is unstable; the rq_clock() wrapper creates a reliable clock out of
    it.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: cfs core code  (Ingo Molnar, 2007-07-09)
    apply the CFS core code. this change switches over the scheduler core to CFS's modular
    design and makes use of kernel/sched_fair/rt/idletask.c to implement Linux's scheduling
    policies.
    thanks to Andrew Morton and Thomas Gleixner for lots of detailed review feedback and for
    fixlets.
    Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Mike Galbraith <efault@gmx.de>
    Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com> Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
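    A hedged sketch of what "modular design" means here: each policy module fills in a table of
    methods and the core dispatches through it. The field set below follows the general shape of
    such a class table and is not a verbatim copy of the structure this commit introduces.

        struct rq;
        struct task_struct;

        struct sched_class_sketch {
            const struct sched_class_sketch *next;   /* classes consulted in priority order */

            void (*enqueue_task)(struct rq *rq, struct task_struct *p, int wakeup);
            void (*dequeue_task)(struct rq *rq, struct task_struct *p, int sleep);
            struct task_struct *(*pick_next_task)(struct rq *rq);
            void (*task_tick)(struct rq *rq, struct task_struct *p);
        };

        /* sched_fair.c, sched_rt.c and sched_idletask.c would each provide one such
         * table; the core never hard-codes policy decisions itself. */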
* sched: remove the sleep-bonus interactivity code  (Ingo Molnar, 2007-07-09)
    remove the sleep-bonus interactivity code from the core scheduler. scheduling policy is
    implemented in the policy modules, and CFS does not need this type of heuristic.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: remove expired_starving()  (Ingo Molnar, 2007-07-09)
    remove the expired_starving() heuristics from the core scheduler. CFS does not need it, and
    this did not really work well in practice anyway, due to the rq->nr_running multiplier to
    STARVATION_LIMIT.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: remove sleep_type  (Ingo Molnar, 2007-07-09)
    remove the sleep_type heuristics from the core scheduler - scheduling policy is implemented
    in the scheduling-policy modules. (and CFS does not use this type of sleep-type heuristics)
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: cfs, add load-calculation methods  (Ingo Molnar, 2007-07-09)
    add the new load-calculation methods of CFS.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: clean up __normal_prio() position  (Ingo Molnar, 2007-07-09)
    clean up: move __normal_prio() ahead of normal_prio(). no code changed.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: cleanup: move dequeue/enqueue_task()  (Ingo Molnar, 2007-07-09)
    cleanup: move dequeue/enqueue_task() to a more logical place, to not split up
    __normal_prio()/normal_prio().
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: move around resched_task()  (Ingo Molnar, 2007-07-09)
    move resched_task()/resched_cpu() into the 'public interfaces' section of sched.c, for use
    by kernel/sched_fair/rt/idletask.c
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: clean up the rt priority macros  (Ingo Molnar, 2007-07-09)
    clean up the rt priority macros, pointed out by Andrew Morton.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: add cfs_rq ops  (Ingo Molnar, 2007-07-09)
    add the set_task_cfs_rq() abstraction needed by CONFIG_FAIR_GROUP_SCHED. (not activated yet)
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: make posix-cpu-timers use CFS's accounting information  (Ingo Molnar, 2007-07-09)
    update the posix-cpu-timers code to use CFS's CPU accounting information.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: add rq_clock()/__rq_clock()  (Ingo Molnar, 2007-07-09)
    add rq_clock()/__rq_clock(), a robust wrapper around sched_clock(), used by CFS. It protects
    against common types of sched_clock() problems (caused by hardware): time warps forwards and
    backwards.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
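    A hedged sketch of the kind of filtering such a wrapper does: never let the clock step
    backwards, and clamp implausibly large forward jumps. The function name, the single static
    variable (per-runqueue in the real code) and the clamp value are illustrative, not the
    actual rq_clock() implementation.

        #include <linux/sched.h>
        #include <linux/jiffies.h>

        static u64 last_clock;   /* illustrative; kept per-runqueue in the real code */

        static u64 filtered_clock_sketch(void)
        {
            u64 now = sched_clock();
            s64 delta = now - last_clock;

            if (delta < 0)                    /* clock warped backwards: ignore the warp */
                delta = 0;
            else if (delta > 2 * TICK_NSEC)   /* absurd forward jump: clamp it */
                delta = 2 * TICK_NSEC;

            last_clock += delta;
            return last_clock;
        }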
* sched: cfs rq data types  (Ingo Molnar, 2007-07-09)
    add the CFS rq data types to sched.c. (the old scheduler fields are still intact, they are
    removed by a later patch)
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: move code into kernel/sched_stats.h  (Ingo Molnar, 2007-07-09)
    create sched_stats.h and move sched.c schedstats code into it. This cleans up sched.c a bit.
    no code changes are caused by this patch.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: add init_idle_bootup_task()  (Ingo Molnar, 2007-07-09)
    add the init_idle_bootup_task() callback to the bootup thread, unused at the moment. (CFS
    will use it to switch the scheduling class of the boot thread to the idle class)
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: remove sched_exit()  (Ingo Molnar, 2007-07-09)
    remove sched_exit(): the elaborate dance of us trying to recover timeslices given to child
    tasks never really worked. CFS does not need it either.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: uninline set_task_cpu()  (Ingo Molnar, 2007-07-09)
    uninline set_task_cpu(): CFS will add more code to it.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: zap the migration init / cache-hot balancing code  (Ingo Molnar, 2007-07-09)
    the SMP load-balancer uses the boot-time migration-cost estimation code to attempt to
    improve the quality of balancing. The reason for this code is that the discrete priority
    queues do not preserve the order of scheduling accurately, so the load-balancer skips tasks
    that were running on a CPU 'recently'.
    this code is fundamentally fragile: the boot-time migration cost detector doesn't really
    work on systems that have large L3 caches, it caused boot delays on large systems and the
    whole cache-hot concept made the balancing code pretty nondeterministic as well. (and hey,
    i wrote most of it, so i can say it out loud that it sucks ;-)
    under CFS the same purpose of cache affinity can be achieved without any cache-hot
    special-case: tasks are sorted in the 'timeline' tree and the SMP balancer picks tasks from
    the left side of the tree, thus the most cache-cold task is balanced automatically.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
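    A hedged sketch of the "pick from the left side of the tree" idea: the leftmost entry in the
    timeline rbtree is the task that has waited longest, which also makes it the most cache-cold
    candidate for migration. Function and member names below are illustrative rather than the
    exact CFS code.

        #include <linux/rbtree.h>

        struct timeline_entity_sketch {          /* stand-in for the real per-task entity */
            struct rb_node run_node;
        };

        static struct timeline_entity_sketch *leftmost_entity_sketch(struct rb_root *timeline)
        {
            struct rb_node *left = rb_first(timeline);   /* smallest key, i.e. waited longest */

            if (!left)
                return NULL;
            return rb_entry(left, struct timeline_entity_sketch, run_node);
        }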
* sched: rename idle_type/SCHED_IDLE  (Ingo Molnar, 2007-07-09)
    enum idle_type (used by the load-balancer) clashes with the SCHED_IDLE name that we want to
    introduce. 'CPU_IDLE' instead of 'SCHED_IDLE' is more descriptive as well.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: fix next_interval determination in idle_balance()  (Christoph Lameter, 2007-06-24)
    The intervals of domains that do not have SD_BALANCE_NEWIDLE must be considered for the
    calculation of the time of the next balance. Otherwise we may defer rebalancing forever.
    Siddha also spotted that the conversion of the balance interval to jiffies is missing. Fix
    that too.
    From: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
    also continue the loop if !(sd->flags & SD_LOAD_BALANCE).
    Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    It did in fact trigger under all three of mainline, CFS, and -rt including CFS -- see below
    for a couple of emails from last Friday giving results for these three on the AMD box (where
    it happened) and on a single-quad NUMA-Q system (where it did not, at least not with such
    severity).
    Signed-off-by: Christoph Lameter <clameter@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Cc: <stable@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Fix possible runqueue lock starvation in wait_task_inactive()  (Linus Torvalds, 2007-06-18)
    Miklos Szeredi reported very long pauses (several seconds, sometimes more) on his T60 (with
    a Core2Duo) which he managed to track down to wait_task_inactive()'s open-coded busy-loop.
    He observed that an interrupt on one core tries to acquire the runqueue-lock but does not
    succeed in doing so for a very long time - while wait_task_inactive() on the other core
    loops waiting for the first core to deschedule a task (which it won't do while spinning in
    an interrupt handler).
    This rewrites wait_task_inactive() to do all its waiting optimistically without any locks
    taken at all, and then just double-check the end result with the proper runqueue lock held
    over just a very short section. If there were races in the optimistic wait, or a preemption
    event scheduled the process away, we simply re-synchronize, and start over.
    So the code now looks like this:

        repeat:
            /* Unlocked, optimistic looping! */
            rq = task_rq(p);
            while (task_running(rq, p))
                cpu_relax();

            /* Get the *real* values */
            rq = task_rq_lock(p, &flags);
            running = task_running(rq, p);
            array = p->array;
            task_rq_unlock(rq, &flags);

            /* Check them.. */
            if (unlikely(running)) {
                cpu_relax();
                goto repeat;
            }

            /* Preempted away? Yield if so.. */
            if (unlikely(array)) {
                yield();
                goto repeat;
            }

    Basically, that first "while()" loop is done entirely without any locking at all (and
    doesn't check for the case where the target process might have been preempted away), and so
    it's possibly "incorrect", but we don't really care. Both the runqueue used, and the
    "task_running()" check might be the wrong tests, but they won't oops - they just mean that
    we could possibly get the wrong results due to lack of locking and exit the loop early in
    the case of a race condition. So once we've exited the loop, we then get the proper (and
    careful) rq lock, and check the running/runnable state _safely_. And if it turns out that
    our quick-and-dirty and unsafe loop was wrong after all, we just go back and try it all
    again.
    (The patch also adds a lot of comments, which is the actual bulk of it all, to make it more
    obvious why we can do these things without holding the locks).
    Thanks to Miklos for all the testing and tracking it down.
    Tested-by: Miklos Szeredi <miklos@szeredi.hu> Acked-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* sched: fix SysRq-N (normalize RT tasks)  (Ingo Molnar, 2007-06-18)
    Gene Heskett reported the following problem while testing CFS: SysRq-N is not always
    effective in normalizing tasks back to SCHED_OTHER. The reason for that turns out to be the
    following bug:
    - normalize_rt_tasks() uses for_each_process() to iterate through all tasks in the system.
      The problem is, this method does not iterate through all tasks, it iterates through all
      thread groups.
    The proper mechanism to enumerate over all threads is to use a
    do_each_thread() + while_each_thread() loop.
    Reported-by: Gene Heskett <gene.heskett@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
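    A hedged sketch of the corrected iteration pattern; the loop body and the locking shown here
    are illustrative, the point is only the do_each_thread()/while_each_thread() pair that walks
    every thread rather than every thread group:

        #include <linux/sched.h>

        static void walk_all_threads_sketch(void)
        {
            struct task_struct *g, *p;

            read_lock(&tasklist_lock);
            do_each_thread(g, p) {
                /* ... reset p's scheduling policy here ... */
            } while_each_thread(g, p);
            read_unlock(&tasklist_lock);
        }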
* Prevent going idle with softirq pending  (Thomas Gleixner, 2007-05-23)
    The NOHZ patch contains a check for softirqs pending when a CPU goes idle. The BUG is
    unrelated to NOHZ, it just was made visible by the NOHZ patch. The BUG showed up mainly on
    P4 / hyperthreading enabled machines, which led the investigations into the wrong direction
    in the first place.
    The real cause is in cond_resched_softirq(): cond_resched_softirq() is enabling softirqs
    without invoking the softirq daemon when softirqs are pending. This leads to the warning
    message in the NOHZ idle code:
    - t1 runs softirq disabled code on CPU#0
    - interrupt happens, softirq is raised, but deferred (softirqs disabled)
    - t1 calls cond_resched_softirq()
    - enables softirqs via _local_bh_enable()
    - calls schedule()
    - t2 runs
    - t1 is migrated to CPU#1
    - t2 is done and invokes idle()
    - NOHZ detects the pending softirq
    Fix: change _local_bh_enable() to local_bh_enable() so the softirq daemon is invoked.
    Thanks to Anant Nitya for debugging this with great patience!
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Cc: <stable@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
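    A hedged sketch of where that one-line change sits; the surrounding function is
    reconstructed from the description above and simplified, and only the switch from
    _local_bh_enable() to local_bh_enable() is the point being illustrated:

        #include <linux/sched.h>
        #include <linux/interrupt.h>

        static int cond_resched_softirq_sketch(void)
        {
            if (need_resched()) {
                /* was: _local_bh_enable() - re-enables softirqs but never wakes
                 * ksoftirqd, so pending softirqs could be left behind */
                local_bh_enable();    /* kicks the softirq machinery if work is pending */
                cond_resched();       /* simplified stand-in for the real reschedule step */
                local_bh_disable();
                return 1;
            }
            return 0;
        }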
* Add suspend-related notifications for CPU hotplug  (Rafael J. Wysocki, 2007-05-09)
    Since nonboot CPUs are now disabled after tasks and devices have been frozen and the CPU
    hotplug infrastructure is used for this purpose, we need special CPU hotplug notifications
    that will help the CPU-hotplug-aware subsystems distinguish normal CPU hotplug events from
    CPU hotplug events related to a system-wide suspend or resume operation in progress. This
    patch introduces such notifications and causes them to be used during suspend and resume
    transitions. It also changes all of the CPU-hotplug-aware subsystems to take these
    notifications into consideration (for now they are handled in the same way as the
    corresponding "normal" ones).
    [oleg@tv-sign.ru: cleanups]
    Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Cc: Gautham R Shenoy <ego@in.ibm.com>
    Cc: Pavel Machek <pavel@ucw.cz> Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Eliminate lock_cpu_hotplug in kernel/sched.c  (Gautham R Shenoy, 2007-05-09)
    Eliminate lock_cpu_hotplug from kernel/sched.c and use sched_hotcpu_mutex instead to
    postpone a hotplug event. In the migration_call hotcpu callback function, take
    sched_hotcpu_mutex while handling the event CPU_LOCK_ACQUIRE and release it while handling
    the CPU_LOCK_RELEASE event.
    [akpm@linux-foundation.org: fix deadlock]
    Signed-off-by: Gautham R Shenoy <ego@in.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* revert 'sched: redundant reschedule when set_user_nice() boosts a prio of a task from the "expired" array'  (Andrew Morton, 2007-05-08)
    Revert commit bd53f96ca54a21c07e7a0ae1886fa623d370b85f. Con says:
    This is no good, sorry. The one I saw originally was with the staircase deadline cpu
    scheduler in situ and was different. The reverted change turned

        #define TASK_PREEMPTS_CURR(p, rq) \
            ((p)->prio < (rq)->curr->prio)

    into

        #define TASK_PREEMPTS_CURR(p, rq) \
            (((p)->prio < (rq)->curr->prio) && ((p)->array == (rq)->active))

    This will fail to wake up a runqueue for a task that has been migrated to the expired array
    of a runqueue which is otherwise idle, which can happen with smp balancing.
    Cc: Dmitry Adamushko <dmitry.adamushko@gmail.com> Cc: Con Kolivas <kernel@kolivas.org>
    Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* sched: align rq to cacheline boundary  (Siddha, Suresh B, 2007-05-08)
    Align the per cpu runqueue to the cacheline boundary. This will minimize the number of
    cachelines touched during remote wakeup.
    Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Acked-by: Ingo Molnar <mingo@elte.hu>
    Cc: Ravikiran G Thirumalai <kiran@scalex86.org> Cc: Nick Piggin <nickpiggin@yahoo.com.au>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* sched: redundant reschedule when set_user_nice() boosts a prio of a task from the "expired" array  (Dmitry Adamushko, 2007-05-08)
    - Make TASK_PREEMPTS_CURR(task, rq) return "true" only if the task's prio is higher than the
      current's one and the task is in the "active" array. This ensures we don't make redundant
      resched_task() calls when the task is in the "expired" array (as may happen now in
      set_user_prio(), rt_mutex_setprio() and pull_task());
    - generalise conditions for a call to resched_task() in set_user_nice(), rt_mutex_setprio()
      and sched_setscheduler()
    Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com> Cc: Con Kolivas <kernel@kolivas.org>
    Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* sched: optimize siblings status check logic in wake_idle()  (Siddha, Suresh B, 2007-05-08)
    When a logical cpu 'x' already has more than one process running, then most likely the
    siblings of that cpu 'x' must be busy. Otherwise the idle siblings would have likely (in
    most of the scenarios) picked up the extra load, making the load on 'x' at most one.
    Use this logic to eliminate the siblings status check and minimize the cache misses
    encountered on a heavily loaded system.
    Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Cc: Nick Piggin <nickpiggin@yahoo.com.au>
    Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Speed up divides by cpu_power in scheduler  (Eric Dumazet, 2007-05-08)
    I noticed expensive divides done in try_to_wakeup() and find_busiest_group() on a two-socket
    dual-core Opteron machine (total of 4 cores), moderately loaded (15.000 context switches per
    second).
    oprofile numbers:

        CPU: AMD64 processors, speed 2600.05 MHz (estimated)
        Counted CPU_CLK_UNHALTED events (Cycles outside of halt state) with a unit mask of 0x00
        (No unit mask) count 50000
        samples  %        symbol name
        ...
        613914   1.0498   try_to_wake_up
           834   0.0013     :ffffffff80227ae1: div %rcx
         77513   0.1191     :ffffffff80227ae4: mov %rax,%r11
        608893   1.0413   find_busiest_group
          1841   0.0031     :ffffffff802260bf: div %rdi
        140109   0.2394     :ffffffff802260c2: test %sil,%sil

    Some of these divides can use the reciprocal divides we introduced some time ago (currently
    used in slab AFAIK).
    We can assume a load will fit in a 32bits number, because with a SCHED_LOAD_SCALE=128 value,
    it's still a theoretical limit of 33554432. When/if we reach this limit one day, probably
    cpus will have a fast hardware divide and we can zap the reciprocal divide trick.
    Ingo suggested to rename cpu_power to __cpu_power to make clear it should not be modified
    without changing its reciprocal value too.
    I did not convert the divide in cpu_avg_load_per_task(), because tracking nr_running changes
    may not be worth it? We could use a static table of 32 reciprocal values but it would add a
    conditional branch and table lookup.
    [akpm@linux-foundation.org: !SMP build fix]
    Signed-off-by: Eric Dumazet <dada1@cosmosbay.com> Acked-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
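    A hedged sketch of the reciprocal-divide idea referenced above, using the
    reciprocal_value()/reciprocal_divide() helpers that the slab allocator already used at the
    time; the struct and function names below are illustrative, not the scheduler's actual
    fields.

        #include <linux/reciprocal_div.h>

        struct group_power_sketch {
            unsigned int __cpu_power;    /* never written directly...        */
            u32 reciprocal_cpu_power;    /* ...without refreshing this value */
        };

        static void set_group_power_sketch(struct group_power_sketch *g, unsigned int power)
        {
            g->__cpu_power = power;
            g->reciprocal_cpu_power = reciprocal_value(power);
        }

        static u32 scale_load_sketch(u32 load, const struct group_power_sketch *g)
        {
            /* a multiply and a shift instead of a hardware divide */
            return reciprocal_divide(load, g->reciprocal_cpu_power);
        }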
* sched: dynticks idle load balancing  (Siddha, Suresh B, 2007-05-08)
    Fix the process of idle load balancing in the presence of dynticks. cpus for which ticks are
    stopped will sleep till the next event wakes them up. Potentially these sleeps can be for
    large durations, during which today there is no periodic idle load balancing being done.
    This patch nominates an owner among the idle cpus, which does the idle load balancing on
    behalf of the other idle cpus. And once all the cpus are completely idle, then we can stop
    this idle load balancing too. Checks added in the fast path are minimized. Whenever there
    are busy cpus in the system, there will be an owner (idle cpu) doing the system-wide idle
    load balancing.
    Open items:
    1. Intelligent owner selection (like an idle core in a busy package).
    2. Merge with rcu's nohz_cpu_mask?
    Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Acked-by: Ingo Molnar <mingo@elte.hu>
    Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Nick Piggin <nickpiggin@yahoo.com.au>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
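    A hedged sketch of the "nominated owner" idea described above; every name here is
    hypothetical rather than the patch's actual interface. One tick-stopped CPU volunteers to
    keep running the periodic balancer on behalf of all idle CPUs and gives the role up when it
    becomes busy again:

        #include <asm/atomic.h>

        static atomic_t nohz_balance_owner = ATOMIC_INIT(-1);   /* hypothetical */

        static void enter_nohz_idle_sketch(int cpu)
        {
            /* claim the idle-load-balance role if nobody currently holds it */
            atomic_cmpxchg(&nohz_balance_owner, -1, cpu);
        }

        static void exit_nohz_idle_sketch(int cpu)
        {
            /* drop the role when this CPU has work of its own again */
            atomic_cmpxchg(&nohz_balance_owner, cpu, -1);
        }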
* sched: fix idle load balancing in softirqd context  (Siddha, Suresh B, 2007-05-08)
    Periodic load balancing in recent kernels happens in the softirq. In certain -rt
    configurations, these softirqs are handled in softirqd context. And hence the check for idle
    processor was always returning busy (as nr_running > 1).
    This patch captures the idle information at the tick and passes this info to softirq context
    through an element 'idle_at_tick' in rq.
    [kernel@kolivas.org: Fix reverse idle at tick logic]
    Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Acked-by: Ingo Molnar <mingo@elte.hu>
    Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Nick Piggin <nickpiggin@yahoo.com.au>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* add touch_all_softlockup_watchdogs()  (Jeremy Fitzhardinge, 2007-05-08)
    Add touch_all_softlockup_watchdogs() to allow the softlockup watchdog timers on all cpus to
    be updated. This is used to prevent sysrq-t from generating a spurious watchdog message when
    generating lots of output.
    Softlockup watchdogs use sched_clock() as their timebase, which is inherently per-cpu (at
    least, when it is measuring unstolen time). Because of this, it isn't possible for one CPU
    to directly update the other CPUs' timers, but it is possible to tell the other CPUs to
    update themselves appropriately.
    Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Acked-by: Chris Lalancette <clalance@redhat.com>
    Signed-off-by: Prarit Bhargava <prarit@redhat.com> Cc: Rick Lindsley <ricklind@us.ibm.com>
    Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>