path: root/kernel/sched_fair.c
* sched: fix ideal_runtime calculations for reniced tasks  (Peter Zijlstra, 2007-09-05)

    fix ideal_runtime:

    - do not scale it using niced_granularity(): it is checked against
      sum_exec_delta, so it's wall-time, not fair-time.

    - move the whole check into __check_preempt_curr_fair() so that
      wakeup preemption can also benefit from the new logic.

    this also results in code size reduction:

       text    data     bss     dec     hex filename
      13391     228    1204   14823    39e7 sched.o.before
      13369     228    1204   14801    39d1 sched.o.after

    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

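For illustration, the resulting wall-time check could look like this minimal sketch (a plausible shape inferred from the changelog above, not the verbatim kernel code):

        /*
         * delta_exec is wall-time executed since we were last scheduled
         * in, so it must NOT be rescaled with niced_granularity():
         */
        delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
        if (delta_exec > ideal_runtime)
                granularity = 0;  /* ran past its share: preempt at once */

        if (curr->fair_key - se->fair_key >
                        niced_granularity(curr, granularity))
                resched_task(rq_of(cfs_rq)->curr);
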
* sched: improve prev_sum_exec_runtime setting  (Peter Zijlstra, 2007-09-05)

    Second preparatory patch for fix-ideal-runtime:

    Mark prev_sum_exec_runtime at the beginning of our run, the same spot
    that adds our wait period to wait_runtime. This seems a more natural
    location to do this, and it also reduces the code a bit:

       text    data     bss     dec     hex filename
      13397     228    1204   14829    39ed sched.o.before
      13391     228    1204   14823    39e7 sched.o.after

    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

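In code terms the change is a single stamp at the spot the changelog names, i.e. where the entity's wait ends and its run begins (placement illustrative):

        /* mark the start of our run, next to the wait_runtime update: */
        se->prev_sum_exec_runtime = se->sum_exec_runtime;
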
* sched: simplify __check_preempt_curr_fair()  (Peter Zijlstra, 2007-09-05)

    Preparatory patch for fix-ideal-runtime:

    simplify __check_preempt_curr_fair(): get rid of the integer return.

       text    data     bss     dec     hex filename
      13404     228    1204   14836    39f4 sched.o.before
      13393     228    1204   14825    39e9 sched.o.after

    functionality is unchanged.

    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: debug: fix cfs_rq->wait_runtime accounting  (Ingo Molnar, 2007-09-05)

    the cfs_rq->wait_runtime debug/statistics counter was not maintained
    properly - fix this.

    this also removes some code:

       text    data     bss     dec     hex filename
      13420     228    1204   14852    3a04 sched.o.before
      13404     228    1204   14836    39f4 sched.o.after

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>

* sched: fix niced_granularity() shift  (Ingo Molnar, 2007-09-05)

    fix the shift in niced_granularity(). The bug resulted in
    under-scheduling for CPU-bound negative nice level tasks (and this in
    turn caused higher than necessary latencies in nice-0 tasks).

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: clean up task_new_fair()  (Ingo Molnar, 2007-08-28)

    cleanup: we have the 'se' and 'curr' entity-pointers already, no need
    to use p->se and current->se.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Mike Galbraith <efault@gmx.de>

* sched: small schedstat fix  (Ingo Molnar, 2007-08-28)

    small schedstat fix: the cfs_rq->wait_runtime 'sum of all runtimes'
    statistics counter missed newly forked tasks and thus had a constant
    negative skew. Fix this.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Mike Galbraith <efault@gmx.de>

* sched: fix wait_start_fair condition in update_stats_wait_end()  (Ingo Molnar, 2007-08-28)

    Peter Zijlstra noticed the following bug in SCHED_FEAT_SKIP_INITIAL
    (which is disabled by default at the moment): it relies on
    se.wait_start_fair being 0 while update_stats_wait_end() did not
    recognize a 0 value, so instead of 'skipping' the initial interval
    we gave the new child a maximum boost of +runtime-limit ...

    (No impact on the default kernel, but nice to fix for completeness.)

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Mike Galbraith <efault@gmx.de>

* sched: call update_curr() in task_tick_fair()  (Ting Yang, 2007-08-28)

    update the fair-clock before using it for the key value.

    [ mingo@elte.hu: small cleanups. ]

    Signed-off-by: Ting Yang <tingy@cs.umass.edu>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Mike Galbraith <efault@gmx.de>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>

* sched: make the scheduler converge to the ideal latency  (Ingo Molnar, 2007-08-28)

    de-HZ-ification of the granularity defaults unearthed a pre-existing
    property of CFS: while it correctly converges to the granularity
    goal, it does not prevent run-time fluctuations in the range of
    [-gran ... 0 ... +gran].

    With the increase of the granularity due to the removal of HZ
    dependencies, this becomes visible in chew-max output (with 5 tasks
    running):

     out: 28 . 27. 32 | flu: 0 . 0 | ran: 9 . 13 | per: 37 . 40
     out: 27 . 27. 32 | flu: 0 . 0 | ran: 17 . 13 | per: 44 . 40
     out: 27 . 27. 32 | flu: 0 . 0 | ran: 9 . 13 | per: 36 . 40
     out: 29 . 27. 32 | flu: 2 . 0 | ran: 17 . 13 | per: 46 . 40
     out: 28 . 27. 32 | flu: 0 . 0 | ran: 9 . 13 | per: 37 . 40
     out: 29 . 27. 32 | flu: 0 . 0 | ran: 18 . 13 | per: 47 . 40
     out: 28 . 27. 32 | flu: 0 . 0 | ran: 9 . 13 | per: 37 . 40

    average slice is the ideal 13 msecs and the period is picture-perfect
    40 msecs. But the 'ran' field fluctuates around 13.33 msecs and
    there's no mechanism in CFS to keep that from happening: it's a
    perfectly valid solution that CFS finds.

    to fix this we add a granularity/preemption rule that knows about
    the "target latency", which makes tasks that run longer than the
    ideal latency run a bit less. The simplest approach is to simply
    decrease the preemption granularity when a task overruns its ideal
    latency. For this we have to track how much the task executed since
    its last preemption.

    ( this adds a new field to task_struct, but we can eliminate that
      overhead in 2.6.24 by putting all the scheduler timestamps into an
      anonymous union. )

    with this change in place, chew-max output is fluctuation-less all
    around:

     out: 28 . 27. 39 | flu: 0 . 2 | ran: 13 . 13 | per: 41 . 40
     out: 28 . 27. 39 | flu: 0 . 2 | ran: 13 . 13 | per: 41 . 40
     out: 28 . 27. 39 | flu: 0 . 2 | ran: 13 . 13 | per: 41 . 40
     out: 28 . 27. 39 | flu: 0 . 2 | ran: 13 . 13 | per: 41 . 40
     out: 28 . 27. 39 | flu: 0 . 1 | ran: 13 . 13 | per: 41 . 40
     out: 28 . 27. 39 | flu: 0 . 1 | ran: 13 . 13 | per: 41 . 40

    this patch has no impact on any fastpath or on any globally
    observable scheduling property. (unless you have sharp enough eyes
    to see millisecond-level ruckles in glxgears smoothness :-)

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Mike Galbraith <efault@gmx.de>

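A minimal sketch of the tracking this introduces (the prev_sum_exec_runtime name is taken from the later entries above; the changelog notes the new field lands in task_struct's scheduler state, and the placement here is assumed):

        struct sched_entity {
                ...
                u64     sum_exec_runtime;      /* total wall-time executed */
                u64     prev_sum_exec_runtime; /* snapshot taken when the
                                                  task was last scheduled in */
        };

        /* at tick/preempt-check time: wall-time run since last reschedule */
        delta_exec = se->sum_exec_runtime - se->prev_sum_exec_runtime;
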
* sched: fix sleeper bonus limit  (Mike Galbraith, 2007-08-28)

    There is an Amarok song switch time increase (regression) under
    hefty load.

    What is happening is that sleeper_bonus is never consumed, and only
    rarely goes below runtime_limit, so for the most part, Amarok isn't
    getting any bonus at all. We're keeping sleeper_bonus right at
    runtime_limit (sched_latency == sched_runtime_limit == 40ms)
    forever, ie we don't consume if we're lower than that, and don't add
    if we're above it. One Amarok thread waking (or anybody else) will
    push us past the threshold, so the next thread waking gets nada, but
    will reap pain from the previous thread waking until we drop back to
    runtime_limit. It looks to me like under load, some random task gets
    a bonus, and everybody else pays, whether deserving or not.

    This diff fixed the regression for me at any load rate.

    Signed-off-by: Mike Galbraith <efault@gmx.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>

* sched: cleanup, sched_granularity -> sched_min_granularity  (Ingo Molnar, 2007-08-25)

    due to adaptive granularity scheduling the role of sched_granularity
    has changed to "minimum granularity", so rename the variable (and
    the tunable) accordingly.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>

* sched: adaptive scheduler granularity  (Peter Zijlstra, 2007-08-25)

    Instead of specifying the preemption granularity, specify the wanted
    latency. By fixing the granularity to a constant, the wakeup latency
    is a function of the number of running tasks on the rq.

    Invert this relation.

    sysctl_sched_granularity becomes a minimum for the dynamic
    granularity computed from the new sysctl_sched_latency.

    Then use this latency to do more intelligent granularity decisions:
    if there are fewer tasks running then we can schedule coarser. This
    helps performance while still always keeping the latency target.

    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

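The inverted relation can be sketched as follows (a plausible shape based on the description above, not the exact patch):

        /*
         * Dynamic granularity: divide the latency target among the
         * runners, clamped from below by the fixed minimum:
         */
        static u64 sched_granularity(const struct cfs_rq *cfs_rq)
        {
                u64 gran = sysctl_sched_latency;
                unsigned int nr = cfs_rq->nr_running;

                if (nr > 1)
                        gran = max_t(u64, gran / nr,
                                     sysctl_sched_granularity);

                return gran;
        }

With a single runner the full latency target is used as the granularity (coarse scheduling); with N runners each gets roughly latency/N, never less than the minimum.
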
* sched: fix startup penalty calculation  (Ingo Molnar, 2007-08-24)

    fix task startup penalty miscalculation: sysctl_sched_granularity is
    unsigned int and wait_runtime is long so we first have to convert it
    to long before turning it negative ...

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

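The pitfall in miniature (illustrative values, variable names mirroring the changelog):

        unsigned int gran = 10000000;      /* say, 10 msec in nanoseconds */
        long wait_runtime;

        wait_runtime = -(gran / 2);        /* bug: the negation happens in
                                              unsigned arithmetic, so a
                                              64-bit long receives a huge
                                              positive value */
        wait_runtime = -((long)gran / 2);  /* fix: convert to long first,
                                              then negate */
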
* sched: simplify bonus calculation #2  (Peter Zijlstra, 2007-08-24)

    current code:

      delta = calc_delta_mine(delta_exec, curr->load.weight, lw);
      delta = min((u64)delta, cfs_rq->sleeper_bonus);

    Notice that this calc_delta_mine() line is exactly delta_mine, which
    gives:

      delta = min((u64)delta_mine, cfs_rq->sleeper_bonus);

    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: simplify bonus calculation #1  (Peter Zijlstra, 2007-08-24)

    current code:

      delta = min(cfs_rq->sleeper_bonus, (u64)delta_exec);
      delta = calc_delta_mine(delta, curr->load.weight, lw);
      delta = min((u64)delta, cfs_rq->sleeper_bonus);

    drop the first min(), because we clip against sleeper_bonus in the
    3rd line again. That gives:

      delta = calc_delta_mine(delta_exec, curr->load.weight, lw);
      delta = min((u64)delta, cfs_rq->sleeper_bonus);

    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: tidy up and simplify the bonus balance  (Ingo Molnar, 2007-08-24)

    make the bonus balance more consistent: do not hand out a bonus if
    there's too much in flight already, and only deduct as much from a
    runner as it has the capacity for. This makes the bonus engine a
    zero-sum game (as intended).

    this also simplifies the code:

       text    data     bss     dec     hex filename
      34770    2998      24   37792    93a0 sched.o.before
      34749    2998      24   37771    938b sched.o.after

    and it also avoids overscheduling in sleep-happy workloads like
    hackbench.c.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

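The zero-sum rule amounts to two clamps, roughly (a sketch from the description above; the threshold name is an assumption, cf. the runtime_limit discussion in the "fix sleeper bonus limit" entry):

        /* grant side: no new bonus while too much is already in flight */
        if (cfs_rq->sleeper_bonus < sysctl_sched_runtime_limit)
                cfs_rq->sleeper_bonus += bonus;

        /* charge side: a runner can only repay what is outstanding */
        delta = min(cfs_rq->sleeper_bonus, (u64)delta_exec);
        cfs_rq->sleeper_bonus -= delta;
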
* sched: remove HZ dependency from the granularity default  (Ingo Molnar, 2007-08-24)

    remove HZ dependency from the granularity default. Use 10 msec for
    the base granularity, 1 msec for wakeup granularity and 25 msec for
    batch wakeup granularity. (These defaults are close to the values
    that the previous HZ=250 default produced, HZ=250 being the most
    common setting.)

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: CONFIG_SCHED_GROUP_FAIR=y fixlet  (Bruce Ashfield, 2007-08-24)

    when I built with CONFIG_FAIR_GROUP_SCHED=y, I needed the following
    change to make things right.

    [ From: mingo@elte.hu ]
    this config option is not upstream-configurable right now but let's
    fix this for completeness.

    Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: fix sleeper bonus  (Ingo Molnar, 2007-08-12)

    Peter Zijlstra noticed that the sleeper bonus deduction code was not
    properly rate-limited: a task that scheduled more frequently would
    get a disproportionately large deduction. So limit the deduction to
    delta_exec.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

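The rate limit is exactly the clamp that appears as "current code" in the "simplify bonus calculation #1" entry above:

        /* deduct no more than the wall-time actually executed this round */
        delta = min(cfs_rq->sleeper_bonus, (u64)delta_exec);
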
* sched: fix typo in the FAIR_GROUP_SCHED branch  (Ingo Molnar, 2007-08-10)

    while there's no in-tree way to turn on group scheduling at the
    moment, fix a typo in it nevertheless.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: refine negative nice level granularity  (Ingo Molnar, 2007-08-09)

    refine the granularity of negative nice level tasks: let them
    reschedule more often to offset the effect of them consuming their
    wait_runtime proportionately slower. (This makes nice-0 task
    scheduling smoother in the presence of negatively reniced tasks.)

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

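One way to picture the refinement (a hedged sketch of the scaling idea, not the exact niced_granularity() implementation):

        /*
         * Scale the wall-time granularity inversely with task weight:
         * heavier (negative-nice) tasks reschedule proportionately
         * sooner, offsetting their slower wait_runtime consumption.
         */
        static unsigned long
        niced_granularity(struct sched_entity *curr, unsigned long granularity)
        {
                if (curr->load.weight <= NICE_0_LOAD)
                        return granularity;     /* nice >= 0: unchanged */

                return (u64)granularity * NICE_0_LOAD / curr->load.weight;
        }
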
* sched: fix update_stats_enqueue() reniced codepath  (Ingo Molnar, 2007-08-09)

    the key has to be rescaled to /weight even if it has a positive
    value.

    (this change only affects the scheduling of reniced tasks)

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: clean up set_curr_task_fair()  (Ingo Molnar, 2007-08-09)

    clean up set_curr_task_fair().

    ( identity transformation that causes no change in functionality. )

       text    data     bss     dec     hex filename
      39170    3750      36   42956    a7cc sched.o.before
      39170    3750      36   42956    a7cc sched.o.after

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove __update_rq_clock() call from entity_tick()  (Ingo Molnar, 2007-08-09)

    remove __update_rq_clock() call from entity_tick(). no change in
    functionality because scheduler_tick() already calls
    __update_rq_clock().

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' local variables  (Ingo Molnar, 2007-08-09)

    final step: remove all (now superfluous) 'u64 now' variables.
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from ->task_new()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from ->task_new().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from ->put_prev_task()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from ->put_prev_task().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from ->pick_next_task()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from ->pick_next_task().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from ->dequeue_task()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from ->dequeue_task().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from ->enqueue_task()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from ->enqueue_task().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from put_prev_entity()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from put_prev_entity().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from pick_next_entity()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from pick_next_entity().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from set_next_entity()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from set_next_entity().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from dequeue_entity()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from dequeue_entity().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from enqueue_entity()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from enqueue_entity().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from enqueue_sleeper()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from enqueue_sleeper().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from __enqueue_sleeper()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from __enqueue_sleeper().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from update_stats_curr_end()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from update_stats_curr_end().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from update_stats_dequeue()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from update_stats_dequeue().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from update_stats_curr_start()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from update_stats_curr_start().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from update_stats_wait_end()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from update_stats_wait_end().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from __update_stats_wait_end()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from __update_stats_wait_end().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from update_stats_enqueue()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from update_stats_enqueue().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from update_stats_wait_start()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from update_stats_wait_start().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from update_curr()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from update_curr().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove the 'u64 now' parameter from print_cfs_rq()  (Ingo Molnar, 2007-08-09)

    remove the 'u64 now' parameter from print_cfs_rq().
    ( identity transformation that causes no change in functionality. )
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove 'now' use from assignments  (Ingo Molnar, 2007-08-09)

    change all 'now' timestamp uses in assignments to rq->clock.

    ( this is an identity transformation that causes no functionality
      change: every such new rq->clock use is necessarily preceded by an
      update_rq_clock() call. )

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: eliminate __rq_clock() use  (Ingo Molnar, 2007-08-09)

    eliminate __rq_clock() use by changing it to:

       __update_rq_clock(rq)
       now = rq->clock;

    identity transformation - no change in behavior.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: eliminate rq_clock() use  (Ingo Molnar, 2007-08-09)

    eliminate rq_clock() use by changing it to:

       update_rq_clock(rq)
       now = rq->clock;

    identity transformation - no change in behavior.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>