path: root/kernel/sched
author	Linus Torvalds <torvalds@linux-foundation.org>	2013-05-05 16:23:27 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2013-05-05 16:23:27 -0400
commit	534c97b0950b1967bca1c753aeaed32f5db40264 (patch)
tree	9421d26e4f6d479d1bc32b036a731b065daab0fa /kernel/sched
parent	64049d1973c1735f543eb7a55653e291e108b0cb (diff)
parent	265f22a975c1e4cc3a4d1f94a3ec53ffbb6f5b9f (diff)
Merge branch 'timers-nohz-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull 'full dynticks' support from Ingo Molnar:
 "This tree from Frederic Weisbecker adds a new, (exciting! :-) core
  kernel feature to the timer and scheduler subsystems: 'full
  dynticks', or CONFIG_NO_HZ_FULL=y.

  This feature extends the nohz variable-size timer tick feature from
  idle to busy CPUs (running at most one task) as well, potentially
  reducing the number of timer interrupts significantly.

  This feature got motivated by real-time folks and the -rt tree, but
  the general utility and motivation of full-dynticks runs wider than
  that:

   - HPC workloads get faster: CPUs running a single task should be
     able to utilize a maximum amount of CPU power.  A periodic timer
     tick at HZ=1000 can cause a constant overhead of up to 1.0%.
     This feature removes that overhead - and speeds up the system by
     0.5%-1.0% on typical distro configs even on modern systems.

   - Real-time workload latency reduction: CPUs running critical
     tasks should experience as little jitter as possible.  The last
     remaining source of kernel-related jitter was the periodic timer
     tick.

   - A single task executing on a CPU is a pretty common situation,
     especially with an increasing number of cores/CPUs, so this
     feature helps desktop and mobile workloads as well.

  The cost of the feature is mainly related to increased timer
  reprogramming overhead when a CPU switches its tick period, and thus
  slightly longer to-idle and from-idle latency.

  Configuration-wise a third mode of operation is added to the
  existing two NOHZ kconfig modes:

   - CONFIG_HZ_PERIODIC: [formerly !CONFIG_NO_HZ], now explicitly
     named as a config option.  This is the traditional Linux periodic
     tick design: there's a HZ tick going on all the time, regardless
     of whether a CPU is idle or not.

   - CONFIG_NO_HZ_IDLE: [formerly CONFIG_NO_HZ=y], this turns off the
     periodic tick when a CPU enters idle mode.

   - CONFIG_NO_HZ_FULL: this new mode, in addition to turning off the
     tick when a CPU is idle, also slows the tick down to 1 Hz (one
     timer interrupt per second) when only a single task is running on
     a CPU.

  The .config behavior is compatible: existing !CONFIG_NO_HZ and
  CONFIG_NO_HZ=y settings get translated to the new values, without
  the user having to configure anything.  CONFIG_NO_HZ_FULL is turned
  off by default.

  This feature is based on a lot of infrastructure work that has been
  steadily going upstream in the last 2-3 cycles: related RCU support
  and non-periodic cputime support in particular is upstream already.

  This tree adds the final pieces and activates the feature.  The pull
  request is marked RFC because:

   - it's marked 64-bit only at the moment - the 32-bit support patch
     is small but did not get ready in time.

   - it has a number of fresh commits that came in after the merge
     window.  The overwhelming majority of commits are from before the
     merge window, but still some aspects of the tree are fresh and so
     I marked it RFC.

   - it's a pretty wide-reaching feature with lots of effects - and
     while the components have been in testing for some time, the full
     combination is still not very widely used.  That it's default-off
     should reduce its regression abilities and obviously there are no
     known regressions with CONFIG_NO_HZ_FULL=y enabled either.

   - the feature is not completely idempotent: there is no 100%
     equivalent replacement for a periodic scheduler/timer tick.  In
     particular there's ongoing work to map out and reduce its effects
     on scheduler load-balancing and statistics.  This should not
     impact correctness though, there are no known regressions related
     to this feature at this point.

   - it's a pretty ambitious feature that with time will likely be
     enabled by most Linux distros, and we'd like you to make input on
     its design/implementation, if you dislike some aspect we missed.
     Without flaming us to crisp! :-)

  Future plans:

   - there's ongoing work to reduce 1Hz to 0Hz, to essentially shut
     off the periodic tick altogether when there's a single busy task
     on a CPU.  We'd first like 1 Hz to be exposed more widely before
     we go for the 0 Hz target though.

   - once we reach 0 Hz we can remove the periodic tick assumption
     from nr_running>=2 as well, by essentially interrupting busy
     tasks only as frequently as the sched_latency constraints require
     us to do - once every 4-40 msecs, depending on nr_running.

  I am personally leaning towards biting the bullet and doing this in
  v3.10, like the -rt tree this effort has been going on for too long
  - but the final word is up to you as usual.

  More technical details can be found in
  Documentation/timers/NO_HZ.txt"

* 'timers-nohz-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (39 commits)
  sched: Keep at least 1 tick per second for active dynticks tasks
  rcu: Fix full dynticks' dependency on wide RCU nocb mode
  nohz: Protect smp_processor_id() in tick_nohz_task_switch()
  nohz_full: Add documentation.
  cputime_nsecs: use math64.h for nsec resolution conversion helpers
  nohz: Select VIRT_CPU_ACCOUNTING_GEN from full dynticks config
  nohz: Reduce overhead under high-freq idling patterns
  nohz: Remove full dynticks' superfluous dependency on RCU tree
  nohz: Fix unavailable tick_stop tracepoint in dynticks idle
  nohz: Add basic tracing
  nohz: Select wide RCU nocb for full dynticks
  nohz: Disable the tick when irq resume in full dynticks CPU
  nohz: Re-evaluate the tick for the new task after a context switch
  nohz: Prepare to stop the tick on irq exit
  nohz: Implement full dynticks kick
  nohz: Re-evaluate the tick from the scheduler IPI
  sched: New helper to prevent from stopping the tick in full dynticks
  sched: Kick full dynticks CPU that have more than one task enqueued.
  perf: New helper to prevent full dynticks CPUs from stopping tick
  perf: Kick full dynticks CPU if events rotation is needed
  ...
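Editorial note (illustration only, not part of the pull request or of the diff below): the "at most 1 Hz" behavior described above is enforced by the new scheduler_tick_max_deferment() helper in the kernel/sched/core.c hunk further down. With a single runnable task, the tick may be deferred by at most one second (HZ jiffies) past the last scheduler tick. The following is a minimal userspace C sketch of that same arithmetic, assuming HZ=1000 and ignoring jiffies wraparound:

    /*
     * Illustrative sketch only (not kernel code): the 1 Hz cap computed by
     * scheduler_tick_max_deferment(), assuming HZ=1000 and no jiffies
     * wraparound handling.
     */
    #include <stdio.h>

    #define HZ            1000               /* jiffies per second */
    #define NSEC_PER_USEC 1000ULL

    static unsigned long long max_deferment_ns(unsigned long last_sched_tick,
                                               unsigned long now)
    {
            unsigned long next = last_sched_tick + HZ;  /* next forced tick */

            if (next <= now)
                    return 0;                /* a tick is already overdue */

            /* jiffies -> usecs -> nsecs, like jiffies_to_usecs() * NSEC_PER_USEC */
            return (unsigned long long)(next - now) * (1000000 / HZ) * NSEC_PER_USEC;
    }

    int main(void)
    {
            /* last tick 250 jiffies (250 ms) ago: up to ~750 ms more deferment */
            printf("%llu ns\n", max_deferment_ns(10000, 10250));
            return 0;
    }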
Diffstat (limited to 'kernel/sched')
-rw-r--r--	kernel/sched/core.c	92
-rw-r--r--	kernel/sched/fair.c	10
-rw-r--r--	kernel/sched/idle_task.c	1
-rw-r--r--	kernel/sched/sched.h	25
4 files changed, 110 insertions, 18 deletions
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5662f58f0b69..58453b8272fd 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -544,7 +544,7 @@ void resched_cpu(int cpu)
 	raw_spin_unlock_irqrestore(&rq->lock, flags);
 }
 
-#ifdef CONFIG_NO_HZ
+#ifdef CONFIG_NO_HZ_COMMON
 /*
  * In the semi idle case, use the nearest busy cpu for migrating timers
  * from an idle cpu. This is good for power-savings.
@@ -582,7 +582,7 @@ unlock:
  * account when the CPU goes back to idle and evaluates the timer
  * wheel for the next timer event.
  */
-void wake_up_idle_cpu(int cpu)
+static void wake_up_idle_cpu(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 
@@ -612,20 +612,56 @@ void wake_up_idle_cpu(int cpu)
 	smp_send_reschedule(cpu);
 }
 
+static bool wake_up_full_nohz_cpu(int cpu)
+{
+	if (tick_nohz_full_cpu(cpu)) {
+		if (cpu != smp_processor_id() ||
+		    tick_nohz_tick_stopped())
+			smp_send_reschedule(cpu);
+		return true;
+	}
+
+	return false;
+}
+
+void wake_up_nohz_cpu(int cpu)
+{
+	if (!wake_up_full_nohz_cpu(cpu))
+		wake_up_idle_cpu(cpu);
+}
+
 static inline bool got_nohz_idle_kick(void)
 {
 	int cpu = smp_processor_id();
 	return idle_cpu(cpu) && test_bit(NOHZ_BALANCE_KICK, nohz_flags(cpu));
 }
 
-#else /* CONFIG_NO_HZ */
+#else /* CONFIG_NO_HZ_COMMON */
 
 static inline bool got_nohz_idle_kick(void)
 {
 	return false;
 }
 
-#endif /* CONFIG_NO_HZ */
+#endif /* CONFIG_NO_HZ_COMMON */
+
+#ifdef CONFIG_NO_HZ_FULL
+bool sched_can_stop_tick(void)
+{
+	struct rq *rq;
+
+	rq = this_rq();
+
+	/* Make sure rq->nr_running update is visible after the IPI */
+	smp_rmb();
+
+	/* More than one running task need preemption */
+	if (rq->nr_running > 1)
+		return false;
+
+	return true;
+}
+#endif /* CONFIG_NO_HZ_FULL */
 
 void sched_avg_update(struct rq *rq)
 {
@@ -1357,7 +1393,8 @@ static void sched_ttwu_pending(void)
 
 void scheduler_ipi(void)
 {
-	if (llist_empty(&this_rq()->wake_list) && !got_nohz_idle_kick())
+	if (llist_empty(&this_rq()->wake_list) && !got_nohz_idle_kick()
+			&& !tick_nohz_full_cpu(smp_processor_id()))
 		return;
 
 	/*
@@ -1374,6 +1411,7 @@ void scheduler_ipi(void)
 	 * somewhat pessimize the simple resched case.
 	 */
 	irq_enter();
+	tick_nohz_full_check();
 	sched_ttwu_pending();
 
 	/*
@@ -1855,6 +1893,8 @@ static void finish_task_switch(struct rq *rq, struct task_struct *prev)
 		kprobe_flush_task(prev);
 		put_task_struct(prev);
 	}
+
+	tick_nohz_task_switch(current);
 }
 
 #ifdef CONFIG_SMP
@@ -2118,7 +2158,7 @@ calc_load(unsigned long load, unsigned long exp, unsigned long active)
 	return load >> FSHIFT;
 }
 
-#ifdef CONFIG_NO_HZ
+#ifdef CONFIG_NO_HZ_COMMON
 /*
  * Handle NO_HZ for the global load-average.
  *
@@ -2344,12 +2384,12 @@ static void calc_global_nohz(void)
 	smp_wmb();
 	calc_load_idx++;
 }
-#else /* !CONFIG_NO_HZ */
+#else /* !CONFIG_NO_HZ_COMMON */
 
 static inline long calc_load_fold_idle(void) { return 0; }
 static inline void calc_global_nohz(void) { }
 
-#endif /* CONFIG_NO_HZ */
+#endif /* CONFIG_NO_HZ_COMMON */
 
 /*
  * calc_load - update the avenrun load estimates 10 ticks after the
@@ -2509,7 +2549,7 @@ static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
 	sched_avg_update(this_rq);
 }
 
-#ifdef CONFIG_NO_HZ
+#ifdef CONFIG_NO_HZ_COMMON
 /*
  * There is no sane way to deal with nohz on smp when using jiffies because the
  * cpu doing the jiffies update might drift wrt the cpu doing the jiffy reading
@@ -2569,7 +2609,7 @@ void update_cpu_load_nohz(void)
 	}
 	raw_spin_unlock(&this_rq->lock);
 }
-#endif /* CONFIG_NO_HZ */
+#endif /* CONFIG_NO_HZ_COMMON */
 
 /*
  * Called from scheduler_tick()
@@ -2696,7 +2736,34 @@ void scheduler_tick(void)
 	rq->idle_balance = idle_cpu(cpu);
 	trigger_load_balance(rq, cpu);
 #endif
+	rq_last_tick_reset(rq);
+}
+
+#ifdef CONFIG_NO_HZ_FULL
+/**
+ * scheduler_tick_max_deferment
+ *
+ * Keep at least one tick per second when a single
+ * active task is running because the scheduler doesn't
+ * yet completely support full dynticks environment.
+ *
+ * This makes sure that uptime, CFS vruntime, load
+ * balancing, etc... continue to move forward, even
+ * with a very low granularity.
+ */
+u64 scheduler_tick_max_deferment(void)
+{
+	struct rq *rq = this_rq();
+	unsigned long next, now = ACCESS_ONCE(jiffies);
+
+	next = rq->last_sched_tick + HZ;
+
+	if (time_before_eq(next, now))
+		return 0;
+
+	return jiffies_to_usecs(next - now) * NSEC_PER_USEC;
 }
+#endif
 
 notrace unsigned long get_parent_ip(unsigned long addr)
 {
@@ -6951,9 +7018,12 @@ void __init sched_init(void)
 		INIT_LIST_HEAD(&rq->cfs_tasks);
 
 		rq_attach_root(rq, &def_root_domain);
-#ifdef CONFIG_NO_HZ
+#ifdef CONFIG_NO_HZ_COMMON
 		rq->nohz_flags = 0;
 #endif
+#ifdef CONFIG_NO_HZ_FULL
+		rq->last_sched_tick = 0;
+#endif
 #endif
 		init_rq_hrtick(rq);
 		atomic_set(&rq->nr_iowait, 0);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8bf7081b1ec5..c61a614465c8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5355,7 +5355,7 @@ out_unlock:
 	return 0;
 }
 
-#ifdef CONFIG_NO_HZ
+#ifdef CONFIG_NO_HZ_COMMON
 /*
  * idle load balancing details
  * - When one of the busy CPUs notice that there may be an idle rebalancing
@@ -5572,9 +5572,9 @@ out:
 	rq->next_balance = next_balance;
 }
 
-#ifdef CONFIG_NO_HZ
+#ifdef CONFIG_NO_HZ_COMMON
 /*
- * In CONFIG_NO_HZ case, the idle balance kickee will do the
+ * In CONFIG_NO_HZ_COMMON case, the idle balance kickee will do the
  * rebalancing for all the cpus for whom scheduler ticks are stopped.
  */
 static void nohz_idle_balance(int this_cpu, enum cpu_idle_type idle)
@@ -5717,7 +5717,7 @@ void trigger_load_balance(struct rq *rq, int cpu)
 	if (time_after_eq(jiffies, rq->next_balance) &&
 	    likely(!on_null_domain(cpu)))
 		raise_softirq(SCHED_SOFTIRQ);
-#ifdef CONFIG_NO_HZ
+#ifdef CONFIG_NO_HZ_COMMON
 	if (nohz_kick_needed(rq, cpu) && likely(!on_null_domain(cpu)))
 		nohz_balancer_kick(cpu);
 #endif
@@ -6187,7 +6187,7 @@ __init void init_sched_fair_class(void)
 #ifdef CONFIG_SMP
 	open_softirq(SCHED_SOFTIRQ, run_rebalance_domains);
 
-#ifdef CONFIG_NO_HZ
+#ifdef CONFIG_NO_HZ_COMMON
 	nohz.next_balance = jiffies;
 	zalloc_cpumask_var(&nohz.idle_cpus_mask, GFP_NOWAIT);
 	cpu_notifier(sched_ilb_notifier, 0);
diff --git a/kernel/sched/idle_task.c b/kernel/sched/idle_task.c
index b8ce77328341..d8da01008d39 100644
--- a/kernel/sched/idle_task.c
+++ b/kernel/sched/idle_task.c
@@ -17,6 +17,7 @@ select_task_rq_idle(struct task_struct *p, int sd_flag, int flags)
 static void pre_schedule_idle(struct rq *rq, struct task_struct *prev)
 {
 	idle_exit_fair(rq);
+	rq_last_tick_reset(rq);
 }
 
 static void post_schedule_idle(struct rq *rq)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4c225c4c7111..ce39224d6155 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -5,6 +5,7 @@
 #include <linux/mutex.h>
 #include <linux/spinlock.h>
 #include <linux/stop_machine.h>
+#include <linux/tick.h>
 
 #include "cpupri.h"
 #include "cpuacct.h"
@@ -405,10 +406,13 @@ struct rq {
 	#define CPU_LOAD_IDX_MAX 5
 	unsigned long cpu_load[CPU_LOAD_IDX_MAX];
 	unsigned long last_load_update_tick;
-#ifdef CONFIG_NO_HZ
+#ifdef CONFIG_NO_HZ_COMMON
 	u64 nohz_stamp;
 	unsigned long nohz_flags;
 #endif
+#ifdef CONFIG_NO_HZ_FULL
+	unsigned long last_sched_tick;
+#endif
 	int skip_clock_update;
 
 	/* capture load from *all* tasks on this cpu: */
@@ -1072,6 +1076,16 @@ static inline u64 steal_ticks(u64 steal)
 static inline void inc_nr_running(struct rq *rq)
 {
 	rq->nr_running++;
+
+#ifdef CONFIG_NO_HZ_FULL
+	if (rq->nr_running == 2) {
+		if (tick_nohz_full_cpu(rq->cpu)) {
+			/* Order rq->nr_running write against the IPI */
+			smp_wmb();
+			smp_send_reschedule(rq->cpu);
+		}
+	}
+#endif
 }
 
 static inline void dec_nr_running(struct rq *rq)
@@ -1079,6 +1093,13 @@ static inline void dec_nr_running(struct rq *rq)
 	rq->nr_running--;
 }
 
+static inline void rq_last_tick_reset(struct rq *rq)
+{
+#ifdef CONFIG_NO_HZ_FULL
+	rq->last_sched_tick = jiffies;
+#endif
+}
+
 extern void update_rq_clock(struct rq *rq);
 
 extern void activate_task(struct rq *rq, struct task_struct *p, int flags);
@@ -1299,7 +1320,7 @@ extern void init_rt_rq(struct rt_rq *rt_rq, struct rq *rq);
 
 extern void account_cfs_bandwidth_used(int enabled, int was_enabled);
 
-#ifdef CONFIG_NO_HZ
+#ifdef CONFIG_NO_HZ_COMMON
 enum rq_nohz_flag_bits {
 	NOHZ_TICK_STOPPED,
 	NOHZ_BALANCE_KICK,