author		Linus Torvalds <torvalds@linux-foundation.org>	2013-05-05 16:23:27 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2013-05-05 16:23:27 -0400
commit		534c97b0950b1967bca1c753aeaed32f5db40264 (patch)
tree		9421d26e4f6d479d1bc32b036a731b065daab0fa /include/linux
parent		64049d1973c1735f543eb7a55653e291e108b0cb (diff)
parent		265f22a975c1e4cc3a4d1f94a3ec53ffbb6f5b9f (diff)
Merge branch 'timers-nohz-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull 'full dynticks' support from Ingo Molnar:

 "This tree from Frederic Weisbecker adds a new, (exciting! :-) core kernel
  feature to the timer and scheduler subsystems: 'full dynticks', or
  CONFIG_NO_HZ_FULL=y.

  This feature extends the nohz variable-size timer tick feature from idle
  to busy CPUs (running at most one task) as well, potentially reducing the
  number of timer interrupts significantly.

  This feature got motivated by real-time folks and the -rt tree, but the
  general utility and motivation of full-dynticks runs wider than that:

   - HPC workloads get faster: CPUs running a single task should be able
     to utilize a maximum amount of CPU power. A periodic timer tick at
     HZ=1000 can cause a constant overhead of up to 1.0%. This feature
     removes that overhead - and speeds up the system by 0.5%-1.0% on
     typical distro configs even on modern systems.

   - Real-time workload latency reduction: CPUs running critical tasks
     should experience as little jitter as possible. The last remaining
     source of kernel-related jitter was the periodic timer tick.

   - A single task executing on a CPU is a pretty common situation,
     especially with an increasing number of cores/CPUs, so this feature
     helps desktop and mobile workloads as well.

  The cost of the feature is mainly related to increased timer
  reprogramming overhead when a CPU switches its tick period, and thus
  slightly longer to-idle and from-idle latency.

  Configuration-wise a third mode of operation is added to the existing
  two NOHZ kconfig modes:

   - CONFIG_HZ_PERIODIC: [formerly !CONFIG_NO_HZ], now explicitly named
     as a config option. This is the traditional Linux periodic tick
     design: there's a HZ tick going on all the time, regardless of
     whether a CPU is idle or not.

   - CONFIG_NO_HZ_IDLE: [formerly CONFIG_NO_HZ=y], this turns off the
     periodic tick when a CPU enters idle mode.
   - CONFIG_NO_HZ_FULL: this new mode, in addition to turning off the
     tick when a CPU is idle, also slows the tick down to 1 Hz (one
     timer interrupt per second) when only a single task is running on
     a CPU.

  The .config behavior is compatible: existing !CONFIG_NO_HZ and
  CONFIG_NO_HZ=y settings get translated to the new values, without the
  user having to configure anything. CONFIG_NO_HZ_FULL is turned off by
  default.

  This feature is based on a lot of infrastructure work that has been
  steadily going upstream in the last 2-3 cycles: related RCU support
  and non-periodic cputime support in particular is upstream already.

  This tree adds the final pieces and activates the feature.

  The pull request is marked RFC because:

   - it's marked 64-bit only at the moment - the 32-bit support patch
     is small but did not get ready in time.

   - it has a number of fresh commits that came in after the merge
     window. The overwhelming majority of commits are from before the
     merge window, but still some aspects of the tree are fresh and so
     I marked it RFC.

   - it's a pretty wide-reaching feature with lots of effects - and
     while the components have been in testing for some time, the full
     combination is still not very widely used. That it's default-off
     should reduce its regression abilities and obviously there are no
     known regressions with CONFIG_NO_HZ_FULL=y enabled either.

   - the feature is not completely idempotent: there is no 100%
     equivalent replacement for a periodic scheduler/timer tick. In
     particular there's ongoing work to map out and reduce its effects
     on scheduler load-balancing and statistics. This should not impact
     correctness though; there are no known regressions related to this
     feature at this point.

   - it's a pretty ambitious feature that with time will likely be
     enabled by most Linux distros, and we'd like you to give input on
     its design/implementation, if you dislike some aspect we missed.
     Without flaming us to a crisp!
  :-)

  Future plans:

   - there's ongoing work to reduce 1 Hz to 0 Hz, to essentially shut
     off the periodic tick altogether when there's a single busy task
     on a CPU. We'd first like 1 Hz to be exposed more widely before we
     go for the 0 Hz target though.

   - once we reach 0 Hz we can remove the periodic tick assumption from
     nr_running>=2 as well, by essentially interrupting busy tasks only
     as frequently as the sched_latency constraints require us to do -
     once every 4-40 msecs, depending on nr_running.

  I am personally leaning towards biting the bullet and doing this in
  v3.10, like the -rt tree this effort has been going on for too long -
  but the final word is up to you as usual.

  More technical details can be found in Documentation/timers/NO_HZ.txt"

* 'timers-nohz-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (39 commits)
  sched: Keep at least 1 tick per second for active dynticks tasks
  rcu: Fix full dynticks' dependency on wide RCU nocb mode
  nohz: Protect smp_processor_id() in tick_nohz_task_switch()
  nohz_full: Add documentation.
  cputime_nsecs: use math64.h for nsec resolution conversion helpers
  nohz: Select VIRT_CPU_ACCOUNTING_GEN from full dynticks config
  nohz: Reduce overhead under high-freq idling patterns
  nohz: Remove full dynticks' superfluous dependency on RCU tree
  nohz: Fix unavailable tick_stop tracepoint in dynticks idle
  nohz: Add basic tracing
  nohz: Select wide RCU nocb for full dynticks
  nohz: Disable the tick when irq resume in full dynticks CPU
  nohz: Re-evaluate the tick for the new task after a context switch
  nohz: Prepare to stop the tick on irq exit
  nohz: Implement full dynticks kick
  nohz: Re-evaluate the tick from the scheduler IPI
  sched: New helper to prevent from stopping the tick in full dynticks
  sched: Kick full dynticks CPU that have more than one task enqueued.
  perf: New helper to prevent full dynticks CPUs from stopping tick
  perf: Kick full dynticks CPU if events rotation is needed
  ...
Diffstat (limited to 'include/linux')
 include/linux/perf_event.h   |  6 ++++++
 include/linux/posix-timers.h |  2 ++
 include/linux/rcupdate.h     |  7 +++++++
 include/linux/sched.h        | 19 +++++++++++++------
 include/linux/tick.h         | 25 +++++++++++++++++++++----
 5 files changed, 49 insertions(+), 10 deletions(-)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index e0373d26c244..f463a46424e2 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -788,6 +788,12 @@ static inline int __perf_event_disable(void *info) { return -1; }
 static inline void perf_event_task_tick(void) { }
 #endif
 
+#if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_NO_HZ_FULL)
+extern bool perf_event_can_stop_tick(void);
+#else
+static inline bool perf_event_can_stop_tick(void) { return true; }
+#endif
+
 #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
 extern void perf_restore_debug_store(void);
 #else
diff --git a/include/linux/posix-timers.h b/include/linux/posix-timers.h
index 60bac697a91b..7794d75ed155 100644
--- a/include/linux/posix-timers.h
+++ b/include/linux/posix-timers.h
@@ -123,6 +123,8 @@ void run_posix_cpu_timers(struct task_struct *task);
 void posix_cpu_timers_exit(struct task_struct *task);
 void posix_cpu_timers_exit_group(struct task_struct *task);
 
+bool posix_cpu_timers_can_stop_tick(struct task_struct *tsk);
+
 void set_process_cpu_timer(struct task_struct *task, unsigned int clock_idx,
 			   cputime_t *newval, cputime_t *oldval);
 
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 9ed2c9a4de45..4ccd68e49b00 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -1000,4 +1000,11 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
 #define kfree_rcu(ptr, rcu_head)					\
 	__kfree_rcu(&((ptr)->rcu_head), offsetof(typeof(*(ptr)), rcu_head))
 
+#ifdef CONFIG_RCU_NOCB_CPU
+extern bool rcu_is_nocb_cpu(int cpu);
+#else
+static inline bool rcu_is_nocb_cpu(int cpu) { return false; }
+#endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
+
+
 #endif /* __LINUX_RCUPDATE_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6f950048b6e9..4800e9d1864c 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -231,7 +231,7 @@ extern void init_idle_bootup_task(struct task_struct *idle);
 
 extern int runqueue_is_locked(int cpu);
 
-#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ)
+#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
 extern void nohz_balance_enter_idle(int cpu);
 extern void set_cpu_sd_state_idle(void);
 extern int get_nohz_timer_target(void);
@@ -1764,13 +1764,13 @@ static inline int set_cpus_allowed_ptr(struct task_struct *p,
 }
 #endif
 
-#ifdef CONFIG_NO_HZ
+#ifdef CONFIG_NO_HZ_COMMON
 void calc_load_enter_idle(void);
 void calc_load_exit_idle(void);
 #else
 static inline void calc_load_enter_idle(void) { }
 static inline void calc_load_exit_idle(void) { }
-#endif /* CONFIG_NO_HZ */
+#endif /* CONFIG_NO_HZ_COMMON */
 
 #ifndef CONFIG_CPUMASK_OFFSTACK
 static inline int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask)
@@ -1856,10 +1856,17 @@ extern void idle_task_exit(void);
 static inline void idle_task_exit(void) {}
 #endif
 
-#if defined(CONFIG_NO_HZ) && defined(CONFIG_SMP)
-extern void wake_up_idle_cpu(int cpu);
+#if defined(CONFIG_NO_HZ_COMMON) && defined(CONFIG_SMP)
+extern void wake_up_nohz_cpu(int cpu);
 #else
-static inline void wake_up_idle_cpu(int cpu) { }
+static inline void wake_up_nohz_cpu(int cpu) { }
+#endif
+
+#ifdef CONFIG_NO_HZ_FULL
+extern bool sched_can_stop_tick(void);
+extern u64 scheduler_tick_max_deferment(void);
+#else
+static inline bool sched_can_stop_tick(void) { return false; }
 #endif
 
 #ifdef CONFIG_SCHED_AUTOGROUP
diff --git a/include/linux/tick.h b/include/linux/tick.h
index 553272e6af55..9180f4b85e6d 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -82,7 +82,7 @@ extern int tick_program_event(ktime_t expires, int force);
 extern void tick_setup_sched_timer(void);
 # endif
 
-# if defined CONFIG_NO_HZ || defined CONFIG_HIGH_RES_TIMERS
+# if defined CONFIG_NO_HZ_COMMON || defined CONFIG_HIGH_RES_TIMERS
 extern void tick_cancel_sched_timer(int cpu);
 # else
 static inline void tick_cancel_sched_timer(int cpu) { }
@@ -123,7 +123,7 @@ static inline void tick_check_idle(int cpu) { }
 static inline int tick_oneshot_mode_active(void) { return 0; }
 #endif /* !CONFIG_GENERIC_CLOCKEVENTS */
 
-# ifdef CONFIG_NO_HZ
+# ifdef CONFIG_NO_HZ_COMMON
 DECLARE_PER_CPU(struct tick_sched, tick_cpu_sched);
 
 static inline int tick_nohz_tick_stopped(void)
@@ -138,7 +138,7 @@ extern ktime_t tick_nohz_get_sleep_length(void);
 extern u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time);
 extern u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time);
 
-# else /* !CONFIG_NO_HZ */
+# else /* !CONFIG_NO_HZ_COMMON */
 static inline int tick_nohz_tick_stopped(void)
 {
 	return 0;
@@ -155,7 +155,24 @@ static inline ktime_t tick_nohz_get_sleep_length(void)
 }
 static inline u64 get_cpu_idle_time_us(int cpu, u64 *unused) { return -1; }
 static inline u64 get_cpu_iowait_time_us(int cpu, u64 *unused) { return -1; }
-# endif /* !NO_HZ */
+# endif /* !CONFIG_NO_HZ_COMMON */
+
+#ifdef CONFIG_NO_HZ_FULL
+extern void tick_nohz_init(void);
+extern int tick_nohz_full_cpu(int cpu);
+extern void tick_nohz_full_check(void);
+extern void tick_nohz_full_kick(void);
+extern void tick_nohz_full_kick_all(void);
+extern void tick_nohz_task_switch(struct task_struct *tsk);
+#else
+static inline void tick_nohz_init(void) { }
+static inline int tick_nohz_full_cpu(int cpu) { return 0; }
+static inline void tick_nohz_full_check(void) { }
+static inline void tick_nohz_full_kick(void) { }
+static inline void tick_nohz_full_kick_all(void) { }
+static inline void tick_nohz_task_switch(struct task_struct *tsk) { }
+#endif
+
 
 # ifdef CONFIG_CPU_IDLE_GOV_MENU
 extern void menu_hrtimer_cancel(void);