author		Linus Torvalds <torvalds@linux-foundation.org>	2013-05-05 16:23:27 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2013-05-05 16:23:27 -0400
commit		534c97b0950b1967bca1c753aeaed32f5db40264 (patch)
tree		9421d26e4f6d479d1bc32b036a731b065daab0fa /include
parent		64049d1973c1735f543eb7a55653e291e108b0cb (diff)
parent		265f22a975c1e4cc3a4d1f94a3ec53ffbb6f5b9f (diff)
Merge branch 'timers-nohz-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull 'full dynticks' support from Ingo Molnar:
 "This tree from Frederic Weisbecker adds a new, (exciting! :-) core
  kernel feature to the timer and scheduler subsystems: 'full dynticks',
  or CONFIG_NO_HZ_FULL=y.

  This feature extends the nohz variable-size timer tick feature from
  idle to busy CPUs (running at most one task) as well, potentially
  reducing the number of timer interrupts significantly.

  This feature got motivated by real-time folks and the -rt tree, but
  the general utility and motivation of full-dynticks runs wider than
  that:

   - HPC workloads get faster: CPUs running a single task should be able
     to utilize a maximum amount of CPU power.  A periodic timer tick at
     HZ=1000 can cause a constant overhead of up to 1.0%.  This feature
     removes that overhead - and speeds up the system by 0.5%-1.0% on
     typical distro configs even on modern systems.

   - Real-time workload latency reduction: CPUs running critical tasks
     should experience as little jitter as possible.  The last remaining
     source of kernel-related jitter was the periodic timer tick.

   - A single task executing on a CPU is a pretty common situation,
     especially with an increasing number of cores/CPUs, so this feature
     helps desktop and mobile workloads as well.

  The cost of the feature is mainly related to increased timer
  reprogramming overhead when a CPU switches its tick period, and thus
  slightly longer to-idle and from-idle latency.

  Configuration-wise a third mode of operation is added to the existing
  two NOHZ kconfig modes:

   - CONFIG_HZ_PERIODIC: [formerly !CONFIG_NO_HZ], now explicitly named
     as a config option.  This is the traditional Linux periodic tick
     design: there's a HZ tick going on all the time, regardless of
     whether a CPU is idle or not.

   - CONFIG_NO_HZ_IDLE: [formerly CONFIG_NO_HZ=y], this turns off the
     periodic tick when a CPU enters idle mode.

   - CONFIG_NO_HZ_FULL: this new mode, in addition to turning off the
     tick when a CPU is idle, also slows the tick down to 1 Hz (one
     timer interrupt per second) when only a single task is running on
     a CPU.

  The .config behavior is compatible: existing !CONFIG_NO_HZ and
  CONFIG_NO_HZ=y settings get translated to the new values, without the
  user having to configure anything.  CONFIG_NO_HZ_FULL is turned off
  by default.

  This feature is based on a lot of infrastructure work that has been
  steadily going upstream in the last 2-3 cycles: related RCU support
  and non-periodic cputime support in particular is upstream already.

  This tree adds the final pieces and activates the feature.  The pull
  request is marked RFC because:

   - it's marked 64-bit only at the moment - the 32-bit support patch is
     small but did not get ready in time.

   - it has a number of fresh commits that came in after the merge
     window.  The overwhelming majority of commits are from before the
     merge window, but still some aspects of the tree are fresh and so
     I marked it RFC.

   - it's a pretty wide-reaching feature with lots of effects - and
     while the components have been in testing for some time, the full
     combination is still not very widely used.  That it's default-off
     should reduce its regression abilities and obviously there are no
     known regressions with CONFIG_NO_HZ_FULL=y enabled either.

   - the feature is not completely idempotent: there is no 100%
     equivalent replacement for a periodic scheduler/timer tick.  In
     particular there's ongoing work to map out and reduce its effects
     on scheduler load-balancing and statistics.  This should not impact
     correctness though; there are no known regressions related to this
     feature at this point.

   - it's a pretty ambitious feature that with time will likely be
     enabled by most Linux distros, and we'd like your input on its
     design/implementation, if you dislike some aspect we missed.
     Without flaming us to a crisp! :-)

  Future plans:

   - there's ongoing work to reduce 1Hz to 0Hz, to essentially shut off
     the periodic tick altogether when there's a single busy task on a
     CPU.  We'd first like 1 Hz to be exposed more widely before we go
     for the 0 Hz target though.

   - once we reach 0 Hz we can remove the periodic tick assumption from
     nr_running>=2 as well, by essentially interrupting busy tasks only
     as frequently as the sched_latency constraints require us to do -
     once every 4-40 msecs, depending on nr_running.

  I am personally leaning towards biting the bullet and doing this in
  v3.10, like the -rt tree this effort has been going on for too long -
  but the final word is up to you as usual.

  More technical details can be found in Documentation/timers/NO_HZ.txt"

* 'timers-nohz-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (39 commits)
  sched: Keep at least 1 tick per second for active dynticks tasks
  rcu: Fix full dynticks' dependency on wide RCU nocb mode
  nohz: Protect smp_processor_id() in tick_nohz_task_switch()
  nohz_full: Add documentation.
  cputime_nsecs: use math64.h for nsec resolution conversion helpers
  nohz: Select VIRT_CPU_ACCOUNTING_GEN from full dynticks config
  nohz: Reduce overhead under high-freq idling patterns
  nohz: Remove full dynticks' superfluous dependency on RCU tree
  nohz: Fix unavailable tick_stop tracepoint in dynticks idle
  nohz: Add basic tracing
  nohz: Select wide RCU nocb for full dynticks
  nohz: Disable the tick when irq resume in full dynticks CPU
  nohz: Re-evaluate the tick for the new task after a context switch
  nohz: Prepare to stop the tick on irq exit
  nohz: Implement full dynticks kick
  nohz: Re-evaluate the tick from the scheduler IPI
  sched: New helper to prevent from stopping the tick in full dynticks
  sched: Kick full dynticks CPU that have more than one task enqueued.
  perf: New helper to prevent full dynticks CPUs from stopping tick
  perf: Kick full dynticks CPU if events rotation is needed
  ...
Diffstat (limited to 'include')
-rw-r--r--	include/asm-generic/cputime_nsecs.h	28
-rw-r--r--	include/linux/perf_event.h	6
-rw-r--r--	include/linux/posix-timers.h	2
-rw-r--r--	include/linux/rcupdate.h	7
-rw-r--r--	include/linux/sched.h	19
-rw-r--r--	include/linux/tick.h	25
7 files changed, 89 insertions(+), 19 deletions(-)
diff --git a/include/asm-generic/cputime_nsecs.h b/include/asm-generic/cputime_nsecs.h
index a8ece9a33aef..2c9e62c2bfd0 100644
--- a/include/asm-generic/cputime_nsecs.h
+++ b/include/asm-generic/cputime_nsecs.h
@@ -16,21 +16,27 @@
 #ifndef _ASM_GENERIC_CPUTIME_NSECS_H
 #define _ASM_GENERIC_CPUTIME_NSECS_H
 
+#include <linux/math64.h>
+
 typedef u64 __nocast cputime_t;
 typedef u64 __nocast cputime64_t;
 
 #define cputime_one_jiffy		jiffies_to_cputime(1)
 
+#define cputime_div(__ct, divisor)  div_u64((__force u64)__ct, divisor)
+#define cputime_div_rem(__ct, divisor, remainder) \
+	div_u64_rem((__force u64)__ct, divisor, remainder);
+
 /*
  * Convert cputime <-> jiffies (HZ)
  */
 #define cputime_to_jiffies(__ct)	\
-	((__force u64)(__ct) / (NSEC_PER_SEC / HZ))
+	cputime_div(__ct, NSEC_PER_SEC / HZ)
 #define cputime_to_scaled(__ct)		(__ct)
 #define jiffies_to_cputime(__jif)	\
 	(__force cputime_t)((__jif) * (NSEC_PER_SEC / HZ))
 #define cputime64_to_jiffies64(__ct)	\
-	((__force u64)(__ct) / (NSEC_PER_SEC / HZ))
+	cputime_div(__ct, NSEC_PER_SEC / HZ)
 #define jiffies64_to_cputime64(__jif)	\
 	(__force cputime64_t)((__jif) * (NSEC_PER_SEC / HZ))
 
@@ -45,7 +51,7 @@ typedef u64 __nocast cputime64_t;
  * Convert cputime <-> microseconds
  */
 #define cputime_to_usecs(__ct)	\
-	((__force u64)(__ct) / NSEC_PER_USEC)
+	cputime_div(__ct, NSEC_PER_USEC)
 #define usecs_to_cputime(__usecs)	\
 	(__force cputime_t)((__usecs) * NSEC_PER_USEC)
 #define usecs_to_cputime64(__usecs)	\
@@ -55,7 +61,7 @@ typedef u64 __nocast cputime64_t;
  * Convert cputime <-> seconds
  */
 #define cputime_to_secs(__ct)		\
-	((__force u64)(__ct) / NSEC_PER_SEC)
+	cputime_div(__ct, NSEC_PER_SEC)
 #define secs_to_cputime(__secs)	\
 	(__force cputime_t)((__secs) * NSEC_PER_SEC)
 
@@ -69,8 +75,10 @@ static inline cputime_t timespec_to_cputime(const struct timespec *val)
 }
 static inline void cputime_to_timespec(const cputime_t ct, struct timespec *val)
 {
-	val->tv_sec = (__force u64) ct / NSEC_PER_SEC;
-	val->tv_nsec = (__force u64) ct % NSEC_PER_SEC;
+	u32 rem;
+
+	val->tv_sec = cputime_div_rem(ct, NSEC_PER_SEC, &rem);
+	val->tv_nsec = rem;
 }
 
 /*
@@ -83,15 +91,17 @@ static inline cputime_t timeval_to_cputime(const struct timeval *val)
 }
 static inline void cputime_to_timeval(const cputime_t ct, struct timeval *val)
 {
-	val->tv_sec = (__force u64) ct / NSEC_PER_SEC;
-	val->tv_usec = ((__force u64) ct % NSEC_PER_SEC) / NSEC_PER_USEC;
+	u32 rem;
+
+	val->tv_sec = cputime_div_rem(ct, NSEC_PER_SEC, &rem);
+	val->tv_usec = rem / NSEC_PER_USEC;
 }
 
 /*
  * Convert cputime <-> clock (USER_HZ)
  */
 #define cputime_to_clock_t(__ct)	\
-	((__force u64)(__ct) / (NSEC_PER_SEC / USER_HZ))
+	cputime_div(__ct, (NSEC_PER_SEC / USER_HZ))
 #define clock_t_to_cputime(__x)	\
 	(__force cputime_t)((__x) * (NSEC_PER_SEC / USER_HZ))
 
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index e0373d26c244..f463a46424e2 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -788,6 +788,12 @@ static inline int __perf_event_disable(void *info) { return -1; }
 static inline void perf_event_task_tick(void)				{ }
 #endif
 
+#if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_NO_HZ_FULL)
+extern bool perf_event_can_stop_tick(void);
+#else
+static inline bool perf_event_can_stop_tick(void)		{ return true; }
+#endif
+
 #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
 extern void perf_restore_debug_store(void);
 #else
diff --git a/include/linux/posix-timers.h b/include/linux/posix-timers.h
index 60bac697a91b..7794d75ed155 100644
--- a/include/linux/posix-timers.h
+++ b/include/linux/posix-timers.h
@@ -123,6 +123,8 @@ void run_posix_cpu_timers(struct task_struct *task);
 void posix_cpu_timers_exit(struct task_struct *task);
 void posix_cpu_timers_exit_group(struct task_struct *task);
 
+bool posix_cpu_timers_can_stop_tick(struct task_struct *tsk);
+
 void set_process_cpu_timer(struct task_struct *task, unsigned int clock_idx,
 			   cputime_t *newval, cputime_t *oldval);
 
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 9ed2c9a4de45..4ccd68e49b00 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -1000,4 +1000,11 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
 #define kfree_rcu(ptr, rcu_head)					\
 	__kfree_rcu(&((ptr)->rcu_head), offsetof(typeof(*(ptr)), rcu_head))
 
+#ifdef CONFIG_RCU_NOCB_CPU
+extern bool rcu_is_nocb_cpu(int cpu);
+#else
+static inline bool rcu_is_nocb_cpu(int cpu) { return false; }
+#endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
+
+
 #endif /* __LINUX_RCUPDATE_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6f950048b6e9..4800e9d1864c 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -231,7 +231,7 @@ extern void init_idle_bootup_task(struct task_struct *idle);
 
 extern int runqueue_is_locked(int cpu);
 
-#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ)
+#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
 extern void nohz_balance_enter_idle(int cpu);
 extern void set_cpu_sd_state_idle(void);
 extern int get_nohz_timer_target(void);
@@ -1764,13 +1764,13 @@ static inline int set_cpus_allowed_ptr(struct task_struct *p,
 }
 #endif
 
-#ifdef CONFIG_NO_HZ
+#ifdef CONFIG_NO_HZ_COMMON
 void calc_load_enter_idle(void);
 void calc_load_exit_idle(void);
 #else
 static inline void calc_load_enter_idle(void) { }
 static inline void calc_load_exit_idle(void) { }
-#endif /* CONFIG_NO_HZ */
+#endif /* CONFIG_NO_HZ_COMMON */
 
 #ifndef CONFIG_CPUMASK_OFFSTACK
 static inline int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask)
@@ -1856,10 +1856,17 @@ extern void idle_task_exit(void);
 static inline void idle_task_exit(void) {}
 #endif
 
-#if defined(CONFIG_NO_HZ) && defined(CONFIG_SMP)
-extern void wake_up_idle_cpu(int cpu);
+#if defined(CONFIG_NO_HZ_COMMON) && defined(CONFIG_SMP)
+extern void wake_up_nohz_cpu(int cpu);
 #else
-static inline void wake_up_idle_cpu(int cpu) { }
+static inline void wake_up_nohz_cpu(int cpu) { }
+#endif
+
+#ifdef CONFIG_NO_HZ_FULL
+extern bool sched_can_stop_tick(void);
+extern u64 scheduler_tick_max_deferment(void);
+#else
+static inline bool sched_can_stop_tick(void) { return false; }
 #endif
 
 #ifdef CONFIG_SCHED_AUTOGROUP
diff --git a/include/linux/tick.h b/include/linux/tick.h
index 553272e6af55..9180f4b85e6d 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -82,7 +82,7 @@ extern int tick_program_event(ktime_t expires, int force);
 extern void tick_setup_sched_timer(void);
 # endif
 
-# if defined CONFIG_NO_HZ || defined CONFIG_HIGH_RES_TIMERS
+# if defined CONFIG_NO_HZ_COMMON || defined CONFIG_HIGH_RES_TIMERS
 extern void tick_cancel_sched_timer(int cpu);
 # else
 static inline void tick_cancel_sched_timer(int cpu) { }
@@ -123,7 +123,7 @@ static inline void tick_check_idle(int cpu) { }
 static inline int tick_oneshot_mode_active(void) { return 0; }
 #endif /* !CONFIG_GENERIC_CLOCKEVENTS */
 
-# ifdef CONFIG_NO_HZ
+# ifdef CONFIG_NO_HZ_COMMON
 DECLARE_PER_CPU(struct tick_sched, tick_cpu_sched);
 
 static inline int tick_nohz_tick_stopped(void)
@@ -138,7 +138,7 @@ extern ktime_t tick_nohz_get_sleep_length(void);
 extern u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time);
 extern u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time);
 
-# else /* !CONFIG_NO_HZ */
+# else /* !CONFIG_NO_HZ_COMMON */
 static inline int tick_nohz_tick_stopped(void)
 {
 	return 0;
@@ -155,7 +155,24 @@ static inline ktime_t tick_nohz_get_sleep_length(void)
 }
 static inline u64 get_cpu_idle_time_us(int cpu, u64 *unused) { return -1; }
 static inline u64 get_cpu_iowait_time_us(int cpu, u64 *unused) { return -1; }
-# endif /* !NO_HZ */
+# endif /* !CONFIG_NO_HZ_COMMON */
+
+#ifdef CONFIG_NO_HZ_FULL
+extern void tick_nohz_init(void);
+extern int tick_nohz_full_cpu(int cpu);
+extern void tick_nohz_full_check(void);
+extern void tick_nohz_full_kick(void);
+extern void tick_nohz_full_kick_all(void);
+extern void tick_nohz_task_switch(struct task_struct *tsk);
+#else
+static inline void tick_nohz_init(void) { }
+static inline int tick_nohz_full_cpu(int cpu) { return 0; }
+static inline void tick_nohz_full_check(void) { }
+static inline void tick_nohz_full_kick(void) { }
+static inline void tick_nohz_full_kick_all(void) { }
+static inline void tick_nohz_task_switch(struct task_struct *tsk) { }
+#endif
+
 
 # ifdef CONFIG_CPU_IDLE_GOV_MENU
 extern void menu_hrtimer_cancel(void);
diff --git a/include/trace/events/timer.h b/include/trace/events/timer.h
index 8d219470624f..68c2c2000f02 100644
--- a/include/trace/events/timer.h
+++ b/include/trace/events/timer.h
@@ -323,6 +323,27 @@ TRACE_EVENT(itimer_expire,
 		  (int) __entry->pid, (unsigned long long)__entry->now)
 );
 
+#ifdef CONFIG_NO_HZ_COMMON
+TRACE_EVENT(tick_stop,
+
+	TP_PROTO(int success, char *error_msg),
+
+	TP_ARGS(success, error_msg),
+
+	TP_STRUCT__entry(
+		__field( int ,		success	)
+		__string( msg, 		error_msg )
+	),
+
+	TP_fast_assign(
+		__entry->success	= success;
+		__assign_str(msg, error_msg);
+	),
+
+	TP_printk("success=%s msg=%s",  __entry->success ? "yes" : "no", __get_str(msg))
+);
+#endif
+
 #endif /*  _TRACE_TIMER_H */
 
 /* This part must be outside protection */