author     Linus Torvalds <torvalds@linux-foundation.org>  2017-02-20 13:06:32 -0500
committer  Linus Torvalds <torvalds@linux-foundation.org>  2017-02-20 13:06:32 -0500
commit     20dcfe1b7df4072a3c13bdb7506f7138125d0099 (patch)
tree       b7a206aeb59240622a5f24e2c54f3d98c37caba2
parent     c9b9f207b90468bf9583f7ed71c15d0142bbf9b1 (diff)
parent     336a9cde10d641e70bac67d90ae91b3190c3edca (diff)
Merge branch 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull timer updates from Thomas Gleixner:
"Nothing exciting, just the usual pile of fixes, updates and cleanups:
- A bunch of clocksource driver updates
- Removal of CONFIG_TIMER_STATS and the related /proc file
- More posix timer slim down work
- A scalability enhancement in the tick broadcast code
- Math cleanups"
* 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (23 commits)
hrtimer: Catch invalid clockids again
math64, tile: Fix build failure
clocksource/drivers/arm_arch_timer: Mark cyclecounter __ro_after_init
timerfd: Protect the might cancel mechanism proper
timer_list: Remove useless cast when printing
time: Remove CONFIG_TIMER_STATS
clocksource/drivers/arm_arch_timer: Work around Hisilicon erratum 161010101
clocksource/drivers/arm_arch_timer: Introduce generic errata handling infrastructure
clocksource/drivers/arm_arch_timer: Remove fsl-a008585 parameter
clocksource/drivers/arm_arch_timer: Add dt binding for hisilicon-161010101 erratum
clocksource/drivers/ostm: Add renesas-ostm timer driver
clocksource/drivers/ostm: Document renesas-ostm timer DT bindings
clocksource/drivers/tcb_clksrc: Use 32 bit tcb as sched_clock
clocksource/drivers/gemini: Add driver for the Cortina Gemini
clocksource: add DT bindings for Cortina Gemini
clockevents: Add a clkevt-of mechanism like clksrc-of
tick/broadcast: Reduce lock cacheline contention
timers: Omit POSIX timer stuff from task_struct when disabled
x86/timer: Make delay() work during early bootup
delay: Add explanation of udelay() inaccuracy
...
43 files changed, 1022 insertions, 846 deletions
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index be7c0d9506b1..d8fc55aa9d44 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt | |||
@@ -549,15 +549,6 @@ | |||
549 | loops can be debugged more effectively on production | 549 | loops can be debugged more effectively on production |
550 | systems. | 550 | systems. |
551 | 551 | ||
552 | clocksource.arm_arch_timer.fsl-a008585= | ||
553 | [ARM64] | ||
554 | Format: <bool> | ||
555 | Enable/disable the workaround of Freescale/NXP | ||
556 | erratum A-008585. This can be useful for KVM | ||
557 | guests, if the guest device tree doesn't show the | ||
558 | erratum. If unspecified, the workaround is | ||
559 | enabled based on the device tree. | ||
560 | |||
561 | clearcpuid=BITNUM [X86] | 552 | clearcpuid=BITNUM [X86] |
562 | Disable CPUID feature X for the kernel. See | 553 | Disable CPUID feature X for the kernel. See |
563 | arch/x86/include/asm/cpufeatures.h for the valid bit | 554 | arch/x86/include/asm/cpufeatures.h for the valid bit |
diff --git a/Documentation/devicetree/bindings/arm/arch_timer.txt b/Documentation/devicetree/bindings/arm/arch_timer.txt index ad440a2b8051..e926aea1147d 100644 --- a/Documentation/devicetree/bindings/arm/arch_timer.txt +++ b/Documentation/devicetree/bindings/arm/arch_timer.txt | |||
@@ -31,6 +31,12 @@ to deliver its interrupts via SPIs. | |||
31 | This also affects writes to the tval register, due to the implicit | 31 | This also affects writes to the tval register, due to the implicit |
32 | counter read. | 32 | counter read. |
33 | 33 | ||
34 | - hisilicon,erratum-161010101 : A boolean property. Indicates the | ||
35 | presence of Hisilicon erratum 161010101, which says that reading the | ||
36 | counters is unreliable in some cases, and reads may return a value 32 | ||
37 | beyond the correct value. This also affects writes to the tval | ||
38 | registers, due to the implicit counter read. | ||
39 | |||
34 | ** Optional properties: | 40 | ** Optional properties: |
35 | 41 | ||
36 | - arm,cpu-registers-not-fw-configured : Firmware does not initialize | 42 | - arm,cpu-registers-not-fw-configured : Firmware does not initialize |
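For illustration, a timer node that opts into this workaround simply carries the new boolean property. A hypothetical arm,armv8-timer node (interrupt numbers invented for the sketch, not taken from the patch):

	timer {
		compatible = "arm,armv8-timer";
		interrupts = <GIC_PPI 13 IRQ_TYPE_LEVEL_LOW>,
			     <GIC_PPI 14 IRQ_TYPE_LEVEL_LOW>,
			     <GIC_PPI 11 IRQ_TYPE_LEVEL_LOW>,
			     <GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>;
		/* switch the driver to the out-of-line counter accessors */
		hisilicon,erratum-161010101;
	};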
diff --git a/Documentation/devicetree/bindings/timer/cortina,gemini-timer.txt b/Documentation/devicetree/bindings/timer/cortina,gemini-timer.txt new file mode 100644 index 000000000000..16ea1d3b2e9e --- /dev/null +++ b/Documentation/devicetree/bindings/timer/cortina,gemini-timer.txt | |||
@@ -0,0 +1,22 @@ | |||
1 | Cortina Systems Gemini timer | ||
2 | |||
3 | This timer is embedded in the Cortina Systems Gemini SoCs. | ||
4 | |||
5 | Required properties: | ||
6 | |||
7 | - compatible : Must be "cortina,gemini-timer" | ||
8 | - reg : Should contain registers location and length | ||
9 | - interrupts : Should contain the three timer interrupts with | ||
10 | flags for rising edge | ||
11 | - syscon : a phandle to the global Gemini system controller | ||
12 | |||
13 | Example: | ||
14 | |||
15 | timer@43000000 { | ||
16 | compatible = "cortina,gemini-timer"; | ||
17 | reg = <0x43000000 0x1000>; | ||
18 | interrupts = <14 IRQ_TYPE_EDGE_RISING>, /* Timer 1 */ | ||
19 | <15 IRQ_TYPE_EDGE_RISING>, /* Timer 2 */ | ||
20 | <16 IRQ_TYPE_EDGE_RISING>; /* Timer 3 */ | ||
21 | syscon = <&syscon>; | ||
22 | }; | ||
diff --git a/Documentation/devicetree/bindings/timer/renesas,ostm.txt b/Documentation/devicetree/bindings/timer/renesas,ostm.txt new file mode 100644 index 000000000000..be3ae0fdf775 --- /dev/null +++ b/Documentation/devicetree/bindings/timer/renesas,ostm.txt | |||
@@ -0,0 +1,30 @@ | |||
1 | * Renesas OS Timer (OSTM) | ||
2 | |||
3 | The OSTM is a multi-channel 32-bit timer/counter with fixed clock | ||
4 | source that can operate in either interval count down timer or free-running | ||
5 | compare match mode. | ||
6 | |||
7 | Channels are independent from each other. | ||
8 | |||
9 | Required Properties: | ||
10 | |||
11 | - compatible: must be one or more of the following: | ||
12 | - "renesas,r7s72100-ostm" for the r7s72100 OSTM | ||
13 | - "renesas,ostm" for any OSTM | ||
14 | This is a fallback for the above renesas,*-ostm entries | ||
15 | |||
16 | - reg: base address and length of the register block for a timer channel. | ||
17 | |||
18 | - interrupts: interrupt specifier for the timer channel. | ||
19 | |||
20 | - clocks: clock specifier for the timer channel. | ||
21 | |||
22 | Example: R7S72100 (RZ/A1H) OSTM node | ||
23 | |||
24 | ostm0: timer@fcfec000 { | ||
25 | compatible = "renesas,r7s72100-ostm", "renesas,ostm"; | ||
26 | reg = <0xfcfec000 0x30>; | ||
27 | interrupts = <GIC_SPI 102 IRQ_TYPE_EDGE_RISING>; | ||
28 | clocks = <&mstp5_clks R7S72100_CLK_OSTM0>; | ||
29 | power-domains = <&cpg_clocks>; | ||
30 | }; | ||
diff --git a/Documentation/timers/timer_stats.txt b/Documentation/timers/timer_stats.txt deleted file mode 100644 index de835ee97455..000000000000 --- a/Documentation/timers/timer_stats.txt +++ /dev/null | |||
@@ -1,73 +0,0 @@ | |||
1 | timer_stats - timer usage statistics | ||
2 | ------------------------------------ | ||
3 | |||
4 | timer_stats is a debugging facility to make the timer (ab)usage in a Linux | ||
5 | system visible to kernel and userspace developers. If enabled in the config | ||
6 | but not used it has almost zero runtime overhead, and a relatively small | ||
7 | data structure overhead. Even if collection is enabled runtime all the | ||
8 | locking is per-CPU and lookup is hashed. | ||
9 | |||
10 | timer_stats should be used by kernel and userspace developers to verify that | ||
11 | their code does not make unduly use of timers. This helps to avoid unnecessary | ||
12 | wakeups, which should be avoided to optimize power consumption. | ||
13 | |||
14 | It can be enabled by CONFIG_TIMER_STATS in the "Kernel hacking" configuration | ||
15 | section. | ||
16 | |||
17 | timer_stats collects information about the timer events which are fired in a | ||
18 | Linux system over a sample period: | ||
19 | |||
20 | - the pid of the task(process) which initialized the timer | ||
21 | - the name of the process which initialized the timer | ||
22 | - the function where the timer was initialized | ||
23 | - the callback function which is associated to the timer | ||
24 | - the number of events (callbacks) | ||
25 | |||
26 | timer_stats adds an entry to /proc: /proc/timer_stats | ||
27 | |||
28 | This entry is used to control the statistics functionality and to read out the | ||
29 | sampled information. | ||
30 | |||
31 | The timer_stats functionality is inactive on bootup. | ||
32 | |||
33 | To activate a sample period issue: | ||
34 | # echo 1 >/proc/timer_stats | ||
35 | |||
36 | To stop a sample period issue: | ||
37 | # echo 0 >/proc/timer_stats | ||
38 | |||
39 | The statistics can be retrieved by: | ||
40 | # cat /proc/timer_stats | ||
41 | |||
42 | While sampling is enabled, each readout from /proc/timer_stats will see | ||
43 | newly updated statistics. Once sampling is disabled, the sampled information | ||
44 | is kept until a new sample period is started. This allows multiple readouts. | ||
45 | |||
46 | Sample output of /proc/timer_stats: | ||
47 | |||
48 | Timerstats sample period: 3.888770 s | ||
49 | 12, 0 swapper hrtimer_stop_sched_tick (hrtimer_sched_tick) | ||
50 | 15, 1 swapper hcd_submit_urb (rh_timer_func) | ||
51 | 4, 959 kedac schedule_timeout (process_timeout) | ||
52 | 1, 0 swapper page_writeback_init (wb_timer_fn) | ||
53 | 28, 0 swapper hrtimer_stop_sched_tick (hrtimer_sched_tick) | ||
54 | 22, 2948 IRQ 4 tty_flip_buffer_push (delayed_work_timer_fn) | ||
55 | 3, 3100 bash schedule_timeout (process_timeout) | ||
56 | 1, 1 swapper queue_delayed_work_on (delayed_work_timer_fn) | ||
57 | 1, 1 swapper queue_delayed_work_on (delayed_work_timer_fn) | ||
58 | 1, 1 swapper neigh_table_init_no_netlink (neigh_periodic_timer) | ||
59 | 1, 2292 ip __netdev_watchdog_up (dev_watchdog) | ||
60 | 1, 23 events/1 do_cache_clean (delayed_work_timer_fn) | ||
61 | 90 total events, 30.0 events/sec | ||
62 | |||
63 | The first column is the number of events, the second column the pid, the third | ||
64 | column is the name of the process. The forth column shows the function which | ||
65 | initialized the timer and in parenthesis the callback function which was | ||
66 | executed on expiry. | ||
67 | |||
68 | Thomas, Ingo | ||
69 | |||
70 | Added flag to indicate 'deferrable timer' in /proc/timer_stats. A deferrable | ||
71 | timer will appear as follows | ||
72 | 10D, 1 swapper queue_delayed_work_on (delayed_work_timer_fn) | ||
73 | |||
diff --git a/arch/arm/mach-shmobile/Kconfig b/arch/arm/mach-shmobile/Kconfig index 2bb4b09f079e..ad7d604ff001 100644 --- a/arch/arm/mach-shmobile/Kconfig +++ b/arch/arm/mach-shmobile/Kconfig | |||
@@ -57,6 +57,7 @@ config ARCH_R7S72100 | |||
57 | select PM | 57 | select PM |
58 | select PM_GENERIC_DOMAINS | 58 | select PM_GENERIC_DOMAINS |
59 | select SYS_SUPPORTS_SH_MTU2 | 59 | select SYS_SUPPORTS_SH_MTU2 |
60 | select RENESAS_OSTM | ||
60 | 61 | ||
61 | config ARCH_R8A73A4 | 62 | config ARCH_R8A73A4 |
62 | bool "R-Mobile APE6 (R8A73A40)" | 63 | bool "R-Mobile APE6 (R8A73A40)" |
diff --git a/arch/arm64/include/asm/arch_timer.h b/arch/arm64/include/asm/arch_timer.h index eaa5bbe3fa87..b4b34004a21e 100644 --- a/arch/arm64/include/asm/arch_timer.h +++ b/arch/arm64/include/asm/arch_timer.h | |||
@@ -29,41 +29,29 @@ | |||
29 | 29 | ||
30 | #include <clocksource/arm_arch_timer.h> | 30 | #include <clocksource/arm_arch_timer.h> |
31 | 31 | ||
32 | #if IS_ENABLED(CONFIG_FSL_ERRATUM_A008585) | 32 | #if IS_ENABLED(CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND) |
33 | extern struct static_key_false arch_timer_read_ool_enabled; | 33 | extern struct static_key_false arch_timer_read_ool_enabled; |
34 | #define needs_fsl_a008585_workaround() \ | 34 | #define needs_unstable_timer_counter_workaround() \ |
35 | static_branch_unlikely(&arch_timer_read_ool_enabled) | 35 | static_branch_unlikely(&arch_timer_read_ool_enabled) |
36 | #else | 36 | #else |
37 | #define needs_fsl_a008585_workaround() false | 37 | #define needs_unstable_timer_counter_workaround() false |
38 | #endif | 38 | #endif |
39 | 39 | ||
40 | u32 __fsl_a008585_read_cntp_tval_el0(void); | ||
41 | u32 __fsl_a008585_read_cntv_tval_el0(void); | ||
42 | u64 __fsl_a008585_read_cntvct_el0(void); | ||
43 | 40 | ||
44 | /* | 41 | struct arch_timer_erratum_workaround { |
45 | * The number of retries is an arbitrary value well beyond the highest number | 42 | const char *id; /* Indicate the Erratum ID */ |
46 | * of iterations the loop has been observed to take. | 43 | u32 (*read_cntp_tval_el0)(void); |
47 | */ | 44 | u32 (*read_cntv_tval_el0)(void); |
48 | #define __fsl_a008585_read_reg(reg) ({ \ | 45 | u64 (*read_cntvct_el0)(void); |
49 | u64 _old, _new; \ | 46 | }; |
50 | int _retries = 200; \ | 47 | |
51 | \ | 48 | extern const struct arch_timer_erratum_workaround *timer_unstable_counter_workaround; |
52 | do { \ | ||
53 | _old = read_sysreg(reg); \ | ||
54 | _new = read_sysreg(reg); \ | ||
55 | _retries--; \ | ||
56 | } while (unlikely(_old != _new) && _retries); \ | ||
57 | \ | ||
58 | WARN_ON_ONCE(!_retries); \ | ||
59 | _new; \ | ||
60 | }) | ||
61 | 49 | ||
62 | #define arch_timer_reg_read_stable(reg) \ | 50 | #define arch_timer_reg_read_stable(reg) \ |
63 | ({ \ | 51 | ({ \ |
64 | u64 _val; \ | 52 | u64 _val; \ |
65 | if (needs_fsl_a008585_workaround()) \ | 53 | if (needs_unstable_timer_counter_workaround()) \ |
66 | _val = __fsl_a008585_read_##reg(); \ | 54 | _val = timer_unstable_counter_workaround->read_##reg();\ |
67 | else \ | 55 | else \ |
68 | _val = read_sysreg(reg); \ | 56 | _val = read_sysreg(reg); \ |
69 | _val; \ | 57 | _val; \ |
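After this change the stable-read wrapper no longer hard-codes the Freescale accessors; it dispatches through whichever erratum entry was matched at boot. A rough C sketch of what arch_timer_reg_read_stable(cntvct_el0) boils down to (simplified, not the literal preprocessor output):

	static inline u64 read_cntvct_stable(void)
	{
		if (needs_unstable_timer_counter_workaround())
			/* accessor installed from the matched erratum descriptor */
			return timer_unstable_counter_workaround->read_cntvct_el0();

		return read_sysreg(cntvct_el0);
	}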
diff --git a/arch/tile/include/asm/Kbuild b/arch/tile/include/asm/Kbuild index 2d1f5638974c..20f2ba6d79be 100644 --- a/arch/tile/include/asm/Kbuild +++ b/arch/tile/include/asm/Kbuild | |||
@@ -5,7 +5,6 @@ generic-y += bug.h | |||
5 | generic-y += bugs.h | 5 | generic-y += bugs.h |
6 | generic-y += clkdev.h | 6 | generic-y += clkdev.h |
7 | generic-y += cputime.h | 7 | generic-y += cputime.h |
8 | generic-y += div64.h | ||
9 | generic-y += emergency-restart.h | 8 | generic-y += emergency-restart.h |
10 | generic-y += errno.h | 9 | generic-y += errno.h |
11 | generic-y += exec.h | 10 | generic-y += exec.h |
diff --git a/arch/tile/include/asm/div64.h b/arch/tile/include/asm/div64.h new file mode 100644 index 000000000000..9f765cdf09a5 --- /dev/null +++ b/arch/tile/include/asm/div64.h | |||
@@ -0,0 +1,16 @@ | |||
1 | #ifndef _ASM_TILE_DIV64_H | ||
2 | #define _ASM_TILE_DIV64_H | ||
3 | |||
4 | #include <linux/types.h> | ||
5 | |||
6 | #ifdef __tilegx__ | ||
7 | static inline u64 mul_u32_u32(u32 a, u32 b) | ||
8 | { | ||
9 | return __insn_mul_lu_lu(a, b); | ||
10 | } | ||
11 | #define mul_u32_u32 mul_u32_u32 | ||
12 | #endif | ||
13 | |||
14 | #include <asm-generic/div64.h> | ||
15 | |||
16 | #endif /* _ASM_TILE_DIV64_H */ | ||
diff --git a/arch/x86/include/asm/div64.h b/arch/x86/include/asm/div64.h index ced283ac79df..af95c47d5c9e 100644 --- a/arch/x86/include/asm/div64.h +++ b/arch/x86/include/asm/div64.h | |||
@@ -59,6 +59,17 @@ static inline u64 div_u64_rem(u64 dividend, u32 divisor, u32 *remainder) | |||
59 | } | 59 | } |
60 | #define div_u64_rem div_u64_rem | 60 | #define div_u64_rem div_u64_rem |
61 | 61 | ||
62 | static inline u64 mul_u32_u32(u32 a, u32 b) | ||
63 | { | ||
64 | u32 high, low; | ||
65 | |||
66 | asm ("mull %[b]" : "=a" (low), "=d" (high) | ||
67 | : [a] "a" (a), [b] "rm" (b) ); | ||
68 | |||
69 | return low | ((u64)high) << 32; | ||
70 | } | ||
71 | #define mul_u32_u32 mul_u32_u32 | ||
72 | |||
62 | #else | 73 | #else |
63 | # include <asm-generic/div64.h> | 74 | # include <asm-generic/div64.h> |
64 | #endif /* CONFIG_X86_32 */ | 75 | #endif /* CONFIG_X86_32 */ |
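The inline-asm helper exists because 32-bit x86 compilers do not reliably turn a widening multiply of two 32-bit values into a single mull instruction. The generic fallback used when an architecture does not provide its own override is presumably just the cast-and-multiply form, roughly:

	/* generic fallback (sketch): plain widening 32x32 -> 64 multiply */
	static inline u64 mul_u32_u32(u32 a, u32 b)
	{
		return (u64)a * b;
	}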
diff --git a/arch/x86/lib/delay.c b/arch/x86/lib/delay.c index 073d1f1a620b..a8e91ae89fb3 100644 --- a/arch/x86/lib/delay.c +++ b/arch/x86/lib/delay.c | |||
@@ -156,13 +156,13 @@ EXPORT_SYMBOL(__delay); | |||
156 | 156 | ||
157 | inline void __const_udelay(unsigned long xloops) | 157 | inline void __const_udelay(unsigned long xloops) |
158 | { | 158 | { |
159 | unsigned long lpj = this_cpu_read(cpu_info.loops_per_jiffy) ? : loops_per_jiffy; | ||
159 | int d0; | 160 | int d0; |
160 | 161 | ||
161 | xloops *= 4; | 162 | xloops *= 4; |
162 | asm("mull %%edx" | 163 | asm("mull %%edx" |
163 | :"=d" (xloops), "=&a" (d0) | 164 | :"=d" (xloops), "=&a" (d0) |
164 | :"1" (xloops), "0" | 165 | :"1" (xloops), "0" (lpj * (HZ / 4))); |
165 | (this_cpu_read(cpu_info.loops_per_jiffy) * (HZ/4))); | ||
166 | 166 | ||
167 | __delay(++xloops); | 167 | __delay(++xloops); |
168 | } | 168 | } |
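The point of this change is that the per-CPU loops_per_jiffy is still zero before the CPU has been calibrated, so a udelay() based on it would effectively be a no-op during early boot. The GNU "?:" shorthand falls back to the global loops_per_jiffy in that case; written out, the new first line is equivalent to:

	unsigned long lpj = this_cpu_read(cpu_info.loops_per_jiffy);

	if (!lpj)			/* not calibrated yet: early bootup */
		lpj = loops_per_jiffy;	/* use the global estimate instead */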
diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig index 4866f7aa32e6..3356ab821624 100644 --- a/drivers/clocksource/Kconfig +++ b/drivers/clocksource/Kconfig | |||
@@ -5,6 +5,10 @@ config CLKSRC_OF | |||
5 | bool | 5 | bool |
6 | select CLKSRC_PROBE | 6 | select CLKSRC_PROBE |
7 | 7 | ||
8 | config CLKEVT_OF | ||
9 | bool | ||
10 | select CLKEVT_PROBE | ||
11 | |||
8 | config CLKSRC_ACPI | 12 | config CLKSRC_ACPI |
9 | bool | 13 | bool |
10 | select CLKSRC_PROBE | 14 | select CLKSRC_PROBE |
@@ -12,6 +16,9 @@ config CLKSRC_ACPI | |||
12 | config CLKSRC_PROBE | 16 | config CLKSRC_PROBE |
13 | bool | 17 | bool |
14 | 18 | ||
19 | config CLKEVT_PROBE | ||
20 | bool | ||
21 | |||
15 | config CLKSRC_I8253 | 22 | config CLKSRC_I8253 |
16 | bool | 23 | bool |
17 | 24 | ||
@@ -60,6 +67,16 @@ config DW_APB_TIMER_OF | |||
60 | select DW_APB_TIMER | 67 | select DW_APB_TIMER |
61 | select CLKSRC_OF | 68 | select CLKSRC_OF |
62 | 69 | ||
70 | config GEMINI_TIMER | ||
71 | bool "Cortina Gemini timer driver" if COMPILE_TEST | ||
72 | depends on GENERIC_CLOCKEVENTS | ||
73 | depends on HAS_IOMEM | ||
74 | select CLKSRC_MMIO | ||
75 | select CLKSRC_OF | ||
76 | select MFD_SYSCON | ||
77 | help | ||
78 | Enables support for the Gemini timer | ||
79 | |||
63 | config ROCKCHIP_TIMER | 80 | config ROCKCHIP_TIMER |
64 | bool "Rockchip timer driver" if COMPILE_TEST | 81 | bool "Rockchip timer driver" if COMPILE_TEST |
65 | depends on ARM || ARM64 | 82 | depends on ARM || ARM64 |
@@ -325,16 +342,30 @@ config ARM_ARCH_TIMER_EVTSTREAM | |||
325 | This must be disabled for hardware validation purposes to detect any | 342 | This must be disabled for hardware validation purposes to detect any |
326 | hardware anomalies of missing events. | 343 | hardware anomalies of missing events. |
327 | 344 | ||
345 | config ARM_ARCH_TIMER_OOL_WORKAROUND | ||
346 | bool | ||
347 | |||
328 | config FSL_ERRATUM_A008585 | 348 | config FSL_ERRATUM_A008585 |
329 | bool "Workaround for Freescale/NXP Erratum A-008585" | 349 | bool "Workaround for Freescale/NXP Erratum A-008585" |
330 | default y | 350 | default y |
331 | depends on ARM_ARCH_TIMER && ARM64 | 351 | depends on ARM_ARCH_TIMER && ARM64 |
352 | select ARM_ARCH_TIMER_OOL_WORKAROUND | ||
332 | help | 353 | help |
333 | This option enables a workaround for Freescale/NXP Erratum | 354 | This option enables a workaround for Freescale/NXP Erratum |
334 | A-008585 ("ARM generic timer may contain an erroneous | 355 | A-008585 ("ARM generic timer may contain an erroneous |
335 | value"). The workaround will only be active if the | 356 | value"). The workaround will only be active if the |
336 | fsl,erratum-a008585 property is found in the timer node. | 357 | fsl,erratum-a008585 property is found in the timer node. |
337 | 358 | ||
359 | config HISILICON_ERRATUM_161010101 | ||
360 | bool "Workaround for Hisilicon Erratum 161010101" | ||
361 | default y | ||
362 | select ARM_ARCH_TIMER_OOL_WORKAROUND | ||
363 | depends on ARM_ARCH_TIMER && ARM64 | ||
364 | help | ||
365 | This option enables a workaround for Hisilicon Erratum | ||
366 | 161010101. The workaround will be active if the hisilicon,erratum-161010101 | ||
367 | property is found in the timer node. | ||
368 | |||
338 | config ARM_GLOBAL_TIMER | 369 | config ARM_GLOBAL_TIMER |
339 | bool "Support for the ARM global timer" if COMPILE_TEST | 370 | bool "Support for the ARM global timer" if COMPILE_TEST |
340 | select CLKSRC_OF if OF | 371 | select CLKSRC_OF if OF |
@@ -467,6 +498,13 @@ config SH_TIMER_MTU2 | |||
467 | Timer Pulse Unit 2 (MTU2) hardware available on SoCs from Renesas. | 498 | Timer Pulse Unit 2 (MTU2) hardware available on SoCs from Renesas. |
468 | This hardware comes with 16 bit-timer registers. | 499 | This hardware comes with 16 bit-timer registers. |
469 | 500 | ||
501 | config RENESAS_OSTM | ||
502 | bool "Renesas OSTM timer driver" if COMPILE_TEST | ||
503 | depends on GENERIC_CLOCKEVENTS | ||
504 | select CLKSRC_MMIO | ||
505 | help | ||
506 | Enables the support for the Renesas OSTM. | ||
507 | |||
470 | config SH_TIMER_TMU | 508 | config SH_TIMER_TMU |
471 | bool "Renesas TMU timer driver" if COMPILE_TEST | 509 | bool "Renesas TMU timer driver" if COMPILE_TEST |
472 | depends on GENERIC_CLOCKEVENTS | 510 | depends on GENERIC_CLOCKEVENTS |
diff --git a/drivers/clocksource/Makefile b/drivers/clocksource/Makefile index a14111e1f087..d227d1314f14 100644 --- a/drivers/clocksource/Makefile +++ b/drivers/clocksource/Makefile | |||
@@ -1,4 +1,5 @@ | |||
1 | obj-$(CONFIG_CLKSRC_PROBE) += clksrc-probe.o | 1 | obj-$(CONFIG_CLKSRC_PROBE) += clksrc-probe.o |
2 | obj-$(CONFIG_CLKEVT_PROBE) += clkevt-probe.o | ||
2 | obj-$(CONFIG_ATMEL_PIT) += timer-atmel-pit.o | 3 | obj-$(CONFIG_ATMEL_PIT) += timer-atmel-pit.o |
3 | obj-$(CONFIG_ATMEL_ST) += timer-atmel-st.o | 4 | obj-$(CONFIG_ATMEL_ST) += timer-atmel-st.o |
4 | obj-$(CONFIG_ATMEL_TCB_CLKSRC) += tcb_clksrc.o | 5 | obj-$(CONFIG_ATMEL_TCB_CLKSRC) += tcb_clksrc.o |
@@ -8,6 +9,7 @@ obj-$(CONFIG_CS5535_CLOCK_EVENT_SRC) += cs5535-clockevt.o | |||
8 | obj-$(CONFIG_CLKSRC_JCORE_PIT) += jcore-pit.o | 9 | obj-$(CONFIG_CLKSRC_JCORE_PIT) += jcore-pit.o |
9 | obj-$(CONFIG_SH_TIMER_CMT) += sh_cmt.o | 10 | obj-$(CONFIG_SH_TIMER_CMT) += sh_cmt.o |
10 | obj-$(CONFIG_SH_TIMER_MTU2) += sh_mtu2.o | 11 | obj-$(CONFIG_SH_TIMER_MTU2) += sh_mtu2.o |
12 | obj-$(CONFIG_RENESAS_OSTM) += renesas-ostm.o | ||
11 | obj-$(CONFIG_SH_TIMER_TMU) += sh_tmu.o | 13 | obj-$(CONFIG_SH_TIMER_TMU) += sh_tmu.o |
12 | obj-$(CONFIG_EM_TIMER_STI) += em_sti.o | 14 | obj-$(CONFIG_EM_TIMER_STI) += em_sti.o |
13 | obj-$(CONFIG_CLKBLD_I8253) += i8253.o | 15 | obj-$(CONFIG_CLKBLD_I8253) += i8253.o |
@@ -15,6 +17,7 @@ obj-$(CONFIG_CLKSRC_MMIO) += mmio.o | |||
15 | obj-$(CONFIG_DIGICOLOR_TIMER) += timer-digicolor.o | 17 | obj-$(CONFIG_DIGICOLOR_TIMER) += timer-digicolor.o |
16 | obj-$(CONFIG_DW_APB_TIMER) += dw_apb_timer.o | 18 | obj-$(CONFIG_DW_APB_TIMER) += dw_apb_timer.o |
17 | obj-$(CONFIG_DW_APB_TIMER_OF) += dw_apb_timer_of.o | 19 | obj-$(CONFIG_DW_APB_TIMER_OF) += dw_apb_timer_of.o |
20 | obj-$(CONFIG_GEMINI_TIMER) += timer-gemini.o | ||
18 | obj-$(CONFIG_ROCKCHIP_TIMER) += rockchip_timer.o | 21 | obj-$(CONFIG_ROCKCHIP_TIMER) += rockchip_timer.o |
19 | obj-$(CONFIG_CLKSRC_NOMADIK_MTU) += nomadik-mtu.o | 22 | obj-$(CONFIG_CLKSRC_NOMADIK_MTU) += nomadik-mtu.o |
20 | obj-$(CONFIG_CLKSRC_DBX500_PRCMU) += clksrc-dbx500-prcmu.o | 23 | obj-$(CONFIG_CLKSRC_DBX500_PRCMU) += clksrc-dbx500-prcmu.o |
diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c index 4c8c3fb2e8b2..93aa1364376a 100644 --- a/drivers/clocksource/arm_arch_timer.c +++ b/drivers/clocksource/arm_arch_timer.c | |||
@@ -96,41 +96,107 @@ early_param("clocksource.arm_arch_timer.evtstrm", early_evtstrm_cfg); | |||
96 | */ | 96 | */ |
97 | 97 | ||
98 | #ifdef CONFIG_FSL_ERRATUM_A008585 | 98 | #ifdef CONFIG_FSL_ERRATUM_A008585 |
99 | DEFINE_STATIC_KEY_FALSE(arch_timer_read_ool_enabled); | 99 | /* |
100 | EXPORT_SYMBOL_GPL(arch_timer_read_ool_enabled); | 100 | * The number of retries is an arbitrary value well beyond the highest number |
101 | 101 | * of iterations the loop has been observed to take. | |
102 | static int fsl_a008585_enable = -1; | 102 | */ |
103 | 103 | #define __fsl_a008585_read_reg(reg) ({ \ | |
104 | static int __init early_fsl_a008585_cfg(char *buf) | 104 | u64 _old, _new; \ |
105 | int _retries = 200; \ | ||
106 | \ | ||
107 | do { \ | ||
108 | _old = read_sysreg(reg); \ | ||
109 | _new = read_sysreg(reg); \ | ||
110 | _retries--; \ | ||
111 | } while (unlikely(_old != _new) && _retries); \ | ||
112 | \ | ||
113 | WARN_ON_ONCE(!_retries); \ | ||
114 | _new; \ | ||
115 | }) | ||
116 | |||
117 | static u32 notrace fsl_a008585_read_cntp_tval_el0(void) | ||
105 | { | 118 | { |
106 | int ret; | 119 | return __fsl_a008585_read_reg(cntp_tval_el0); |
107 | bool val; | 120 | } |
108 | 121 | ||
109 | ret = strtobool(buf, &val); | 122 | static u32 notrace fsl_a008585_read_cntv_tval_el0(void) |
110 | if (ret) | 123 | { |
111 | return ret; | 124 | return __fsl_a008585_read_reg(cntv_tval_el0); |
125 | } | ||
112 | 126 | ||
113 | fsl_a008585_enable = val; | 127 | static u64 notrace fsl_a008585_read_cntvct_el0(void) |
114 | return 0; | 128 | { |
129 | return __fsl_a008585_read_reg(cntvct_el0); | ||
115 | } | 130 | } |
116 | early_param("clocksource.arm_arch_timer.fsl-a008585", early_fsl_a008585_cfg); | 131 | #endif |
117 | 132 | ||
118 | u32 __fsl_a008585_read_cntp_tval_el0(void) | 133 | #ifdef CONFIG_HISILICON_ERRATUM_161010101 |
134 | /* | ||
135 | * Verifying that the value of the second read is larger than the first by | ||
136 | * less than 32 is the only way to confirm the value is correct, so clear the | ||
137 | * lower 5 bits to check whether the difference is greater than 32 or not. | ||
138 | * Theoretically the erratum should not occur more than twice in succession | ||
139 | * when reading the system counter, but it is possible that some interrupts | ||
140 | * may lead to more than twice read errors, triggering the warning, so setting | ||
141 | * the number of retries far beyond the number of iterations the loop has been | ||
142 | * observed to take. | ||
143 | */ | ||
144 | #define __hisi_161010101_read_reg(reg) ({ \ | ||
145 | u64 _old, _new; \ | ||
146 | int _retries = 50; \ | ||
147 | \ | ||
148 | do { \ | ||
149 | _old = read_sysreg(reg); \ | ||
150 | _new = read_sysreg(reg); \ | ||
151 | _retries--; \ | ||
152 | } while (unlikely((_new - _old) >> 5) && _retries); \ | ||
153 | \ | ||
154 | WARN_ON_ONCE(!_retries); \ | ||
155 | _new; \ | ||
156 | }) | ||
157 | |||
158 | static u32 notrace hisi_161010101_read_cntp_tval_el0(void) | ||
119 | { | 159 | { |
120 | return __fsl_a008585_read_reg(cntp_tval_el0); | 160 | return __hisi_161010101_read_reg(cntp_tval_el0); |
121 | } | 161 | } |
122 | 162 | ||
123 | u32 __fsl_a008585_read_cntv_tval_el0(void) | 163 | static u32 notrace hisi_161010101_read_cntv_tval_el0(void) |
124 | { | 164 | { |
125 | return __fsl_a008585_read_reg(cntv_tval_el0); | 165 | return __hisi_161010101_read_reg(cntv_tval_el0); |
126 | } | 166 | } |
127 | 167 | ||
128 | u64 __fsl_a008585_read_cntvct_el0(void) | 168 | static u64 notrace hisi_161010101_read_cntvct_el0(void) |
129 | { | 169 | { |
130 | return __fsl_a008585_read_reg(cntvct_el0); | 170 | return __hisi_161010101_read_reg(cntvct_el0); |
131 | } | 171 | } |
132 | EXPORT_SYMBOL(__fsl_a008585_read_cntvct_el0); | 172 | #endif |
133 | #endif /* CONFIG_FSL_ERRATUM_A008585 */ | 173 | |
174 | #ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND | ||
175 | const struct arch_timer_erratum_workaround *timer_unstable_counter_workaround = NULL; | ||
176 | EXPORT_SYMBOL_GPL(timer_unstable_counter_workaround); | ||
177 | |||
178 | DEFINE_STATIC_KEY_FALSE(arch_timer_read_ool_enabled); | ||
179 | EXPORT_SYMBOL_GPL(arch_timer_read_ool_enabled); | ||
180 | |||
181 | static const struct arch_timer_erratum_workaround ool_workarounds[] = { | ||
182 | #ifdef CONFIG_FSL_ERRATUM_A008585 | ||
183 | { | ||
184 | .id = "fsl,erratum-a008585", | ||
185 | .read_cntp_tval_el0 = fsl_a008585_read_cntp_tval_el0, | ||
186 | .read_cntv_tval_el0 = fsl_a008585_read_cntv_tval_el0, | ||
187 | .read_cntvct_el0 = fsl_a008585_read_cntvct_el0, | ||
188 | }, | ||
189 | #endif | ||
190 | #ifdef CONFIG_HISILICON_ERRATUM_161010101 | ||
191 | { | ||
192 | .id = "hisilicon,erratum-161010101", | ||
193 | .read_cntp_tval_el0 = hisi_161010101_read_cntp_tval_el0, | ||
194 | .read_cntv_tval_el0 = hisi_161010101_read_cntv_tval_el0, | ||
195 | .read_cntvct_el0 = hisi_161010101_read_cntvct_el0, | ||
196 | }, | ||
197 | #endif | ||
198 | }; | ||
199 | #endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */ | ||
134 | 200 | ||
135 | static __always_inline | 201 | static __always_inline |
136 | void arch_timer_reg_write(int access, enum arch_timer_reg reg, u32 val, | 202 | void arch_timer_reg_write(int access, enum arch_timer_reg reg, u32 val, |
@@ -281,8 +347,8 @@ static __always_inline void set_next_event(const int access, unsigned long evt, | |||
281 | arch_timer_reg_write(access, ARCH_TIMER_REG_CTRL, ctrl, clk); | 347 | arch_timer_reg_write(access, ARCH_TIMER_REG_CTRL, ctrl, clk); |
282 | } | 348 | } |
283 | 349 | ||
284 | #ifdef CONFIG_FSL_ERRATUM_A008585 | 350 | #ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND |
285 | static __always_inline void fsl_a008585_set_next_event(const int access, | 351 | static __always_inline void erratum_set_next_event_generic(const int access, |
286 | unsigned long evt, struct clock_event_device *clk) | 352 | unsigned long evt, struct clock_event_device *clk) |
287 | { | 353 | { |
288 | unsigned long ctrl; | 354 | unsigned long ctrl; |
@@ -300,20 +366,20 @@ static __always_inline void fsl_a008585_set_next_event(const int access, | |||
300 | arch_timer_reg_write(access, ARCH_TIMER_REG_CTRL, ctrl, clk); | 366 | arch_timer_reg_write(access, ARCH_TIMER_REG_CTRL, ctrl, clk); |
301 | } | 367 | } |
302 | 368 | ||
303 | static int fsl_a008585_set_next_event_virt(unsigned long evt, | 369 | static int erratum_set_next_event_virt(unsigned long evt, |
304 | struct clock_event_device *clk) | 370 | struct clock_event_device *clk) |
305 | { | 371 | { |
306 | fsl_a008585_set_next_event(ARCH_TIMER_VIRT_ACCESS, evt, clk); | 372 | erratum_set_next_event_generic(ARCH_TIMER_VIRT_ACCESS, evt, clk); |
307 | return 0; | 373 | return 0; |
308 | } | 374 | } |
309 | 375 | ||
310 | static int fsl_a008585_set_next_event_phys(unsigned long evt, | 376 | static int erratum_set_next_event_phys(unsigned long evt, |
311 | struct clock_event_device *clk) | 377 | struct clock_event_device *clk) |
312 | { | 378 | { |
313 | fsl_a008585_set_next_event(ARCH_TIMER_PHYS_ACCESS, evt, clk); | 379 | erratum_set_next_event_generic(ARCH_TIMER_PHYS_ACCESS, evt, clk); |
314 | return 0; | 380 | return 0; |
315 | } | 381 | } |
316 | #endif /* CONFIG_FSL_ERRATUM_A008585 */ | 382 | #endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */ |
317 | 383 | ||
318 | static int arch_timer_set_next_event_virt(unsigned long evt, | 384 | static int arch_timer_set_next_event_virt(unsigned long evt, |
319 | struct clock_event_device *clk) | 385 | struct clock_event_device *clk) |
@@ -343,16 +409,16 @@ static int arch_timer_set_next_event_phys_mem(unsigned long evt, | |||
343 | return 0; | 409 | return 0; |
344 | } | 410 | } |
345 | 411 | ||
346 | static void fsl_a008585_set_sne(struct clock_event_device *clk) | 412 | static void erratum_workaround_set_sne(struct clock_event_device *clk) |
347 | { | 413 | { |
348 | #ifdef CONFIG_FSL_ERRATUM_A008585 | 414 | #ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND |
349 | if (!static_branch_unlikely(&arch_timer_read_ool_enabled)) | 415 | if (!static_branch_unlikely(&arch_timer_read_ool_enabled)) |
350 | return; | 416 | return; |
351 | 417 | ||
352 | if (arch_timer_uses_ppi == VIRT_PPI) | 418 | if (arch_timer_uses_ppi == VIRT_PPI) |
353 | clk->set_next_event = fsl_a008585_set_next_event_virt; | 419 | clk->set_next_event = erratum_set_next_event_virt; |
354 | else | 420 | else |
355 | clk->set_next_event = fsl_a008585_set_next_event_phys; | 421 | clk->set_next_event = erratum_set_next_event_phys; |
356 | #endif | 422 | #endif |
357 | } | 423 | } |
358 | 424 | ||
@@ -385,7 +451,7 @@ static void __arch_timer_setup(unsigned type, | |||
385 | BUG(); | 451 | BUG(); |
386 | } | 452 | } |
387 | 453 | ||
388 | fsl_a008585_set_sne(clk); | 454 | erratum_workaround_set_sne(clk); |
389 | } else { | 455 | } else { |
390 | clk->features |= CLOCK_EVT_FEAT_DYNIRQ; | 456 | clk->features |= CLOCK_EVT_FEAT_DYNIRQ; |
391 | clk->name = "arch_mem_timer"; | 457 | clk->name = "arch_mem_timer"; |
@@ -580,7 +646,7 @@ static struct clocksource clocksource_counter = { | |||
580 | .flags = CLOCK_SOURCE_IS_CONTINUOUS, | 646 | .flags = CLOCK_SOURCE_IS_CONTINUOUS, |
581 | }; | 647 | }; |
582 | 648 | ||
583 | static struct cyclecounter cyclecounter = { | 649 | static struct cyclecounter cyclecounter __ro_after_init = { |
584 | .read = arch_counter_read_cc, | 650 | .read = arch_counter_read_cc, |
585 | .mask = CLOCKSOURCE_MASK(56), | 651 | .mask = CLOCKSOURCE_MASK(56), |
586 | }; | 652 | }; |
@@ -605,7 +671,7 @@ static void __init arch_counter_register(unsigned type) | |||
605 | 671 | ||
606 | clocksource_counter.archdata.vdso_direct = true; | 672 | clocksource_counter.archdata.vdso_direct = true; |
607 | 673 | ||
608 | #ifdef CONFIG_FSL_ERRATUM_A008585 | 674 | #ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND |
609 | /* | 675 | /* |
610 | * Don't use the vdso fastpath if errata require using | 676 | * Don't use the vdso fastpath if errata require using |
611 | * the out-of-line counter accessor. | 677 | * the out-of-line counter accessor. |
@@ -893,12 +959,15 @@ static int __init arch_timer_of_init(struct device_node *np) | |||
893 | 959 | ||
894 | arch_timer_c3stop = !of_property_read_bool(np, "always-on"); | 960 | arch_timer_c3stop = !of_property_read_bool(np, "always-on"); |
895 | 961 | ||
896 | #ifdef CONFIG_FSL_ERRATUM_A008585 | 962 | #ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND |
897 | if (fsl_a008585_enable < 0) | 963 | for (i = 0; i < ARRAY_SIZE(ool_workarounds); i++) { |
898 | fsl_a008585_enable = of_property_read_bool(np, "fsl,erratum-a008585"); | 964 | if (of_property_read_bool(np, ool_workarounds[i].id)) { |
899 | if (fsl_a008585_enable) { | 965 | timer_unstable_counter_workaround = &ool_workarounds[i]; |
900 | static_branch_enable(&arch_timer_read_ool_enabled); | 966 | static_branch_enable(&arch_timer_read_ool_enabled); |
901 | pr_info("Enabling workaround for FSL erratum A-008585\n"); | 967 | pr_info("arch_timer: Enabling workaround for %s\n", |
968 | timer_unstable_counter_workaround->id); | ||
969 | break; | ||
970 | } | ||
902 | } | 971 | } |
903 | #endif | 972 | #endif |
904 | 973 | ||
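With the table-driven matching above, hooking up a further counter erratum becomes mostly declarative: add a set of retrying read accessors, then one more ool_workarounds[] entry keyed by its device-tree property. A hypothetical entry (Kconfig symbol, property name and function names invented for illustration):

	#ifdef CONFIG_VENDOR_ERRATUM_12345		/* hypothetical option */
		{
			.id = "vendor,erratum-12345",	/* DT property to match */
			.read_cntp_tval_el0 = vendor_12345_read_cntp_tval_el0,
			.read_cntv_tval_el0 = vendor_12345_read_cntv_tval_el0,
			.read_cntvct_el0 = vendor_12345_read_cntvct_el0,
		},
	#endif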
diff --git a/drivers/clocksource/clkevt-probe.c b/drivers/clocksource/clkevt-probe.c new file mode 100644 index 000000000000..8c30fec86094 --- /dev/null +++ b/drivers/clocksource/clkevt-probe.c | |||
@@ -0,0 +1,56 @@ | |||
1 | /* | ||
2 | * Copyright (c) 2016, Linaro Ltd. All rights reserved. | ||
3 | * Daniel Lezcano <daniel.lezcano@linaro.org> | ||
4 | * | ||
5 | * This program is free software; you can redistribute it and/or modify it | ||
6 | * under the terms and conditions of the GNU General Public License, | ||
7 | * version 2, as published by the Free Software Foundation. | ||
8 | * | ||
9 | * This program is distributed in the hope it will be useful, but WITHOUT | ||
10 | * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or | ||
11 | * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for | ||
12 | * more details. | ||
13 | * | ||
14 | * You should have received a copy of the GNU General Public License | ||
15 | * along with this program. If not, see <http://www.gnu.org/licenses/>. | ||
16 | */ | ||
17 | |||
18 | #include <linux/init.h> | ||
19 | #include <linux/of.h> | ||
20 | #include <linux/clockchips.h> | ||
21 | |||
22 | extern struct of_device_id __clkevt_of_table[]; | ||
23 | |||
24 | static const struct of_device_id __clkevt_of_table_sentinel | ||
25 | __used __section(__clkevt_of_table_end); | ||
26 | |||
27 | int __init clockevent_probe(void) | ||
28 | { | ||
29 | struct device_node *np; | ||
30 | const struct of_device_id *match; | ||
31 | of_init_fn_1_ret init_func; | ||
32 | int ret, clockevents = 0; | ||
33 | |||
34 | for_each_matching_node_and_match(np, __clkevt_of_table, &match) { | ||
35 | if (!of_device_is_available(np)) | ||
36 | continue; | ||
37 | |||
38 | init_func = match->data; | ||
39 | |||
40 | ret = init_func(np); | ||
41 | if (ret) { | ||
42 | pr_warn("Failed to initialize '%s' (%d)\n", | ||
43 | np->name, ret); | ||
44 | continue; | ||
45 | } | ||
46 | |||
47 | clockevents++; | ||
48 | } | ||
49 | |||
50 | if (!clockevents) { | ||
51 | pr_crit("%s: no matching clockevent found\n", __func__); | ||
52 | return -ENODEV; | ||
53 | } | ||
54 | |||
55 | return 0; | ||
56 | } | ||
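clockevent_probe() walks a __clkevt_of_table that drivers are expected to populate through a declaration macro mirroring CLOCKSOURCE_OF_DECLARE(), provided elsewhere in this series and not visible in this hunk. A driver would then register roughly like this (driver and compatible names invented for the sketch):

	static int __init acme_timer_init(struct device_node *np)
	{
		/* map registers, grab the clock, request the IRQ,
		 * then register a struct clock_event_device */
		return 0;
	}
	CLOCKEVENT_OF_DECLARE(acme_timer, "acme,timer", acme_timer_init);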
diff --git a/drivers/clocksource/renesas-ostm.c b/drivers/clocksource/renesas-ostm.c new file mode 100644 index 000000000000..c76f57668fb2 --- /dev/null +++ b/drivers/clocksource/renesas-ostm.c | |||
@@ -0,0 +1,265 @@ | |||
1 | /* | ||
2 | * Renesas Timer Support - OSTM | ||
3 | * | ||
4 | * Copyright (C) 2017 Renesas Electronics America, Inc. | ||
5 | * Copyright (C) 2017 Chris Brandt | ||
6 | * | ||
7 | * This program is free software; you can redistribute it and/or modify | ||
8 | * it under the terms of the GNU General Public License as published by | ||
9 | * the Free Software Foundation; either version 2 of the License | ||
10 | * | ||
11 | * This program is distributed in the hope that it will be useful, | ||
12 | * but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
13 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
14 | * GNU General Public License for more details. | ||
15 | * | ||
16 | */ | ||
17 | |||
18 | #include <linux/of_address.h> | ||
19 | #include <linux/of_irq.h> | ||
20 | #include <linux/clk.h> | ||
21 | #include <linux/clockchips.h> | ||
22 | #include <linux/interrupt.h> | ||
23 | #include <linux/sched_clock.h> | ||
24 | #include <linux/slab.h> | ||
25 | |||
26 | /* | ||
27 | * The OSTM contains independent channels. | ||
28 | * The first OSTM channel probed will be set up as a free running | ||
29 | * clocksource. Additionally we will use this clocksource for the system | ||
30 | * schedule timer sched_clock(). | ||
31 | * | ||
32 | * The second (or more) channel probed will be set up as an interrupt | ||
33 | * driven clock event. | ||
34 | */ | ||
35 | |||
36 | struct ostm_device { | ||
37 | void __iomem *base; | ||
38 | unsigned long ticks_per_jiffy; | ||
39 | struct clock_event_device ced; | ||
40 | }; | ||
41 | |||
42 | static void __iomem *system_clock; /* For sched_clock() */ | ||
43 | |||
44 | /* OSTM REGISTERS */ | ||
45 | #define OSTM_CMP 0x000 /* RW,32 */ | ||
46 | #define OSTM_CNT 0x004 /* R,32 */ | ||
47 | #define OSTM_TE 0x010 /* R,8 */ | ||
48 | #define OSTM_TS 0x014 /* W,8 */ | ||
49 | #define OSTM_TT 0x018 /* W,8 */ | ||
50 | #define OSTM_CTL 0x020 /* RW,8 */ | ||
51 | |||
52 | #define TE 0x01 | ||
53 | #define TS 0x01 | ||
54 | #define TT 0x01 | ||
55 | #define CTL_PERIODIC 0x00 | ||
56 | #define CTL_ONESHOT 0x02 | ||
57 | #define CTL_FREERUN 0x02 | ||
58 | |||
59 | static struct ostm_device *ced_to_ostm(struct clock_event_device *ced) | ||
60 | { | ||
61 | return container_of(ced, struct ostm_device, ced); | ||
62 | } | ||
63 | |||
64 | static void ostm_timer_stop(struct ostm_device *ostm) | ||
65 | { | ||
66 | if (readb(ostm->base + OSTM_TE) & TE) { | ||
67 | writeb(TT, ostm->base + OSTM_TT); | ||
68 | |||
69 | /* | ||
70 | * Read back the register simply to confirm the write operation | ||
71 | * has completed since I/O writes can sometimes get queued by | ||
72 | * the bus architecture. | ||
73 | */ | ||
74 | while (readb(ostm->base + OSTM_TE) & TE) | ||
75 | ; | ||
76 | } | ||
77 | } | ||
78 | |||
79 | static int __init ostm_init_clksrc(struct ostm_device *ostm, unsigned long rate) | ||
80 | { | ||
81 | /* | ||
82 | * irq not used (clock sources don't use interrupts) | ||
83 | */ | ||
84 | |||
85 | ostm_timer_stop(ostm); | ||
86 | |||
87 | writel(0, ostm->base + OSTM_CMP); | ||
88 | writeb(CTL_FREERUN, ostm->base + OSTM_CTL); | ||
89 | writeb(TS, ostm->base + OSTM_TS); | ||
90 | |||
91 | return clocksource_mmio_init(ostm->base + OSTM_CNT, | ||
92 | "ostm", rate, | ||
93 | 300, 32, clocksource_mmio_readl_up); | ||
94 | } | ||
95 | |||
96 | static u64 notrace ostm_read_sched_clock(void) | ||
97 | { | ||
98 | return readl(system_clock); | ||
99 | } | ||
100 | |||
101 | static void __init ostm_init_sched_clock(struct ostm_device *ostm, | ||
102 | unsigned long rate) | ||
103 | { | ||
104 | system_clock = ostm->base + OSTM_CNT; | ||
105 | sched_clock_register(ostm_read_sched_clock, 32, rate); | ||
106 | } | ||
107 | |||
108 | static int ostm_clock_event_next(unsigned long delta, | ||
109 | struct clock_event_device *ced) | ||
110 | { | ||
111 | struct ostm_device *ostm = ced_to_ostm(ced); | ||
112 | |||
113 | ostm_timer_stop(ostm); | ||
114 | |||
115 | writel(delta, ostm->base + OSTM_CMP); | ||
116 | writeb(CTL_ONESHOT, ostm->base + OSTM_CTL); | ||
117 | writeb(TS, ostm->base + OSTM_TS); | ||
118 | |||
119 | return 0; | ||
120 | } | ||
121 | |||
122 | static int ostm_shutdown(struct clock_event_device *ced) | ||
123 | { | ||
124 | struct ostm_device *ostm = ced_to_ostm(ced); | ||
125 | |||
126 | ostm_timer_stop(ostm); | ||
127 | |||
128 | return 0; | ||
129 | } | ||
130 | static int ostm_set_periodic(struct clock_event_device *ced) | ||
131 | { | ||
132 | struct ostm_device *ostm = ced_to_ostm(ced); | ||
133 | |||
134 | if (clockevent_state_oneshot(ced) || clockevent_state_periodic(ced)) | ||
135 | ostm_timer_stop(ostm); | ||
136 | |||
137 | writel(ostm->ticks_per_jiffy - 1, ostm->base + OSTM_CMP); | ||
138 | writeb(CTL_PERIODIC, ostm->base + OSTM_CTL); | ||
139 | writeb(TS, ostm->base + OSTM_TS); | ||
140 | |||
141 | return 0; | ||
142 | } | ||
143 | |||
144 | static int ostm_set_oneshot(struct clock_event_device *ced) | ||
145 | { | ||
146 | struct ostm_device *ostm = ced_to_ostm(ced); | ||
147 | |||
148 | ostm_timer_stop(ostm); | ||
149 | |||
150 | return 0; | ||
151 | } | ||
152 | |||
153 | static irqreturn_t ostm_timer_interrupt(int irq, void *dev_id) | ||
154 | { | ||
155 | struct ostm_device *ostm = dev_id; | ||
156 | |||
157 | if (clockevent_state_oneshot(&ostm->ced)) | ||
158 | ostm_timer_stop(ostm); | ||
159 | |||
160 | /* notify clockevent layer */ | ||
161 | if (ostm->ced.event_handler) | ||
162 | ostm->ced.event_handler(&ostm->ced); | ||
163 | |||
164 | return IRQ_HANDLED; | ||
165 | } | ||
166 | |||
167 | static int __init ostm_init_clkevt(struct ostm_device *ostm, int irq, | ||
168 | unsigned long rate) | ||
169 | { | ||
170 | struct clock_event_device *ced = &ostm->ced; | ||
171 | int ret = -ENXIO; | ||
172 | |||
173 | ret = request_irq(irq, ostm_timer_interrupt, | ||
174 | IRQF_TIMER | IRQF_IRQPOLL, | ||
175 | "ostm", ostm); | ||
176 | if (ret) { | ||
177 | pr_err("ostm: failed to request irq\n"); | ||
178 | return ret; | ||
179 | } | ||
180 | |||
181 | ced->name = "ostm"; | ||
182 | ced->features = CLOCK_EVT_FEAT_ONESHOT | CLOCK_EVT_FEAT_PERIODIC; | ||
183 | ced->set_state_shutdown = ostm_shutdown; | ||
184 | ced->set_state_periodic = ostm_set_periodic; | ||
185 | ced->set_state_oneshot = ostm_set_oneshot; | ||
186 | ced->set_next_event = ostm_clock_event_next; | ||
187 | ced->shift = 32; | ||
188 | ced->rating = 300; | ||
189 | ced->cpumask = cpumask_of(0); | ||
190 | clockevents_config_and_register(ced, rate, 0xf, 0xffffffff); | ||
191 | |||
192 | return 0; | ||
193 | } | ||
194 | |||
195 | static int __init ostm_init(struct device_node *np) | ||
196 | { | ||
197 | struct ostm_device *ostm; | ||
198 | int ret = -EFAULT; | ||
199 | struct clk *ostm_clk = NULL; | ||
200 | int irq; | ||
201 | unsigned long rate; | ||
202 | |||
203 | ostm = kzalloc(sizeof(*ostm), GFP_KERNEL); | ||
204 | if (!ostm) | ||
205 | return -ENOMEM; | ||
206 | |||
207 | ostm->base = of_iomap(np, 0); | ||
208 | if (!ostm->base) { | ||
209 | pr_err("ostm: failed to remap I/O memory\n"); | ||
210 | goto err; | ||
211 | } | ||
212 | |||
213 | irq = irq_of_parse_and_map(np, 0); | ||
214 | if (irq < 0) { | ||
215 | pr_err("ostm: Failed to get irq\n"); | ||
216 | goto err; | ||
217 | } | ||
218 | |||
219 | ostm_clk = of_clk_get(np, 0); | ||
220 | if (IS_ERR(ostm_clk)) { | ||
221 | pr_err("ostm: Failed to get clock\n"); | ||
222 | ostm_clk = NULL; | ||
223 | goto err; | ||
224 | } | ||
225 | |||
226 | ret = clk_prepare_enable(ostm_clk); | ||
227 | if (ret) { | ||
228 | pr_err("ostm: Failed to enable clock\n"); | ||
229 | goto err; | ||
230 | } | ||
231 | |||
232 | rate = clk_get_rate(ostm_clk); | ||
233 | ostm->ticks_per_jiffy = (rate + HZ / 2) / HZ; | ||
234 | |||
235 | /* | ||
236 | * First probed device will be used as system clocksource. Any | ||
237 | * additional devices will be used as clock events. | ||
238 | */ | ||
239 | if (!system_clock) { | ||
240 | ret = ostm_init_clksrc(ostm, rate); | ||
241 | |||
242 | if (!ret) { | ||
243 | ostm_init_sched_clock(ostm, rate); | ||
244 | pr_info("ostm: used for clocksource\n"); | ||
245 | } | ||
246 | |||
247 | } else { | ||
248 | ret = ostm_init_clkevt(ostm, irq, rate); | ||
249 | |||
250 | if (!ret) | ||
251 | pr_info("ostm: used for clock events\n"); | ||
252 | } | ||
253 | |||
254 | err: | ||
255 | if (ret) { | ||
256 | clk_disable_unprepare(ostm_clk); | ||
257 | iounmap(ostm->base); | ||
258 | kfree(ostm); | ||
259 | return ret; | ||
260 | } | ||
261 | |||
262 | return 0; | ||
263 | } | ||
264 | |||
265 | CLOCKSOURCE_OF_DECLARE(ostm, "renesas,ostm", ostm_init); | ||
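Since the first channel probed becomes the clocksource and sched_clock, and any further channel becomes a clock event device, a board wanting both simply describes two OSTM channels. A hypothetical second-channel node alongside the ostm0 example from the binding document (register address and interrupt number invented for the sketch):

	ostm1: timer@fcfec400 {
		compatible = "renesas,r7s72100-ostm", "renesas,ostm";
		reg = <0xfcfec400 0x30>;
		interrupts = <GIC_SPI 103 IRQ_TYPE_EDGE_RISING>;
		clocks = <&mstp5_clks R7S72100_CLK_OSTM1>;
		power-domains = <&cpg_clocks>;
	};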
diff --git a/drivers/clocksource/tcb_clksrc.c b/drivers/clocksource/tcb_clksrc.c index d4ca9962a759..745844ee973e 100644 --- a/drivers/clocksource/tcb_clksrc.c +++ b/drivers/clocksource/tcb_clksrc.c | |||
@@ -10,6 +10,7 @@ | |||
10 | #include <linux/io.h> | 10 | #include <linux/io.h> |
11 | #include <linux/platform_device.h> | 11 | #include <linux/platform_device.h> |
12 | #include <linux/atmel_tc.h> | 12 | #include <linux/atmel_tc.h> |
13 | #include <linux/sched_clock.h> | ||
13 | 14 | ||
14 | 15 | ||
15 | /* | 16 | /* |
@@ -56,11 +57,16 @@ static u64 tc_get_cycles(struct clocksource *cs) | |||
56 | return (upper << 16) | lower; | 57 | return (upper << 16) | lower; |
57 | } | 58 | } |
58 | 59 | ||
59 | static u64 tc_get_cycles32(struct clocksource *cs) | 60 | static u32 tc_get_cv32(void) |
60 | { | 61 | { |
61 | return __raw_readl(tcaddr + ATMEL_TC_REG(0, CV)); | 62 | return __raw_readl(tcaddr + ATMEL_TC_REG(0, CV)); |
62 | } | 63 | } |
63 | 64 | ||
65 | static u64 tc_get_cycles32(struct clocksource *cs) | ||
66 | { | ||
67 | return tc_get_cv32(); | ||
68 | } | ||
69 | |||
64 | static struct clocksource clksrc = { | 70 | static struct clocksource clksrc = { |
65 | .name = "tcb_clksrc", | 71 | .name = "tcb_clksrc", |
66 | .rating = 200, | 72 | .rating = 200, |
@@ -69,6 +75,11 @@ static struct clocksource clksrc = { | |||
69 | .flags = CLOCK_SOURCE_IS_CONTINUOUS, | 75 | .flags = CLOCK_SOURCE_IS_CONTINUOUS, |
70 | }; | 76 | }; |
71 | 77 | ||
78 | static u64 notrace tc_read_sched_clock(void) | ||
79 | { | ||
80 | return tc_get_cv32(); | ||
81 | } | ||
82 | |||
72 | #ifdef CONFIG_GENERIC_CLOCKEVENTS | 83 | #ifdef CONFIG_GENERIC_CLOCKEVENTS |
73 | 84 | ||
74 | struct tc_clkevt_device { | 85 | struct tc_clkevt_device { |
@@ -339,6 +350,9 @@ static int __init tcb_clksrc_init(void) | |||
339 | clksrc.read = tc_get_cycles32; | 350 | clksrc.read = tc_get_cycles32; |
340 | /* setup ony channel 0 */ | 351 | /* setup ony channel 0 */ |
341 | tcb_setup_single_chan(tc, best_divisor_idx); | 352 | tcb_setup_single_chan(tc, best_divisor_idx); |
353 | |||
354 | /* register sched_clock on chips with single 32 bit counter */ | ||
355 | sched_clock_register(tc_read_sched_clock, 32, divided_rate); | ||
342 | } else { | 356 | } else { |
343 | /* tclib will give us three clocks no matter what the | 357 | /* tclib will give us three clocks no matter what the |
344 | * underlying platform supports. | 358 | * underlying platform supports. |
diff --git a/drivers/clocksource/timer-gemini.c b/drivers/clocksource/timer-gemini.c new file mode 100644 index 000000000000..dda27b7bf1a1 --- /dev/null +++ b/drivers/clocksource/timer-gemini.c | |||
@@ -0,0 +1,277 @@ | |||
1 | /* | ||
2 | * Gemini timer driver | ||
3 | * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org> | ||
4 | * | ||
5 | * Based on a rewrite of arch/arm/mach-gemini/timer.c: | ||
6 | * Copyright (C) 2001-2006 Storlink, Corp. | ||
7 | * Copyright (C) 2008-2009 Paulius Zaleckas <paulius.zaleckas@teltonika.lt> | ||
8 | */ | ||
9 | #include <linux/interrupt.h> | ||
10 | #include <linux/io.h> | ||
11 | #include <linux/of.h> | ||
12 | #include <linux/of_address.h> | ||
13 | #include <linux/of_irq.h> | ||
14 | #include <linux/mfd/syscon.h> | ||
15 | #include <linux/regmap.h> | ||
16 | #include <linux/clockchips.h> | ||
17 | #include <linux/clocksource.h> | ||
18 | #include <linux/sched_clock.h> | ||
19 | |||
20 | /* | ||
21 | * Relevant registers in the global syscon | ||
22 | */ | ||
23 | #define GLOBAL_STATUS 0x04 | ||
24 | #define CPU_AHB_RATIO_MASK (0x3 << 18) | ||
25 | #define CPU_AHB_1_1 (0x0 << 18) | ||
26 | #define CPU_AHB_3_2 (0x1 << 18) | ||
27 | #define CPU_AHB_24_13 (0x2 << 18) | ||
28 | #define CPU_AHB_2_1 (0x3 << 18) | ||
29 | #define REG_TO_AHB_SPEED(reg) ((((reg) >> 15) & 0x7) * 10 + 130) | ||
30 | |||
31 | /* | ||
32 | * Register definitions for the timers | ||
33 | */ | ||
34 | #define TIMER1_COUNT (0x00) | ||
35 | #define TIMER1_LOAD (0x04) | ||
36 | #define TIMER1_MATCH1 (0x08) | ||
37 | #define TIMER1_MATCH2 (0x0c) | ||
38 | #define TIMER2_COUNT (0x10) | ||
39 | #define TIMER2_LOAD (0x14) | ||
40 | #define TIMER2_MATCH1 (0x18) | ||
41 | #define TIMER2_MATCH2 (0x1c) | ||
42 | #define TIMER3_COUNT (0x20) | ||
43 | #define TIMER3_LOAD (0x24) | ||
44 | #define TIMER3_MATCH1 (0x28) | ||
45 | #define TIMER3_MATCH2 (0x2c) | ||
46 | #define TIMER_CR (0x30) | ||
47 | #define TIMER_INTR_STATE (0x34) | ||
48 | #define TIMER_INTR_MASK (0x38) | ||
49 | |||
50 | #define TIMER_1_CR_ENABLE (1 << 0) | ||
51 | #define TIMER_1_CR_CLOCK (1 << 1) | ||
52 | #define TIMER_1_CR_INT (1 << 2) | ||
53 | #define TIMER_2_CR_ENABLE (1 << 3) | ||
54 | #define TIMER_2_CR_CLOCK (1 << 4) | ||
55 | #define TIMER_2_CR_INT (1 << 5) | ||
56 | #define TIMER_3_CR_ENABLE (1 << 6) | ||
57 | #define TIMER_3_CR_CLOCK (1 << 7) | ||
58 | #define TIMER_3_CR_INT (1 << 8) | ||
59 | #define TIMER_1_CR_UPDOWN (1 << 9) | ||
60 | #define TIMER_2_CR_UPDOWN (1 << 10) | ||
61 | #define TIMER_3_CR_UPDOWN (1 << 11) | ||
62 | #define TIMER_DEFAULT_FLAGS (TIMER_1_CR_UPDOWN | \ | ||
63 | TIMER_3_CR_ENABLE | \ | ||
64 | TIMER_3_CR_UPDOWN) | ||
65 | |||
66 | #define TIMER_1_INT_MATCH1 (1 << 0) | ||
67 | #define TIMER_1_INT_MATCH2 (1 << 1) | ||
68 | #define TIMER_1_INT_OVERFLOW (1 << 2) | ||
69 | #define TIMER_2_INT_MATCH1 (1 << 3) | ||
70 | #define TIMER_2_INT_MATCH2 (1 << 4) | ||
71 | #define TIMER_2_INT_OVERFLOW (1 << 5) | ||
72 | #define TIMER_3_INT_MATCH1 (1 << 6) | ||
73 | #define TIMER_3_INT_MATCH2 (1 << 7) | ||
74 | #define TIMER_3_INT_OVERFLOW (1 << 8) | ||
75 | #define TIMER_INT_ALL_MASK 0x1ff | ||
76 | |||
77 | static unsigned int tick_rate; | ||
78 | static void __iomem *base; | ||
79 | |||
80 | static u64 notrace gemini_read_sched_clock(void) | ||
81 | { | ||
82 | return readl(base + TIMER3_COUNT); | ||
83 | } | ||
84 | |||
85 | static int gemini_timer_set_next_event(unsigned long cycles, | ||
86 | struct clock_event_device *evt) | ||
87 | { | ||
88 | u32 cr; | ||
89 | |||
90 | /* Setup the match register */ | ||
91 | cr = readl(base + TIMER1_COUNT); | ||
92 | writel(cr + cycles, base + TIMER1_MATCH1); | ||
93 | if (readl(base + TIMER1_COUNT) - cr > cycles) | ||
94 | return -ETIME; | ||
95 | |||
96 | return 0; | ||
97 | } | ||
98 | |||
99 | static int gemini_timer_shutdown(struct clock_event_device *evt) | ||
100 | { | ||
101 | u32 cr; | ||
102 | |||
103 | /* | ||
104 | * Disable also for oneshot: the set_next() call will arm the timer | ||
105 | * instead. | ||
106 | */ | ||
107 | /* Stop timer and interrupt. */ | ||
108 | cr = readl(base + TIMER_CR); | ||
109 | cr &= ~(TIMER_1_CR_ENABLE | TIMER_1_CR_INT); | ||
110 | writel(cr, base + TIMER_CR); | ||
111 | |||
112 | /* Setup counter start from 0 */ | ||
113 | writel(0, base + TIMER1_COUNT); | ||
114 | writel(0, base + TIMER1_LOAD); | ||
115 | |||
116 | /* enable interrupt */ | ||
117 | cr = readl(base + TIMER_INTR_MASK); | ||
118 | cr &= ~(TIMER_1_INT_OVERFLOW | TIMER_1_INT_MATCH2); | ||
119 | cr |= TIMER_1_INT_MATCH1; | ||
120 | writel(cr, base + TIMER_INTR_MASK); | ||
121 | |||
122 | /* start the timer */ | ||
123 | cr = readl(base + TIMER_CR); | ||
124 | cr |= TIMER_1_CR_ENABLE; | ||
125 | writel(cr, base + TIMER_CR); | ||
126 | |||
127 | return 0; | ||
128 | } | ||
129 | |||
130 | static int gemini_timer_set_periodic(struct clock_event_device *evt) | ||
131 | { | ||
132 | u32 period = DIV_ROUND_CLOSEST(tick_rate, HZ); | ||
133 | u32 cr; | ||
134 | |||
135 | /* Stop timer and interrupt */ | ||
136 | cr = readl(base + TIMER_CR); | ||
137 | cr &= ~(TIMER_1_CR_ENABLE | TIMER_1_CR_INT); | ||
138 | writel(cr, base + TIMER_CR); | ||
139 | |||
140 | /* Setup timer to fire at 1/HZ intervals. */ | ||
141 | cr = 0xffffffff - (period - 1); | ||
142 | writel(cr, base + TIMER1_COUNT); | ||
143 | writel(cr, base + TIMER1_LOAD); | ||
144 | |||
145 | /* enable interrupt on overflow */ | ||
146 | cr = readl(base + TIMER_INTR_MASK); | ||
147 | cr &= ~(TIMER_1_INT_MATCH1 | TIMER_1_INT_MATCH2); | ||
148 | cr |= TIMER_1_INT_OVERFLOW; | ||
149 | writel(cr, base + TIMER_INTR_MASK); | ||
150 | |||
151 | /* Start the timer */ | ||
152 | cr = readl(base + TIMER_CR); | ||
153 | cr |= TIMER_1_CR_ENABLE; | ||
154 | cr |= TIMER_1_CR_INT; | ||
155 | writel(cr, base + TIMER_CR); | ||
156 | |||
157 | return 0; | ||
158 | } | ||
159 | |||
160 | /* Use TIMER1 as clock event */ | ||
161 | static struct clock_event_device gemini_clockevent = { | ||
162 | .name = "TIMER1", | ||
163 | /* Reasonably fast and accurate clock event */ | ||
164 | .rating = 300, | ||
165 | .shift = 32, | ||
166 | .features = CLOCK_EVT_FEAT_PERIODIC | | ||
167 | CLOCK_EVT_FEAT_ONESHOT, | ||
168 | .set_next_event = gemini_timer_set_next_event, | ||
169 | .set_state_shutdown = gemini_timer_shutdown, | ||
170 | .set_state_periodic = gemini_timer_set_periodic, | ||
171 | .set_state_oneshot = gemini_timer_shutdown, | ||
172 | .tick_resume = gemini_timer_shutdown, | ||
173 | }; | ||
174 | |||
175 | /* | ||
176 | * IRQ handler for the timer | ||
177 | */ | ||
178 | static irqreturn_t gemini_timer_interrupt(int irq, void *dev_id) | ||
179 | { | ||
180 | struct clock_event_device *evt = &gemini_clockevent; | ||
181 | |||
182 | evt->event_handler(evt); | ||
183 | return IRQ_HANDLED; | ||
184 | } | ||
185 | |||
186 | static struct irqaction gemini_timer_irq = { | ||
187 | .name = "Gemini Timer Tick", | ||
188 | .flags = IRQF_TIMER, | ||
189 | .handler = gemini_timer_interrupt, | ||
190 | }; | ||
191 | |||
192 | static int __init gemini_timer_of_init(struct device_node *np) | ||
193 | { | ||
194 | static struct regmap *map; | ||
195 | int irq; | ||
196 | int ret; | ||
197 | u32 val; | ||
198 | |||
199 | map = syscon_regmap_lookup_by_phandle(np, "syscon"); | ||
200 | if (IS_ERR(map)) { | ||
201 | pr_err("Can't get regmap for syscon handle"); | ||
202 | return -ENODEV; | ||
203 | } | ||
204 | ret = regmap_read(map, GLOBAL_STATUS, &val); | ||
205 | if (ret) { | ||
206 | pr_err("Can't read syscon status register"); | ||
207 | return -ENXIO; | ||
208 | } | ||
209 | |||
210 | base = of_iomap(np, 0); | ||
211 | if (!base) { | ||
212 | pr_err("Can't remap registers"); | ||
213 | return -ENXIO; | ||
214 | } | ||
215 | /* IRQ for timer 1 */ | ||
216 | irq = irq_of_parse_and_map(np, 0); | ||
217 | if (irq <= 0) { | ||
218 | pr_err("Can't parse IRQ"); | ||
219 | return -EINVAL; | ||
220 | } | ||
221 | |||
222 | tick_rate = REG_TO_AHB_SPEED(val) * 1000000; | ||
223 | printk(KERN_INFO "Bus: %dMHz", tick_rate / 1000000); | ||
224 | |||
225 | tick_rate /= 6; /* APB bus run AHB*(1/6) */ | ||
226 | |||
227 | switch (val & CPU_AHB_RATIO_MASK) { | ||
228 | case CPU_AHB_1_1: | ||
229 | printk(KERN_CONT "(1/1)\n"); | ||
230 | break; | ||
231 | case CPU_AHB_3_2: | ||
232 | printk(KERN_CONT "(3/2)\n"); | ||
233 | break; | ||
234 | case CPU_AHB_24_13: | ||
235 | printk(KERN_CONT "(24/13)\n"); | ||
236 | break; | ||
237 | case CPU_AHB_2_1: | ||
238 | printk(KERN_CONT "(2/1)\n"); | ||
239 | break; | ||
240 | } | ||
241 | |||
242 | /* | ||
243 | * Reset the interrupt mask and status | ||
244 | */ | ||
245 | writel(TIMER_INT_ALL_MASK, base + TIMER_INTR_MASK); | ||
246 | writel(0, base + TIMER_INTR_STATE); | ||
247 | writel(TIMER_DEFAULT_FLAGS, base + TIMER_CR); | ||
248 | |||
249 | /* | ||
250 | * Setup free-running clocksource timer (interrupts | ||
251 | * disabled.) | ||
252 | */ | ||
253 | writel(0, base + TIMER3_COUNT); | ||
254 | writel(0, base + TIMER3_LOAD); | ||
255 | writel(0, base + TIMER3_MATCH1); | ||
256 | writel(0, base + TIMER3_MATCH2); | ||
257 | clocksource_mmio_init(base + TIMER3_COUNT, | ||
258 | "gemini_clocksource", tick_rate, | ||
259 | 300, 32, clocksource_mmio_readl_up); | ||
260 | sched_clock_register(gemini_read_sched_clock, 32, tick_rate); | ||
261 | |||
262 | /* | ||
263 | * Setup clockevent timer (interrupt-driven.) | ||
264 | */ | ||
265 | writel(0, base + TIMER1_COUNT); | ||
266 | writel(0, base + TIMER1_LOAD); | ||
267 | writel(0, base + TIMER1_MATCH1); | ||
268 | writel(0, base + TIMER1_MATCH2); | ||
269 | setup_irq(irq, &gemini_timer_irq); | ||
270 | gemini_clockevent.cpumask = cpumask_of(0); | ||
271 | clockevents_config_and_register(&gemini_clockevent, tick_rate, | ||
272 | 1, 0xffffffff); | ||
273 | |||
274 | return 0; | ||
275 | } | ||
276 | CLOCKSOURCE_OF_DECLARE(nomadik_mtu, "cortina,gemini-timer", | ||
277 | gemini_timer_of_init); | ||
diff --git a/fs/proc/base.c b/fs/proc/base.c index 87c9a9aacda3..b1f7d30e96c2 100644 --- a/fs/proc/base.c +++ b/fs/proc/base.c | |||
@@ -2179,7 +2179,7 @@ static const struct file_operations proc_map_files_operations = { | |||
2179 | .llseek = generic_file_llseek, | 2179 | .llseek = generic_file_llseek, |
2180 | }; | 2180 | }; |
2181 | 2181 | ||
2182 | #ifdef CONFIG_CHECKPOINT_RESTORE | 2182 | #if defined(CONFIG_CHECKPOINT_RESTORE) && defined(CONFIG_POSIX_TIMERS) |
2183 | struct timers_private { | 2183 | struct timers_private { |
2184 | struct pid *pid; | 2184 | struct pid *pid; |
2185 | struct task_struct *task; | 2185 | struct task_struct *task; |
@@ -2936,7 +2936,7 @@ static const struct pid_entry tgid_base_stuff[] = { | |||
2936 | REG("projid_map", S_IRUGO|S_IWUSR, proc_projid_map_operations), | 2936 | REG("projid_map", S_IRUGO|S_IWUSR, proc_projid_map_operations), |
2937 | REG("setgroups", S_IRUGO|S_IWUSR, proc_setgroups_operations), | 2937 | REG("setgroups", S_IRUGO|S_IWUSR, proc_setgroups_operations), |
2938 | #endif | 2938 | #endif |
2939 | #ifdef CONFIG_CHECKPOINT_RESTORE | 2939 | #if defined(CONFIG_CHECKPOINT_RESTORE) && defined(CONFIG_POSIX_TIMERS) |
2940 | REG("timers", S_IRUGO, proc_timers_operations), | 2940 | REG("timers", S_IRUGO, proc_timers_operations), |
2941 | #endif | 2941 | #endif |
2942 | REG("timerslack_ns", S_IRUGO|S_IWUGO, proc_pid_set_timerslack_ns_operations), | 2942 | REG("timerslack_ns", S_IRUGO|S_IWUGO, proc_pid_set_timerslack_ns_operations), |
diff --git a/fs/timerfd.c b/fs/timerfd.c index c173cc196175..384fa759a563 100644 --- a/fs/timerfd.c +++ b/fs/timerfd.c | |||
@@ -40,6 +40,7 @@ struct timerfd_ctx { | |||
40 | short unsigned settime_flags; /* to show in fdinfo */ | 40 | short unsigned settime_flags; /* to show in fdinfo */ |
41 | struct rcu_head rcu; | 41 | struct rcu_head rcu; |
42 | struct list_head clist; | 42 | struct list_head clist; |
43 | spinlock_t cancel_lock; | ||
43 | bool might_cancel; | 44 | bool might_cancel; |
44 | }; | 45 | }; |
45 | 46 | ||
@@ -112,7 +113,7 @@ void timerfd_clock_was_set(void) | |||
112 | rcu_read_unlock(); | 113 | rcu_read_unlock(); |
113 | } | 114 | } |
114 | 115 | ||
115 | static void timerfd_remove_cancel(struct timerfd_ctx *ctx) | 116 | static void __timerfd_remove_cancel(struct timerfd_ctx *ctx) |
116 | { | 117 | { |
117 | if (ctx->might_cancel) { | 118 | if (ctx->might_cancel) { |
118 | ctx->might_cancel = false; | 119 | ctx->might_cancel = false; |
@@ -122,6 +123,13 @@ static void timerfd_remove_cancel(struct timerfd_ctx *ctx) | |||
122 | } | 123 | } |
123 | } | 124 | } |
124 | 125 | ||
126 | static void timerfd_remove_cancel(struct timerfd_ctx *ctx) | ||
127 | { | ||
128 | spin_lock(&ctx->cancel_lock); | ||
129 | __timerfd_remove_cancel(ctx); | ||
130 | spin_unlock(&ctx->cancel_lock); | ||
131 | } | ||
132 | |||
125 | static bool timerfd_canceled(struct timerfd_ctx *ctx) | 133 | static bool timerfd_canceled(struct timerfd_ctx *ctx) |
126 | { | 134 | { |
127 | if (!ctx->might_cancel || ctx->moffs != KTIME_MAX) | 135 | if (!ctx->might_cancel || ctx->moffs != KTIME_MAX) |
@@ -132,6 +140,7 @@ static bool timerfd_canceled(struct timerfd_ctx *ctx) | |||
132 | 140 | ||
133 | static void timerfd_setup_cancel(struct timerfd_ctx *ctx, int flags) | 141 | static void timerfd_setup_cancel(struct timerfd_ctx *ctx, int flags) |
134 | { | 142 | { |
143 | spin_lock(&ctx->cancel_lock); | ||
135 | if ((ctx->clockid == CLOCK_REALTIME || | 144 | if ((ctx->clockid == CLOCK_REALTIME || |
136 | ctx->clockid == CLOCK_REALTIME_ALARM) && | 145 | ctx->clockid == CLOCK_REALTIME_ALARM) && |
137 | (flags & TFD_TIMER_ABSTIME) && (flags & TFD_TIMER_CANCEL_ON_SET)) { | 146 | (flags & TFD_TIMER_ABSTIME) && (flags & TFD_TIMER_CANCEL_ON_SET)) { |
@@ -141,9 +150,10 @@ static void timerfd_setup_cancel(struct timerfd_ctx *ctx, int flags) | |||
141 | list_add_rcu(&ctx->clist, &cancel_list); | 150 | list_add_rcu(&ctx->clist, &cancel_list); |
142 | spin_unlock(&cancel_lock); | 151 | spin_unlock(&cancel_lock); |
143 | } | 152 | } |
144 | } else if (ctx->might_cancel) { | 153 | } else { |
145 | timerfd_remove_cancel(ctx); | 154 | __timerfd_remove_cancel(ctx); |
146 | } | 155 | } |
156 | spin_unlock(&ctx->cancel_lock); | ||
147 | } | 157 | } |
148 | 158 | ||
149 | static ktime_t timerfd_get_remaining(struct timerfd_ctx *ctx) | 159 | static ktime_t timerfd_get_remaining(struct timerfd_ctx *ctx) |
@@ -400,6 +410,7 @@ SYSCALL_DEFINE2(timerfd_create, int, clockid, int, flags) | |||
400 | return -ENOMEM; | 410 | return -ENOMEM; |
401 | 411 | ||
402 | init_waitqueue_head(&ctx->wqh); | 412 | init_waitqueue_head(&ctx->wqh); |
413 | spin_lock_init(&ctx->cancel_lock); | ||
403 | ctx->clockid = clockid; | 414 | ctx->clockid = clockid; |
404 | 415 | ||
405 | if (isalarm(ctx)) | 416 | if (isalarm(ctx)) |
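Editor's note on the timerfd hunk above: the new per-context cancel_lock serializes every change to ctx->might_cancel and to the context's membership on the global cancel_list, so concurrent timerfd_settime() callers can no longer race the add/remove decision. A minimal sketch of the same pattern, with hypothetical names and outside the timerfd code:

    #include <linux/spinlock.h>
    #include <linux/list.h>

    /* Hedged sketch (hypothetical 'item' type): a per-object lock makes the
     * "am I on the global list?" decision and the list operation atomic with
     * respect to other updaters of the same object. */
    struct item {
            spinlock_t lock;                /* like ctx->cancel_lock */
            bool on_list;                   /* like ctx->might_cancel */
            struct list_head node;          /* like ctx->clist */
    };

    static DEFINE_SPINLOCK(global_lock);    /* like the global cancel_lock */
    static LIST_HEAD(global_list);          /* like cancel_list */

    static void item_set_tracked(struct item *it, bool track)
    {
            spin_lock(&it->lock);
            if (track && !it->on_list) {
                    it->on_list = true;
                    spin_lock(&global_lock);
                    list_add(&it->node, &global_list);
                    spin_unlock(&global_lock);
            } else if (!track && it->on_list) {
                    it->on_list = false;
                    spin_lock(&global_lock);
                    list_del(&it->node);
                    spin_unlock(&global_lock);
            }
            spin_unlock(&it->lock);
    }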
diff --git a/include/linux/clockchips.h b/include/linux/clockchips.h index 0d442e34c349..5d3053c34fb3 100644 --- a/include/linux/clockchips.h +++ b/include/linux/clockchips.h | |||
@@ -224,4 +224,13 @@ static inline void tick_setup_hrtimer_broadcast(void) { } | |||
224 | 224 | ||
225 | #endif /* !CONFIG_GENERIC_CLOCKEVENTS */ | 225 | #endif /* !CONFIG_GENERIC_CLOCKEVENTS */ |
226 | 226 | ||
227 | #define CLOCKEVENT_OF_DECLARE(name, compat, fn) \ | ||
228 | OF_DECLARE_1_RET(clkevt, name, compat, fn) | ||
229 | |||
230 | #ifdef CONFIG_CLKEVT_PROBE | ||
231 | extern int clockevent_probe(void); | ||
232 | #else | ||
233 | static inline int clockevent_probe(void) { return 0; } | ||
234 | #endif | ||
235 | |||
227 | #endif /* _LINUX_CLOCKCHIPS_H */ | 236 | #endif /* _LINUX_CLOCKCHIPS_H */ |
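Editor's note: CLOCKEVENT_OF_DECLARE mirrors the CLOCKSOURCE_OF_DECLARE registration used by the Gemini driver earlier in this diff; a driver binds an init function to a DT compatible string and clockevent_probe() calls it for matching nodes when CONFIG_CLKEVT_PROBE is enabled. A hedged sketch of a driver hooking in (driver and compatible names are made up):

    #include <linux/clockchips.h>
    #include <linux/init.h>
    #include <linux/of.h>

    /* Hypothetical driver: invoked once per DT node matching the compatible. */
    static int __init acme_timer_clkevt_init(struct device_node *np)
    {
            /* map registers, parse the interrupt, then call
             * clockevents_config_and_register() as usual ... */
            return 0;
    }
    CLOCKEVENT_OF_DECLARE(acme_timer, "acme,timer", acme_timer_clkevt_init);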
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h index b3d2c1a89ac4..96f1e88b767c 100644 --- a/include/linux/cpumask.h +++ b/include/linux/cpumask.h | |||
@@ -649,11 +649,15 @@ static inline size_t cpumask_size(void) | |||
649 | * used. Please use this_cpu_cpumask_var_t in those cases. The direct use | 649 | * used. Please use this_cpu_cpumask_var_t in those cases. The direct use |
650 | * of this_cpu_ptr() or this_cpu_read() will lead to failures when the | 650 | * of this_cpu_ptr() or this_cpu_read() will lead to failures when the |
651 | * other type of cpumask_var_t implementation is configured. | 651 | * other type of cpumask_var_t implementation is configured. |
652 | * | ||
653 | * Please also note that __cpumask_var_read_mostly can be used to declare | ||
654 | * a cpumask_var_t variable itself (not its content) as read mostly. | ||
652 | */ | 655 | */ |
653 | #ifdef CONFIG_CPUMASK_OFFSTACK | 656 | #ifdef CONFIG_CPUMASK_OFFSTACK |
654 | typedef struct cpumask *cpumask_var_t; | 657 | typedef struct cpumask *cpumask_var_t; |
655 | 658 | ||
656 | #define this_cpu_cpumask_var_ptr(x) this_cpu_read(x) | 659 | #define this_cpu_cpumask_var_ptr(x) this_cpu_read(x) |
660 | #define __cpumask_var_read_mostly __read_mostly | ||
657 | 661 | ||
658 | bool alloc_cpumask_var_node(cpumask_var_t *mask, gfp_t flags, int node); | 662 | bool alloc_cpumask_var_node(cpumask_var_t *mask, gfp_t flags, int node); |
659 | bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags); | 663 | bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags); |
@@ -667,6 +671,7 @@ void free_bootmem_cpumask_var(cpumask_var_t mask); | |||
667 | typedef struct cpumask cpumask_var_t[1]; | 671 | typedef struct cpumask cpumask_var_t[1]; |
668 | 672 | ||
669 | #define this_cpu_cpumask_var_ptr(x) this_cpu_ptr(x) | 673 | #define this_cpu_cpumask_var_ptr(x) this_cpu_ptr(x) |
674 | #define __cpumask_var_read_mostly | ||
670 | 675 | ||
671 | static inline bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags) | 676 | static inline bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags) |
672 | { | 677 | { |
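Editor's note: __cpumask_var_read_mostly annotates the cpumask_var_t variable itself, not the mask contents; it expands to __read_mostly when cpumask_var_t is a pointer (CONFIG_CPUMASK_OFFSTACK) and to nothing when it is a one-element array. Typical use, as in the tick-broadcast hunk further down (variable name here is illustrative):

    /* Keep a rarely-written mask variable out of hot, frequently-written
     * cache lines. */
    static cpumask_var_t my_driver_mask __cpumask_var_read_mostly;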
diff --git a/include/linux/delay.h b/include/linux/delay.h index a6ecb34cf547..2ecb3c46b20a 100644 --- a/include/linux/delay.h +++ b/include/linux/delay.h | |||
@@ -5,6 +5,17 @@ | |||
5 | * Copyright (C) 1993 Linus Torvalds | 5 | * Copyright (C) 1993 Linus Torvalds |
6 | * | 6 | * |
7 | * Delay routines, using a pre-computed "loops_per_jiffy" value. | 7 | * Delay routines, using a pre-computed "loops_per_jiffy" value. |
8 | * | ||
9 | * Please note that ndelay(), udelay() and mdelay() may return early for | ||
10 | * several reasons: | ||
11 | * 1. computed loops_per_jiffy too low (due to the time taken to | ||
12 | * execute the timer interrupt.) | ||
13 | * 2. cache behaviour affecting the time it takes to execute the | ||
14 | * loop function. | ||
15 | * 3. CPU clock rate changes. | ||
16 | * | ||
17 | * Please see this thread: | ||
18 | * http://lists.openwall.net/linux-kernel/2011/01/09/56 | ||
8 | */ | 19 | */ |
9 | 20 | ||
10 | #include <linux/kernel.h> | 21 | #include <linux/kernel.h> |
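Editor's note: since the new comment documents that ndelay()/udelay()/mdelay() may return early, callers needing a guaranteed minimum wait should add margin rather than rely on an exact busy-wait. A hedged illustration (the helper and its ~10% pad are arbitrary choices, not a kernel rule):

    #include <linux/kernel.h>
    #include <linux/delay.h>

    /* Hypothetical helper: pad a required minimum delay so that an early
     * return from udelay() still satisfies the hardware's timing. */
    static inline void udelay_at_least(unsigned long usecs)
    {
            udelay(usecs + DIV_ROUND_UP(usecs, 10));        /* ~10% margin */
    }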
diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h index cdab81ba29f8..e52b427223ba 100644 --- a/include/linux/hrtimer.h +++ b/include/linux/hrtimer.h | |||
@@ -88,12 +88,6 @@ enum hrtimer_restart { | |||
88 | * @base: pointer to the timer base (per cpu and per clock) | 88 | * @base: pointer to the timer base (per cpu and per clock) |
89 | * @state: state information (See bit values above) | 89 | * @state: state information (See bit values above) |
90 | * @is_rel: Set if the timer was armed relative | 90 | * @is_rel: Set if the timer was armed relative |
91 | * @start_pid: timer statistics field to store the pid of the task which | ||
92 | * started the timer | ||
93 | * @start_site: timer statistics field to store the site where the timer | ||
94 | * was started | ||
95 | * @start_comm: timer statistics field to store the name of the process which | ||
96 | * started the timer | ||
97 | * | 91 | * |
98 | * The hrtimer structure must be initialized by hrtimer_init() | 92 | * The hrtimer structure must be initialized by hrtimer_init() |
99 | */ | 93 | */ |
@@ -104,11 +98,6 @@ struct hrtimer { | |||
104 | struct hrtimer_clock_base *base; | 98 | struct hrtimer_clock_base *base; |
105 | u8 state; | 99 | u8 state; |
106 | u8 is_rel; | 100 | u8 is_rel; |
107 | #ifdef CONFIG_TIMER_STATS | ||
108 | int start_pid; | ||
109 | void *start_site; | ||
110 | char start_comm[16]; | ||
111 | #endif | ||
112 | }; | 101 | }; |
113 | 102 | ||
114 | /** | 103 | /** |
diff --git a/include/linux/init_task.h b/include/linux/init_task.h index 325f649d77ff..3a85d61f7614 100644 --- a/include/linux/init_task.h +++ b/include/linux/init_task.h | |||
@@ -42,6 +42,27 @@ extern struct fs_struct init_fs; | |||
42 | #define INIT_PREV_CPUTIME(x) | 42 | #define INIT_PREV_CPUTIME(x) |
43 | #endif | 43 | #endif |
44 | 44 | ||
45 | #ifdef CONFIG_POSIX_TIMERS | ||
46 | #define INIT_POSIX_TIMERS(s) \ | ||
47 | .posix_timers = LIST_HEAD_INIT(s.posix_timers), | ||
48 | #define INIT_CPU_TIMERS(s) \ | ||
49 | .cpu_timers = { \ | ||
50 | LIST_HEAD_INIT(s.cpu_timers[0]), \ | ||
51 | LIST_HEAD_INIT(s.cpu_timers[1]), \ | ||
52 | LIST_HEAD_INIT(s.cpu_timers[2]), \ | ||
53 | }, | ||
54 | #define INIT_CPUTIMER(s) \ | ||
55 | .cputimer = { \ | ||
56 | .cputime_atomic = INIT_CPUTIME_ATOMIC, \ | ||
57 | .running = false, \ | ||
58 | .checking_timer = false, \ | ||
59 | }, | ||
60 | #else | ||
61 | #define INIT_POSIX_TIMERS(s) | ||
62 | #define INIT_CPU_TIMERS(s) | ||
63 | #define INIT_CPUTIMER(s) | ||
64 | #endif | ||
65 | |||
45 | #define INIT_SIGNALS(sig) { \ | 66 | #define INIT_SIGNALS(sig) { \ |
46 | .nr_threads = 1, \ | 67 | .nr_threads = 1, \ |
47 | .thread_head = LIST_HEAD_INIT(init_task.thread_node), \ | 68 | .thread_head = LIST_HEAD_INIT(init_task.thread_node), \ |
@@ -49,14 +70,10 @@ extern struct fs_struct init_fs; | |||
49 | .shared_pending = { \ | 70 | .shared_pending = { \ |
50 | .list = LIST_HEAD_INIT(sig.shared_pending.list), \ | 71 | .list = LIST_HEAD_INIT(sig.shared_pending.list), \ |
51 | .signal = {{0}}}, \ | 72 | .signal = {{0}}}, \ |
52 | .posix_timers = LIST_HEAD_INIT(sig.posix_timers), \ | 73 | INIT_POSIX_TIMERS(sig) \ |
53 | .cpu_timers = INIT_CPU_TIMERS(sig.cpu_timers), \ | 74 | INIT_CPU_TIMERS(sig) \ |
54 | .rlim = INIT_RLIMITS, \ | 75 | .rlim = INIT_RLIMITS, \ |
55 | .cputimer = { \ | 76 | INIT_CPUTIMER(sig) \ |
56 | .cputime_atomic = INIT_CPUTIME_ATOMIC, \ | ||
57 | .running = false, \ | ||
58 | .checking_timer = false, \ | ||
59 | }, \ | ||
60 | INIT_PREV_CPUTIME(sig) \ | 77 | INIT_PREV_CPUTIME(sig) \ |
61 | .cred_guard_mutex = \ | 78 | .cred_guard_mutex = \ |
62 | __MUTEX_INITIALIZER(sig.cred_guard_mutex), \ | 79 | __MUTEX_INITIALIZER(sig.cred_guard_mutex), \ |
@@ -247,7 +264,7 @@ extern struct task_group root_task_group; | |||
247 | .blocked = {{0}}, \ | 264 | .blocked = {{0}}, \ |
248 | .alloc_lock = __SPIN_LOCK_UNLOCKED(tsk.alloc_lock), \ | 265 | .alloc_lock = __SPIN_LOCK_UNLOCKED(tsk.alloc_lock), \ |
249 | .journal_info = NULL, \ | 266 | .journal_info = NULL, \ |
250 | .cpu_timers = INIT_CPU_TIMERS(tsk.cpu_timers), \ | 267 | INIT_CPU_TIMERS(tsk) \ |
251 | .pi_lock = __RAW_SPIN_LOCK_UNLOCKED(tsk.pi_lock), \ | 268 | .pi_lock = __RAW_SPIN_LOCK_UNLOCKED(tsk.pi_lock), \ |
252 | .timer_slack_ns = 50000, /* 50 usec default slack */ \ | 269 | .timer_slack_ns = 50000, /* 50 usec default slack */ \ |
253 | .pids = { \ | 270 | .pids = { \ |
@@ -274,13 +291,6 @@ extern struct task_group root_task_group; | |||
274 | } | 291 | } |
275 | 292 | ||
276 | 293 | ||
277 | #define INIT_CPU_TIMERS(cpu_timers) \ | ||
278 | { \ | ||
279 | LIST_HEAD_INIT(cpu_timers[0]), \ | ||
280 | LIST_HEAD_INIT(cpu_timers[1]), \ | ||
281 | LIST_HEAD_INIT(cpu_timers[2]), \ | ||
282 | } | ||
283 | |||
284 | /* Attach to the init_task data structure for proper alignment */ | 294 | /* Attach to the init_task data structure for proper alignment */ |
285 | #define __init_task_data __attribute__((__section__(".data..init_task"))) | 295 | #define __init_task_data __attribute__((__section__(".data..init_task"))) |
286 | 296 | ||
diff --git a/include/linux/math64.h b/include/linux/math64.h index 6e8b5b270ffe..80690c96c734 100644 --- a/include/linux/math64.h +++ b/include/linux/math64.h | |||
@@ -133,6 +133,16 @@ __iter_div_u64_rem(u64 dividend, u32 divisor, u64 *remainder) | |||
133 | return ret; | 133 | return ret; |
134 | } | 134 | } |
135 | 135 | ||
136 | #ifndef mul_u32_u32 | ||
137 | /* | ||
138 | * Many a GCC version messes this up and generates a 64x64 mult :-( | ||
139 | */ | ||
140 | static inline u64 mul_u32_u32(u32 a, u32 b) | ||
141 | { | ||
142 | return (u64)a * b; | ||
143 | } | ||
144 | #endif | ||
145 | |||
136 | #if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__) | 146 | #if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__) |
137 | 147 | ||
138 | #ifndef mul_u64_u32_shr | 148 | #ifndef mul_u64_u32_shr |
@@ -160,9 +170,9 @@ static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift) | |||
160 | al = a; | 170 | al = a; |
161 | ah = a >> 32; | 171 | ah = a >> 32; |
162 | 172 | ||
163 | ret = ((u64)al * mul) >> shift; | 173 | ret = mul_u32_u32(al, mul) >> shift; |
164 | if (ah) | 174 | if (ah) |
165 | ret += ((u64)ah * mul) << (32 - shift); | 175 | ret += mul_u32_u32(ah, mul) << (32 - shift); |
166 | 176 | ||
167 | return ret; | 177 | return ret; |
168 | } | 178 | } |
@@ -186,10 +196,10 @@ static inline u64 mul_u64_u64_shr(u64 a, u64 b, unsigned int shift) | |||
186 | a0.ll = a; | 196 | a0.ll = a; |
187 | b0.ll = b; | 197 | b0.ll = b; |
188 | 198 | ||
189 | rl.ll = (u64)a0.l.low * b0.l.low; | 199 | rl.ll = mul_u32_u32(a0.l.low, b0.l.low); |
190 | rm.ll = (u64)a0.l.low * b0.l.high; | 200 | rm.ll = mul_u32_u32(a0.l.low, b0.l.high); |
191 | rn.ll = (u64)a0.l.high * b0.l.low; | 201 | rn.ll = mul_u32_u32(a0.l.high, b0.l.low); |
192 | rh.ll = (u64)a0.l.high * b0.l.high; | 202 | rh.ll = mul_u32_u32(a0.l.high, b0.l.high); |
193 | 203 | ||
194 | /* | 204 | /* |
195 | * Each of these lines computes a 64-bit intermediate result into "c", | 205 | * Each of these lines computes a 64-bit intermediate result into "c", |
@@ -229,8 +239,8 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor) | |||
229 | } u, rl, rh; | 239 | } u, rl, rh; |
230 | 240 | ||
231 | u.ll = a; | 241 | u.ll = a; |
232 | rl.ll = (u64)u.l.low * mul; | 242 | rl.ll = mul_u32_u32(u.l.low, mul); |
233 | rh.ll = (u64)u.l.high * mul + rl.l.high; | 243 | rh.ll = mul_u32_u32(u.l.high, mul) + rl.l.high; |
234 | 244 | ||
235 | /* Bits 32-63 of the result will be in rh.l.low. */ | 245 | /* Bits 32-63 of the result will be in rh.l.low. */ |
236 | rl.l.high = do_div(rh.ll, divisor); | 246 | rl.l.high = do_div(rh.ll, divisor); |
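Editor's note: mul_u32_u32() exists so an architecture can replace the generic (u64)a * b with a native 32x32->64 multiply when the compiler would otherwise widen both operands and emit a full 64x64 multiplication. A hedged sketch of such an override on 32-bit x86 (illustrative only; the exact constraints in the real arch header may differ):

    /* Architecture header sketch: one MULL instruction yields the 64-bit
     * product of two 32-bit values in EDX:EAX. */
    static inline u64 mul_u32_u32(u32 a, u32 b)
    {
            u32 high, low;

            asm ("mull %[b]" : "=a" (low), "=d" (high)
                             : [a] "a" (a), [b] "rm" (b));

            return low | ((u64)high) << 32;
    }
    #define mul_u32_u32 mul_u32_u32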
diff --git a/include/linux/sched.h b/include/linux/sched.h index ad3ec9ec61f7..6e4782eae076 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h | |||
@@ -734,13 +734,14 @@ struct signal_struct { | |||
734 | unsigned int is_child_subreaper:1; | 734 | unsigned int is_child_subreaper:1; |
735 | unsigned int has_child_subreaper:1; | 735 | unsigned int has_child_subreaper:1; |
736 | 736 | ||
737 | #ifdef CONFIG_POSIX_TIMERS | ||
738 | |||
737 | /* POSIX.1b Interval Timers */ | 739 | /* POSIX.1b Interval Timers */ |
738 | int posix_timer_id; | 740 | int posix_timer_id; |
739 | struct list_head posix_timers; | 741 | struct list_head posix_timers; |
740 | 742 | ||
741 | /* ITIMER_REAL timer for the process */ | 743 | /* ITIMER_REAL timer for the process */ |
742 | struct hrtimer real_timer; | 744 | struct hrtimer real_timer; |
743 | struct pid *leader_pid; | ||
744 | ktime_t it_real_incr; | 745 | ktime_t it_real_incr; |
745 | 746 | ||
746 | /* | 747 | /* |
@@ -759,12 +760,16 @@ struct signal_struct { | |||
759 | /* Earliest-expiration cache. */ | 760 | /* Earliest-expiration cache. */ |
760 | struct task_cputime cputime_expires; | 761 | struct task_cputime cputime_expires; |
761 | 762 | ||
763 | struct list_head cpu_timers[3]; | ||
764 | |||
765 | #endif | ||
766 | |||
767 | struct pid *leader_pid; | ||
768 | |||
762 | #ifdef CONFIG_NO_HZ_FULL | 769 | #ifdef CONFIG_NO_HZ_FULL |
763 | atomic_t tick_dep_mask; | 770 | atomic_t tick_dep_mask; |
764 | #endif | 771 | #endif |
765 | 772 | ||
766 | struct list_head cpu_timers[3]; | ||
767 | |||
768 | struct pid *tty_old_pgrp; | 773 | struct pid *tty_old_pgrp; |
769 | 774 | ||
770 | /* boolean value for session group leader */ | 775 | /* boolean value for session group leader */ |
@@ -1691,8 +1696,10 @@ struct task_struct { | |||
1691 | /* mm fault and swap info: this can arguably be seen as either mm-specific or thread-specific */ | 1696 | /* mm fault and swap info: this can arguably be seen as either mm-specific or thread-specific */ |
1692 | unsigned long min_flt, maj_flt; | 1697 | unsigned long min_flt, maj_flt; |
1693 | 1698 | ||
1699 | #ifdef CONFIG_POSIX_TIMERS | ||
1694 | struct task_cputime cputime_expires; | 1700 | struct task_cputime cputime_expires; |
1695 | struct list_head cpu_timers[3]; | 1701 | struct list_head cpu_timers[3]; |
1702 | #endif | ||
1696 | 1703 | ||
1697 | /* process credentials */ | 1704 | /* process credentials */ |
1698 | const struct cred __rcu *ptracer_cred; /* Tracer's credentials at attach */ | 1705 | const struct cred __rcu *ptracer_cred; /* Tracer's credentials at attach */ |
diff --git a/include/linux/timer.h b/include/linux/timer.h index 51d601f192d4..5a209b84fd9e 100644 --- a/include/linux/timer.h +++ b/include/linux/timer.h | |||
@@ -20,11 +20,6 @@ struct timer_list { | |||
20 | unsigned long data; | 20 | unsigned long data; |
21 | u32 flags; | 21 | u32 flags; |
22 | 22 | ||
23 | #ifdef CONFIG_TIMER_STATS | ||
24 | int start_pid; | ||
25 | void *start_site; | ||
26 | char start_comm[16]; | ||
27 | #endif | ||
28 | #ifdef CONFIG_LOCKDEP | 23 | #ifdef CONFIG_LOCKDEP |
29 | struct lockdep_map lockdep_map; | 24 | struct lockdep_map lockdep_map; |
30 | #endif | 25 | #endif |
@@ -197,46 +192,6 @@ extern int mod_timer_pending(struct timer_list *timer, unsigned long expires); | |||
197 | */ | 192 | */ |
198 | #define NEXT_TIMER_MAX_DELTA ((1UL << 30) - 1) | 193 | #define NEXT_TIMER_MAX_DELTA ((1UL << 30) - 1) |
199 | 194 | ||
200 | /* | ||
201 | * Timer-statistics info: | ||
202 | */ | ||
203 | #ifdef CONFIG_TIMER_STATS | ||
204 | |||
205 | extern int timer_stats_active; | ||
206 | |||
207 | extern void init_timer_stats(void); | ||
208 | |||
209 | extern void timer_stats_update_stats(void *timer, pid_t pid, void *startf, | ||
210 | void *timerf, char *comm, u32 flags); | ||
211 | |||
212 | extern void __timer_stats_timer_set_start_info(struct timer_list *timer, | ||
213 | void *addr); | ||
214 | |||
215 | static inline void timer_stats_timer_set_start_info(struct timer_list *timer) | ||
216 | { | ||
217 | if (likely(!timer_stats_active)) | ||
218 | return; | ||
219 | __timer_stats_timer_set_start_info(timer, __builtin_return_address(0)); | ||
220 | } | ||
221 | |||
222 | static inline void timer_stats_timer_clear_start_info(struct timer_list *timer) | ||
223 | { | ||
224 | timer->start_site = NULL; | ||
225 | } | ||
226 | #else | ||
227 | static inline void init_timer_stats(void) | ||
228 | { | ||
229 | } | ||
230 | |||
231 | static inline void timer_stats_timer_set_start_info(struct timer_list *timer) | ||
232 | { | ||
233 | } | ||
234 | |||
235 | static inline void timer_stats_timer_clear_start_info(struct timer_list *timer) | ||
236 | { | ||
237 | } | ||
238 | #endif | ||
239 | |||
240 | extern void add_timer(struct timer_list *timer); | 195 | extern void add_timer(struct timer_list *timer); |
241 | 196 | ||
242 | extern int try_to_del_timer_sync(struct timer_list *timer); | 197 | extern int try_to_del_timer_sync(struct timer_list *timer); |
diff --git a/kernel/fork.c b/kernel/fork.c index 11c5c8ab827c..105c6676d93b 100644 --- a/kernel/fork.c +++ b/kernel/fork.c | |||
@@ -1304,6 +1304,7 @@ void __cleanup_sighand(struct sighand_struct *sighand) | |||
1304 | } | 1304 | } |
1305 | } | 1305 | } |
1306 | 1306 | ||
1307 | #ifdef CONFIG_POSIX_TIMERS | ||
1307 | /* | 1308 | /* |
1308 | * Initialize POSIX timer handling for a thread group. | 1309 | * Initialize POSIX timer handling for a thread group. |
1309 | */ | 1310 | */ |
@@ -1322,6 +1323,9 @@ static void posix_cpu_timers_init_group(struct signal_struct *sig) | |||
1322 | INIT_LIST_HEAD(&sig->cpu_timers[1]); | 1323 | INIT_LIST_HEAD(&sig->cpu_timers[1]); |
1323 | INIT_LIST_HEAD(&sig->cpu_timers[2]); | 1324 | INIT_LIST_HEAD(&sig->cpu_timers[2]); |
1324 | } | 1325 | } |
1326 | #else | ||
1327 | static inline void posix_cpu_timers_init_group(struct signal_struct *sig) { } | ||
1328 | #endif | ||
1325 | 1329 | ||
1326 | static int copy_signal(unsigned long clone_flags, struct task_struct *tsk) | 1330 | static int copy_signal(unsigned long clone_flags, struct task_struct *tsk) |
1327 | { | 1331 | { |
@@ -1346,11 +1350,11 @@ static int copy_signal(unsigned long clone_flags, struct task_struct *tsk) | |||
1346 | init_waitqueue_head(&sig->wait_chldexit); | 1350 | init_waitqueue_head(&sig->wait_chldexit); |
1347 | sig->curr_target = tsk; | 1351 | sig->curr_target = tsk; |
1348 | init_sigpending(&sig->shared_pending); | 1352 | init_sigpending(&sig->shared_pending); |
1349 | INIT_LIST_HEAD(&sig->posix_timers); | ||
1350 | seqlock_init(&sig->stats_lock); | 1353 | seqlock_init(&sig->stats_lock); |
1351 | prev_cputime_init(&sig->prev_cputime); | 1354 | prev_cputime_init(&sig->prev_cputime); |
1352 | 1355 | ||
1353 | #ifdef CONFIG_POSIX_TIMERS | 1356 | #ifdef CONFIG_POSIX_TIMERS |
1357 | INIT_LIST_HEAD(&sig->posix_timers); | ||
1354 | hrtimer_init(&sig->real_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); | 1358 | hrtimer_init(&sig->real_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); |
1355 | sig->real_timer.function = it_real_fn; | 1359 | sig->real_timer.function = it_real_fn; |
1356 | #endif | 1360 | #endif |
@@ -1425,6 +1429,7 @@ static void rt_mutex_init_task(struct task_struct *p) | |||
1425 | #endif | 1429 | #endif |
1426 | } | 1430 | } |
1427 | 1431 | ||
1432 | #ifdef CONFIG_POSIX_TIMERS | ||
1428 | /* | 1433 | /* |
1429 | * Initialize POSIX timer handling for a single task. | 1434 | * Initialize POSIX timer handling for a single task. |
1430 | */ | 1435 | */ |
@@ -1437,6 +1442,9 @@ static void posix_cpu_timers_init(struct task_struct *tsk) | |||
1437 | INIT_LIST_HEAD(&tsk->cpu_timers[1]); | 1442 | INIT_LIST_HEAD(&tsk->cpu_timers[1]); |
1438 | INIT_LIST_HEAD(&tsk->cpu_timers[2]); | 1443 | INIT_LIST_HEAD(&tsk->cpu_timers[2]); |
1439 | } | 1444 | } |
1445 | #else | ||
1446 | static inline void posix_cpu_timers_init(struct task_struct *tsk) { } | ||
1447 | #endif | ||
1440 | 1448 | ||
1441 | static inline void | 1449 | static inline void |
1442 | init_task_pid(struct task_struct *task, enum pid_type type, struct pid *pid) | 1450 | init_task_pid(struct task_struct *task, enum pid_type type, struct pid *pid) |
diff --git a/kernel/kthread.c b/kernel/kthread.c index 2318fba86277..8461a4372e8a 100644 --- a/kernel/kthread.c +++ b/kernel/kthread.c | |||
@@ -850,7 +850,6 @@ void __kthread_queue_delayed_work(struct kthread_worker *worker, | |||
850 | 850 | ||
851 | list_add(&work->node, &worker->delayed_work_list); | 851 | list_add(&work->node, &worker->delayed_work_list); |
852 | work->worker = worker; | 852 | work->worker = worker; |
853 | timer_stats_timer_set_start_info(&dwork->timer); | ||
854 | timer->expires = jiffies + delay; | 853 | timer->expires = jiffies + delay; |
855 | add_timer(timer); | 854 | add_timer(timer); |
856 | } | 855 | } |
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c index 2516b8df6dbb..a688a8206727 100644 --- a/kernel/sched/rt.c +++ b/kernel/sched/rt.c | |||
@@ -2246,6 +2246,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio) | |||
2246 | } | 2246 | } |
2247 | } | 2247 | } |
2248 | 2248 | ||
2249 | #ifdef CONFIG_POSIX_TIMERS | ||
2249 | static void watchdog(struct rq *rq, struct task_struct *p) | 2250 | static void watchdog(struct rq *rq, struct task_struct *p) |
2250 | { | 2251 | { |
2251 | unsigned long soft, hard; | 2252 | unsigned long soft, hard; |
@@ -2267,6 +2268,9 @@ static void watchdog(struct rq *rq, struct task_struct *p) | |||
2267 | p->cputime_expires.sched_exp = p->se.sum_exec_runtime; | 2268 | p->cputime_expires.sched_exp = p->se.sum_exec_runtime; |
2268 | } | 2269 | } |
2269 | } | 2270 | } |
2271 | #else | ||
2272 | static inline void watchdog(struct rq *rq, struct task_struct *p) { } | ||
2273 | #endif | ||
2270 | 2274 | ||
2271 | static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued) | 2275 | static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued) |
2272 | { | 2276 | { |
diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h index 34659a853505..c69a9870ab79 100644 --- a/kernel/sched/stats.h +++ b/kernel/sched/stats.h | |||
@@ -172,18 +172,19 @@ sched_info_switch(struct rq *rq, | |||
172 | */ | 172 | */ |
173 | 173 | ||
174 | /** | 174 | /** |
175 | * cputimer_running - return true if cputimer is running | 175 | * get_running_cputimer - return &tsk->signal->cputimer if cputimer is running |
176 | * | 176 | * |
177 | * @tsk: Pointer to target task. | 177 | * @tsk: Pointer to target task. |
178 | */ | 178 | */ |
179 | static inline bool cputimer_running(struct task_struct *tsk) | 179 | #ifdef CONFIG_POSIX_TIMERS |
180 | 180 | static inline | |
181 | struct thread_group_cputimer *get_running_cputimer(struct task_struct *tsk) | ||
181 | { | 182 | { |
182 | struct thread_group_cputimer *cputimer = &tsk->signal->cputimer; | 183 | struct thread_group_cputimer *cputimer = &tsk->signal->cputimer; |
183 | 184 | ||
184 | /* Check if cputimer isn't running. This is accessed without locking. */ | 185 | /* Check if cputimer isn't running. This is accessed without locking. */ |
185 | if (!READ_ONCE(cputimer->running)) | 186 | if (!READ_ONCE(cputimer->running)) |
186 | return false; | 187 | return NULL; |
187 | 188 | ||
188 | /* | 189 | /* |
189 | * After we flush the task's sum_exec_runtime to sig->sum_sched_runtime | 190 | * After we flush the task's sum_exec_runtime to sig->sum_sched_runtime |
@@ -200,10 +201,17 @@ static inline bool cputimer_running(struct task_struct *tsk) | |||
200 | * clock delta is behind the expiring timer value. | 201 | * clock delta is behind the expiring timer value. |
201 | */ | 202 | */ |
202 | if (unlikely(!tsk->sighand)) | 203 | if (unlikely(!tsk->sighand)) |
203 | return false; | 204 | return NULL; |
204 | 205 | ||
205 | return true; | 206 | return cputimer; |
207 | } | ||
208 | #else | ||
209 | static inline | ||
210 | struct thread_group_cputimer *get_running_cputimer(struct task_struct *tsk) | ||
211 | { | ||
212 | return NULL; | ||
206 | } | 213 | } |
214 | #endif | ||
207 | 215 | ||
208 | /** | 216 | /** |
209 | * account_group_user_time - Maintain utime for a thread group. | 217 | * account_group_user_time - Maintain utime for a thread group. |
@@ -218,9 +226,9 @@ static inline bool cputimer_running(struct task_struct *tsk) | |||
218 | static inline void account_group_user_time(struct task_struct *tsk, | 226 | static inline void account_group_user_time(struct task_struct *tsk, |
219 | cputime_t cputime) | 227 | cputime_t cputime) |
220 | { | 228 | { |
221 | struct thread_group_cputimer *cputimer = &tsk->signal->cputimer; | 229 | struct thread_group_cputimer *cputimer = get_running_cputimer(tsk); |
222 | 230 | ||
223 | if (!cputimer_running(tsk)) | 231 | if (!cputimer) |
224 | return; | 232 | return; |
225 | 233 | ||
226 | atomic64_add(cputime, &cputimer->cputime_atomic.utime); | 234 | atomic64_add(cputime, &cputimer->cputime_atomic.utime); |
@@ -239,9 +247,9 @@ static inline void account_group_user_time(struct task_struct *tsk, | |||
239 | static inline void account_group_system_time(struct task_struct *tsk, | 247 | static inline void account_group_system_time(struct task_struct *tsk, |
240 | cputime_t cputime) | 248 | cputime_t cputime) |
241 | { | 249 | { |
242 | struct thread_group_cputimer *cputimer = &tsk->signal->cputimer; | 250 | struct thread_group_cputimer *cputimer = get_running_cputimer(tsk); |
243 | 251 | ||
244 | if (!cputimer_running(tsk)) | 252 | if (!cputimer) |
245 | return; | 253 | return; |
246 | 254 | ||
247 | atomic64_add(cputime, &cputimer->cputime_atomic.stime); | 255 | atomic64_add(cputime, &cputimer->cputime_atomic.stime); |
@@ -260,9 +268,9 @@ static inline void account_group_system_time(struct task_struct *tsk, | |||
260 | static inline void account_group_exec_runtime(struct task_struct *tsk, | 268 | static inline void account_group_exec_runtime(struct task_struct *tsk, |
261 | unsigned long long ns) | 269 | unsigned long long ns) |
262 | { | 270 | { |
263 | struct thread_group_cputimer *cputimer = &tsk->signal->cputimer; | 271 | struct thread_group_cputimer *cputimer = get_running_cputimer(tsk); |
264 | 272 | ||
265 | if (!cputimer_running(tsk)) | 273 | if (!cputimer) |
266 | return; | 274 | return; |
267 | 275 | ||
268 | atomic64_add(ns, &cputimer->cputime_atomic.sum_exec_runtime); | 276 | atomic64_add(ns, &cputimer->cputime_atomic.sum_exec_runtime); |
diff --git a/kernel/time/Makefile b/kernel/time/Makefile index 976840d29a71..938dbf33ef49 100644 --- a/kernel/time/Makefile +++ b/kernel/time/Makefile | |||
@@ -15,6 +15,5 @@ ifeq ($(CONFIG_GENERIC_CLOCKEVENTS_BROADCAST),y) | |||
15 | endif | 15 | endif |
16 | obj-$(CONFIG_GENERIC_SCHED_CLOCK) += sched_clock.o | 16 | obj-$(CONFIG_GENERIC_SCHED_CLOCK) += sched_clock.o |
17 | obj-$(CONFIG_TICK_ONESHOT) += tick-oneshot.o tick-sched.o | 17 | obj-$(CONFIG_TICK_ONESHOT) += tick-oneshot.o tick-sched.o |
18 | obj-$(CONFIG_TIMER_STATS) += timer_stats.o | ||
19 | obj-$(CONFIG_DEBUG_FS) += timekeeping_debug.o | 18 | obj-$(CONFIG_DEBUG_FS) += timekeeping_debug.o |
20 | obj-$(CONFIG_TEST_UDELAY) += test_udelay.o | 19 | obj-$(CONFIG_TEST_UDELAY) += test_udelay.o |
diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c index c6ecedd3b839..8e11d8d9f419 100644 --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c | |||
@@ -94,17 +94,15 @@ DEFINE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases) = | |||
94 | }; | 94 | }; |
95 | 95 | ||
96 | static const int hrtimer_clock_to_base_table[MAX_CLOCKS] = { | 96 | static const int hrtimer_clock_to_base_table[MAX_CLOCKS] = { |
97 | /* Make sure we catch unsupported clockids */ | ||
98 | [0 ... MAX_CLOCKS - 1] = HRTIMER_MAX_CLOCK_BASES, | ||
99 | |||
97 | [CLOCK_REALTIME] = HRTIMER_BASE_REALTIME, | 100 | [CLOCK_REALTIME] = HRTIMER_BASE_REALTIME, |
98 | [CLOCK_MONOTONIC] = HRTIMER_BASE_MONOTONIC, | 101 | [CLOCK_MONOTONIC] = HRTIMER_BASE_MONOTONIC, |
99 | [CLOCK_BOOTTIME] = HRTIMER_BASE_BOOTTIME, | 102 | [CLOCK_BOOTTIME] = HRTIMER_BASE_BOOTTIME, |
100 | [CLOCK_TAI] = HRTIMER_BASE_TAI, | 103 | [CLOCK_TAI] = HRTIMER_BASE_TAI, |
101 | }; | 104 | }; |
102 | 105 | ||
103 | static inline int hrtimer_clockid_to_base(clockid_t clock_id) | ||
104 | { | ||
105 | return hrtimer_clock_to_base_table[clock_id]; | ||
106 | } | ||
107 | |||
108 | /* | 106 | /* |
109 | * Functions and macros which are different for UP/SMP systems are kept in a | 107 | * Functions and macros which are different for UP/SMP systems are kept in a |
110 | * single place | 108 | * single place |
@@ -766,34 +764,6 @@ void hrtimers_resume(void) | |||
766 | clock_was_set_delayed(); | 764 | clock_was_set_delayed(); |
767 | } | 765 | } |
768 | 766 | ||
769 | static inline void timer_stats_hrtimer_set_start_info(struct hrtimer *timer) | ||
770 | { | ||
771 | #ifdef CONFIG_TIMER_STATS | ||
772 | if (timer->start_site) | ||
773 | return; | ||
774 | timer->start_site = __builtin_return_address(0); | ||
775 | memcpy(timer->start_comm, current->comm, TASK_COMM_LEN); | ||
776 | timer->start_pid = current->pid; | ||
777 | #endif | ||
778 | } | ||
779 | |||
780 | static inline void timer_stats_hrtimer_clear_start_info(struct hrtimer *timer) | ||
781 | { | ||
782 | #ifdef CONFIG_TIMER_STATS | ||
783 | timer->start_site = NULL; | ||
784 | #endif | ||
785 | } | ||
786 | |||
787 | static inline void timer_stats_account_hrtimer(struct hrtimer *timer) | ||
788 | { | ||
789 | #ifdef CONFIG_TIMER_STATS | ||
790 | if (likely(!timer_stats_active)) | ||
791 | return; | ||
792 | timer_stats_update_stats(timer, timer->start_pid, timer->start_site, | ||
793 | timer->function, timer->start_comm, 0); | ||
794 | #endif | ||
795 | } | ||
796 | |||
797 | /* | 767 | /* |
798 | * Counterpart to lock_hrtimer_base above: | 768 | * Counterpart to lock_hrtimer_base above: |
799 | */ | 769 | */ |
@@ -932,7 +902,6 @@ remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base, bool rest | |||
932 | * rare case and less expensive than a smp call. | 902 | * rare case and less expensive than a smp call. |
933 | */ | 903 | */ |
934 | debug_deactivate(timer); | 904 | debug_deactivate(timer); |
935 | timer_stats_hrtimer_clear_start_info(timer); | ||
936 | reprogram = base->cpu_base == this_cpu_ptr(&hrtimer_bases); | 905 | reprogram = base->cpu_base == this_cpu_ptr(&hrtimer_bases); |
937 | 906 | ||
938 | if (!restart) | 907 | if (!restart) |
@@ -990,8 +959,6 @@ void hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim, | |||
990 | /* Switch the timer base, if necessary: */ | 959 | /* Switch the timer base, if necessary: */ |
991 | new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED); | 960 | new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED); |
992 | 961 | ||
993 | timer_stats_hrtimer_set_start_info(timer); | ||
994 | |||
995 | leftmost = enqueue_hrtimer(timer, new_base); | 962 | leftmost = enqueue_hrtimer(timer, new_base); |
996 | if (!leftmost) | 963 | if (!leftmost) |
997 | goto unlock; | 964 | goto unlock; |
@@ -1112,6 +1079,18 @@ u64 hrtimer_get_next_event(void) | |||
1112 | } | 1079 | } |
1113 | #endif | 1080 | #endif |
1114 | 1081 | ||
1082 | static inline int hrtimer_clockid_to_base(clockid_t clock_id) | ||
1083 | { | ||
1084 | if (likely(clock_id < MAX_CLOCKS)) { | ||
1085 | int base = hrtimer_clock_to_base_table[clock_id]; | ||
1086 | |||
1087 | if (likely(base != HRTIMER_MAX_CLOCK_BASES)) | ||
1088 | return base; | ||
1089 | } | ||
1090 | WARN(1, "Invalid clockid %d. Using MONOTONIC\n", clock_id); | ||
1091 | return HRTIMER_BASE_MONOTONIC; | ||
1092 | } | ||
1093 | |||
1115 | static void __hrtimer_init(struct hrtimer *timer, clockid_t clock_id, | 1094 | static void __hrtimer_init(struct hrtimer *timer, clockid_t clock_id, |
1116 | enum hrtimer_mode mode) | 1095 | enum hrtimer_mode mode) |
1117 | { | 1096 | { |
@@ -1128,12 +1107,6 @@ static void __hrtimer_init(struct hrtimer *timer, clockid_t clock_id, | |||
1128 | base = hrtimer_clockid_to_base(clock_id); | 1107 | base = hrtimer_clockid_to_base(clock_id); |
1129 | timer->base = &cpu_base->clock_base[base]; | 1108 | timer->base = &cpu_base->clock_base[base]; |
1130 | timerqueue_init(&timer->node); | 1109 | timerqueue_init(&timer->node); |
1131 | |||
1132 | #ifdef CONFIG_TIMER_STATS | ||
1133 | timer->start_site = NULL; | ||
1134 | timer->start_pid = -1; | ||
1135 | memset(timer->start_comm, 0, TASK_COMM_LEN); | ||
1136 | #endif | ||
1137 | } | 1110 | } |
1138 | 1111 | ||
1139 | /** | 1112 | /** |
@@ -1217,7 +1190,6 @@ static void __run_hrtimer(struct hrtimer_cpu_base *cpu_base, | |||
1217 | raw_write_seqcount_barrier(&cpu_base->seq); | 1190 | raw_write_seqcount_barrier(&cpu_base->seq); |
1218 | 1191 | ||
1219 | __remove_hrtimer(timer, base, HRTIMER_STATE_INACTIVE, 0); | 1192 | __remove_hrtimer(timer, base, HRTIMER_STATE_INACTIVE, 0); |
1220 | timer_stats_account_hrtimer(timer); | ||
1221 | fn = timer->function; | 1193 | fn = timer->function; |
1222 | 1194 | ||
1223 | /* | 1195 | /* |
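Editor's note: with the lookup table pre-filled to HRTIMER_MAX_CLOCK_BASES and the relocated hrtimer_clockid_to_base() performing the range and validity check, an unsupported clockid now triggers a WARN and falls back to the monotonic base instead of indexing past the table. Hedged illustration of the caller-visible behaviour:

    #include <linux/hrtimer.h>

    static struct hrtimer demo_timer;

    static void demo_init(void)
    {
            /* CLOCK_MONOTONIC_RAW has no hrtimer clock base: this should now
             * WARN and behave as if CLOCK_MONOTONIC had been requested. */
            hrtimer_init(&demo_timer, CLOCK_MONOTONIC_RAW, HRTIMER_MODE_REL);
    }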
diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c index 17ac99b60ee5..987e496bb51a 100644 --- a/kernel/time/tick-broadcast.c +++ b/kernel/time/tick-broadcast.c | |||
@@ -29,12 +29,13 @@ | |||
29 | */ | 29 | */ |
30 | 30 | ||
31 | static struct tick_device tick_broadcast_device; | 31 | static struct tick_device tick_broadcast_device; |
32 | static cpumask_var_t tick_broadcast_mask; | 32 | static cpumask_var_t tick_broadcast_mask __cpumask_var_read_mostly; |
33 | static cpumask_var_t tick_broadcast_on; | 33 | static cpumask_var_t tick_broadcast_on __cpumask_var_read_mostly; |
34 | static cpumask_var_t tmpmask; | 34 | static cpumask_var_t tmpmask __cpumask_var_read_mostly; |
35 | static DEFINE_RAW_SPINLOCK(tick_broadcast_lock); | ||
36 | static int tick_broadcast_forced; | 35 | static int tick_broadcast_forced; |
37 | 36 | ||
37 | static __cacheline_aligned_in_smp DEFINE_RAW_SPINLOCK(tick_broadcast_lock); | ||
38 | |||
38 | #ifdef CONFIG_TICK_ONESHOT | 39 | #ifdef CONFIG_TICK_ONESHOT |
39 | static void tick_broadcast_clear_oneshot(int cpu); | 40 | static void tick_broadcast_clear_oneshot(int cpu); |
40 | static void tick_resume_broadcast_oneshot(struct clock_event_device *bc); | 41 | static void tick_resume_broadcast_oneshot(struct clock_event_device *bc); |
@@ -516,9 +517,9 @@ void tick_resume_broadcast(void) | |||
516 | 517 | ||
517 | #ifdef CONFIG_TICK_ONESHOT | 518 | #ifdef CONFIG_TICK_ONESHOT |
518 | 519 | ||
519 | static cpumask_var_t tick_broadcast_oneshot_mask; | 520 | static cpumask_var_t tick_broadcast_oneshot_mask __cpumask_var_read_mostly; |
520 | static cpumask_var_t tick_broadcast_pending_mask; | 521 | static cpumask_var_t tick_broadcast_pending_mask __cpumask_var_read_mostly; |
521 | static cpumask_var_t tick_broadcast_force_mask; | 522 | static cpumask_var_t tick_broadcast_force_mask __cpumask_var_read_mostly; |
522 | 523 | ||
523 | /* | 524 | /* |
524 | * Exposed for debugging: see timer_list.c | 525 | * Exposed for debugging: see timer_list.c |
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c index db087d7e106d..95b258dd75db 100644 --- a/kernel/time/timekeeping.c +++ b/kernel/time/timekeeping.c | |||
@@ -1275,27 +1275,8 @@ error: /* even if we error out, we forwarded the time, so call update */ | |||
1275 | } | 1275 | } |
1276 | EXPORT_SYMBOL(timekeeping_inject_offset); | 1276 | EXPORT_SYMBOL(timekeeping_inject_offset); |
1277 | 1277 | ||
1278 | |||
1279 | /** | ||
1280 | * timekeeping_get_tai_offset - Returns current TAI offset from UTC | ||
1281 | * | ||
1282 | */ | ||
1283 | s32 timekeeping_get_tai_offset(void) | ||
1284 | { | ||
1285 | struct timekeeper *tk = &tk_core.timekeeper; | ||
1286 | unsigned int seq; | ||
1287 | s32 ret; | ||
1288 | |||
1289 | do { | ||
1290 | seq = read_seqcount_begin(&tk_core.seq); | ||
1291 | ret = tk->tai_offset; | ||
1292 | } while (read_seqcount_retry(&tk_core.seq, seq)); | ||
1293 | |||
1294 | return ret; | ||
1295 | } | ||
1296 | |||
1297 | /** | 1278 | /** |
1298 | * __timekeeping_set_tai_offset - Lock free worker function | 1279 | * __timekeeping_set_tai_offset - Sets the TAI offset from UTC and monotonic |
1299 | * | 1280 | * |
1300 | */ | 1281 | */ |
1301 | static void __timekeeping_set_tai_offset(struct timekeeper *tk, s32 tai_offset) | 1282 | static void __timekeeping_set_tai_offset(struct timekeeper *tk, s32 tai_offset) |
@@ -1305,24 +1286,6 @@ static void __timekeeping_set_tai_offset(struct timekeeper *tk, s32 tai_offset) | |||
1305 | } | 1286 | } |
1306 | 1287 | ||
1307 | /** | 1288 | /** |
1308 | * timekeeping_set_tai_offset - Sets the current TAI offset from UTC | ||
1309 | * | ||
1310 | */ | ||
1311 | void timekeeping_set_tai_offset(s32 tai_offset) | ||
1312 | { | ||
1313 | struct timekeeper *tk = &tk_core.timekeeper; | ||
1314 | unsigned long flags; | ||
1315 | |||
1316 | raw_spin_lock_irqsave(&timekeeper_lock, flags); | ||
1317 | write_seqcount_begin(&tk_core.seq); | ||
1318 | __timekeeping_set_tai_offset(tk, tai_offset); | ||
1319 | timekeeping_update(tk, TK_MIRROR | TK_CLOCK_WAS_SET); | ||
1320 | write_seqcount_end(&tk_core.seq); | ||
1321 | raw_spin_unlock_irqrestore(&timekeeper_lock, flags); | ||
1322 | clock_was_set(); | ||
1323 | } | ||
1324 | |||
1325 | /** | ||
1326 | * change_clocksource - Swaps clocksources if a new one is available | 1289 | * change_clocksource - Swaps clocksources if a new one is available |
1327 | * | 1290 | * |
1328 | * Accumulates current time interval and initializes new clocksource | 1291 | * Accumulates current time interval and initializes new clocksource |
diff --git a/kernel/time/timekeeping.h b/kernel/time/timekeeping.h index 704f595ce83f..d0914676d4c5 100644 --- a/kernel/time/timekeeping.h +++ b/kernel/time/timekeeping.h | |||
@@ -11,8 +11,6 @@ extern ktime_t ktime_get_update_offsets_now(unsigned int *cwsseq, | |||
11 | extern int timekeeping_valid_for_hres(void); | 11 | extern int timekeeping_valid_for_hres(void); |
12 | extern u64 timekeeping_max_deferment(void); | 12 | extern u64 timekeeping_max_deferment(void); |
13 | extern int timekeeping_inject_offset(struct timespec *ts); | 13 | extern int timekeeping_inject_offset(struct timespec *ts); |
14 | extern s32 timekeeping_get_tai_offset(void); | ||
15 | extern void timekeeping_set_tai_offset(s32 tai_offset); | ||
16 | extern int timekeeping_suspend(void); | 14 | extern int timekeeping_suspend(void); |
17 | extern void timekeeping_resume(void); | 15 | extern void timekeeping_resume(void); |
18 | 16 | ||
diff --git a/kernel/time/timer.c b/kernel/time/timer.c index ec33a6933eae..82a6bfa0c307 100644 --- a/kernel/time/timer.c +++ b/kernel/time/timer.c | |||
@@ -571,38 +571,6 @@ internal_add_timer(struct timer_base *base, struct timer_list *timer) | |||
571 | trigger_dyntick_cpu(base, timer); | 571 | trigger_dyntick_cpu(base, timer); |
572 | } | 572 | } |
573 | 573 | ||
574 | #ifdef CONFIG_TIMER_STATS | ||
575 | void __timer_stats_timer_set_start_info(struct timer_list *timer, void *addr) | ||
576 | { | ||
577 | if (timer->start_site) | ||
578 | return; | ||
579 | |||
580 | timer->start_site = addr; | ||
581 | memcpy(timer->start_comm, current->comm, TASK_COMM_LEN); | ||
582 | timer->start_pid = current->pid; | ||
583 | } | ||
584 | |||
585 | static void timer_stats_account_timer(struct timer_list *timer) | ||
586 | { | ||
587 | void *site; | ||
588 | |||
589 | /* | ||
590 | * start_site can be concurrently reset by | ||
591 | * timer_stats_timer_clear_start_info() | ||
592 | */ | ||
593 | site = READ_ONCE(timer->start_site); | ||
594 | if (likely(!site)) | ||
595 | return; | ||
596 | |||
597 | timer_stats_update_stats(timer, timer->start_pid, site, | ||
598 | timer->function, timer->start_comm, | ||
599 | timer->flags); | ||
600 | } | ||
601 | |||
602 | #else | ||
603 | static void timer_stats_account_timer(struct timer_list *timer) {} | ||
604 | #endif | ||
605 | |||
606 | #ifdef CONFIG_DEBUG_OBJECTS_TIMERS | 574 | #ifdef CONFIG_DEBUG_OBJECTS_TIMERS |
607 | 575 | ||
608 | static struct debug_obj_descr timer_debug_descr; | 576 | static struct debug_obj_descr timer_debug_descr; |
@@ -789,11 +757,6 @@ static void do_init_timer(struct timer_list *timer, unsigned int flags, | |||
789 | { | 757 | { |
790 | timer->entry.pprev = NULL; | 758 | timer->entry.pprev = NULL; |
791 | timer->flags = flags | raw_smp_processor_id(); | 759 | timer->flags = flags | raw_smp_processor_id(); |
792 | #ifdef CONFIG_TIMER_STATS | ||
793 | timer->start_site = NULL; | ||
794 | timer->start_pid = -1; | ||
795 | memset(timer->start_comm, 0, TASK_COMM_LEN); | ||
796 | #endif | ||
797 | lockdep_init_map(&timer->lockdep_map, name, key, 0); | 760 | lockdep_init_map(&timer->lockdep_map, name, key, 0); |
798 | } | 761 | } |
799 | 762 | ||
@@ -1001,8 +964,6 @@ __mod_timer(struct timer_list *timer, unsigned long expires, bool pending_only) | |||
1001 | base = lock_timer_base(timer, &flags); | 964 | base = lock_timer_base(timer, &flags); |
1002 | } | 965 | } |
1003 | 966 | ||
1004 | timer_stats_timer_set_start_info(timer); | ||
1005 | |||
1006 | ret = detach_if_pending(timer, base, false); | 967 | ret = detach_if_pending(timer, base, false); |
1007 | if (!ret && pending_only) | 968 | if (!ret && pending_only) |
1008 | goto out_unlock; | 969 | goto out_unlock; |
@@ -1130,7 +1091,6 @@ void add_timer_on(struct timer_list *timer, int cpu) | |||
1130 | struct timer_base *new_base, *base; | 1091 | struct timer_base *new_base, *base; |
1131 | unsigned long flags; | 1092 | unsigned long flags; |
1132 | 1093 | ||
1133 | timer_stats_timer_set_start_info(timer); | ||
1134 | BUG_ON(timer_pending(timer) || !timer->function); | 1094 | BUG_ON(timer_pending(timer) || !timer->function); |
1135 | 1095 | ||
1136 | new_base = get_timer_cpu_base(timer->flags, cpu); | 1096 | new_base = get_timer_cpu_base(timer->flags, cpu); |
@@ -1176,7 +1136,6 @@ int del_timer(struct timer_list *timer) | |||
1176 | 1136 | ||
1177 | debug_assert_init(timer); | 1137 | debug_assert_init(timer); |
1178 | 1138 | ||
1179 | timer_stats_timer_clear_start_info(timer); | ||
1180 | if (timer_pending(timer)) { | 1139 | if (timer_pending(timer)) { |
1181 | base = lock_timer_base(timer, &flags); | 1140 | base = lock_timer_base(timer, &flags); |
1182 | ret = detach_if_pending(timer, base, true); | 1141 | ret = detach_if_pending(timer, base, true); |
@@ -1204,10 +1163,9 @@ int try_to_del_timer_sync(struct timer_list *timer) | |||
1204 | 1163 | ||
1205 | base = lock_timer_base(timer, &flags); | 1164 | base = lock_timer_base(timer, &flags); |
1206 | 1165 | ||
1207 | if (base->running_timer != timer) { | 1166 | if (base->running_timer != timer) |
1208 | timer_stats_timer_clear_start_info(timer); | ||
1209 | ret = detach_if_pending(timer, base, true); | 1167 | ret = detach_if_pending(timer, base, true); |
1210 | } | 1168 | |
1211 | spin_unlock_irqrestore(&base->lock, flags); | 1169 | spin_unlock_irqrestore(&base->lock, flags); |
1212 | 1170 | ||
1213 | return ret; | 1171 | return ret; |
@@ -1331,7 +1289,6 @@ static void expire_timers(struct timer_base *base, struct hlist_head *head) | |||
1331 | unsigned long data; | 1289 | unsigned long data; |
1332 | 1290 | ||
1333 | timer = hlist_entry(head->first, struct timer_list, entry); | 1291 | timer = hlist_entry(head->first, struct timer_list, entry); |
1334 | timer_stats_account_timer(timer); | ||
1335 | 1292 | ||
1336 | base->running_timer = timer; | 1293 | base->running_timer = timer; |
1337 | detach_timer(timer, true); | 1294 | detach_timer(timer, true); |
@@ -1868,7 +1825,6 @@ static void __init init_timer_cpus(void) | |||
1868 | void __init init_timers(void) | 1825 | void __init init_timers(void) |
1869 | { | 1826 | { |
1870 | init_timer_cpus(); | 1827 | init_timer_cpus(); |
1871 | init_timer_stats(); | ||
1872 | open_softirq(TIMER_SOFTIRQ, run_timer_softirq); | 1828 | open_softirq(TIMER_SOFTIRQ, run_timer_softirq); |
1873 | } | 1829 | } |
1874 | 1830 | ||
diff --git a/kernel/time/timer_list.c b/kernel/time/timer_list.c index afe6cd1944fc..ff8d5c13d04b 100644 --- a/kernel/time/timer_list.c +++ b/kernel/time/timer_list.c | |||
@@ -62,21 +62,11 @@ static void | |||
62 | print_timer(struct seq_file *m, struct hrtimer *taddr, struct hrtimer *timer, | 62 | print_timer(struct seq_file *m, struct hrtimer *taddr, struct hrtimer *timer, |
63 | int idx, u64 now) | 63 | int idx, u64 now) |
64 | { | 64 | { |
65 | #ifdef CONFIG_TIMER_STATS | ||
66 | char tmp[TASK_COMM_LEN + 1]; | ||
67 | #endif | ||
68 | SEQ_printf(m, " #%d: ", idx); | 65 | SEQ_printf(m, " #%d: ", idx); |
69 | print_name_offset(m, taddr); | 66 | print_name_offset(m, taddr); |
70 | SEQ_printf(m, ", "); | 67 | SEQ_printf(m, ", "); |
71 | print_name_offset(m, timer->function); | 68 | print_name_offset(m, timer->function); |
72 | SEQ_printf(m, ", S:%02x", timer->state); | 69 | SEQ_printf(m, ", S:%02x", timer->state); |
73 | #ifdef CONFIG_TIMER_STATS | ||
74 | SEQ_printf(m, ", "); | ||
75 | print_name_offset(m, timer->start_site); | ||
76 | memcpy(tmp, timer->start_comm, TASK_COMM_LEN); | ||
77 | tmp[TASK_COMM_LEN] = 0; | ||
78 | SEQ_printf(m, ", %s/%d", tmp, timer->start_pid); | ||
79 | #endif | ||
80 | SEQ_printf(m, "\n"); | 70 | SEQ_printf(m, "\n"); |
81 | SEQ_printf(m, " # expires at %Lu-%Lu nsecs [in %Ld to %Ld nsecs]\n", | 71 | SEQ_printf(m, " # expires at %Lu-%Lu nsecs [in %Ld to %Ld nsecs]\n", |
82 | (unsigned long long)ktime_to_ns(hrtimer_get_softexpires(timer)), | 72 | (unsigned long long)ktime_to_ns(hrtimer_get_softexpires(timer)), |
@@ -127,7 +117,7 @@ print_base(struct seq_file *m, struct hrtimer_clock_base *base, u64 now) | |||
127 | SEQ_printf(m, " .base: %pK\n", base); | 117 | SEQ_printf(m, " .base: %pK\n", base); |
128 | SEQ_printf(m, " .index: %d\n", base->index); | 118 | SEQ_printf(m, " .index: %d\n", base->index); |
129 | 119 | ||
130 | SEQ_printf(m, " .resolution: %u nsecs\n", (unsigned) hrtimer_resolution); | 120 | SEQ_printf(m, " .resolution: %u nsecs\n", hrtimer_resolution); |
131 | 121 | ||
132 | SEQ_printf(m, " .get_time: "); | 122 | SEQ_printf(m, " .get_time: "); |
133 | print_name_offset(m, base->get_time); | 123 | print_name_offset(m, base->get_time); |
diff --git a/kernel/time/timer_stats.c b/kernel/time/timer_stats.c deleted file mode 100644 index afddded947df..000000000000 --- a/kernel/time/timer_stats.c +++ /dev/null | |||
@@ -1,425 +0,0 @@ | |||
1 | /* | ||
2 | * kernel/time/timer_stats.c | ||
3 | * | ||
4 | * Collect timer usage statistics. | ||
5 | * | ||
6 | * Copyright(C) 2006, Red Hat, Inc., Ingo Molnar | ||
7 | * Copyright(C) 2006 Timesys Corp., Thomas Gleixner <tglx@timesys.com> | ||
8 | * | ||
9 | * timer_stats is based on timer_top, a similar functionality which was part of | ||
10 | * Con Kolivas dyntick patch set. It was developed by Daniel Petrini at the | ||
11 | * Instituto Nokia de Tecnologia - INdT - Manaus. timer_top's design was based | ||
12 | * on dynamic allocation of the statistics entries and linear search based | ||
13 | * lookup combined with a global lock, rather than the static array, hash | ||
14 | * and per-CPU locking which is used by timer_stats. It was written for the | ||
15 | * pre hrtimer kernel code and therefore did not take hrtimers into account. | ||
16 | * Nevertheless it provided the base for the timer_stats implementation and | ||
17 | * was a helpful source of inspiration. Kudos to Daniel and the Nokia folks | ||
18 | * for this effort. | ||
19 | * | ||
20 | * timer_top.c is | ||
21 | * Copyright (C) 2005 Instituto Nokia de Tecnologia - INdT - Manaus | ||
22 | * Written by Daniel Petrini <d.pensator@gmail.com> | ||
23 | * timer_top.c was released under the GNU General Public License version 2 | ||
24 | * | ||
25 | * We export the addresses and counting of timer functions being called, | ||
26 | * the pid and cmdline from the owner process if applicable. | ||
27 | * | ||
28 | * Start/stop data collection: | ||
29 | * # echo [1|0] >/proc/timer_stats | ||
30 | * | ||
31 | * Display the information collected so far: | ||
32 | * # cat /proc/timer_stats | ||
33 | * | ||
34 | * This program is free software; you can redistribute it and/or modify | ||
35 | * it under the terms of the GNU General Public License version 2 as | ||
36 | * published by the Free Software Foundation. | ||
37 | */ | ||
38 | |||
39 | #include <linux/proc_fs.h> | ||
40 | #include <linux/module.h> | ||
41 | #include <linux/spinlock.h> | ||
42 | #include <linux/sched.h> | ||
43 | #include <linux/seq_file.h> | ||
44 | #include <linux/kallsyms.h> | ||
45 | |||
46 | #include <linux/uaccess.h> | ||
47 | |||
48 | /* | ||
49 | * This is our basic unit of interest: a timer expiry event identified | ||
50 | * by the timer, its start/expire functions and the PID of the task that | ||
51 | * started the timer. We count the number of times an event happens: | ||
52 | */ | ||
53 | struct entry { | ||
54 | /* | ||
55 | * Hash list: | ||
56 | */ | ||
57 | struct entry *next; | ||
58 | |||
59 | /* | ||
60 | * Hash keys: | ||
61 | */ | ||
62 | void *timer; | ||
63 | void *start_func; | ||
64 | void *expire_func; | ||
65 | pid_t pid; | ||
66 | |||
67 | /* | ||
68 | * Number of timeout events: | ||
69 | */ | ||
70 | unsigned long count; | ||
71 | u32 flags; | ||
72 | |||
73 | /* | ||
74 | * We save the command-line string to preserve | ||
75 | * this information past task exit: | ||
76 | */ | ||
77 | char comm[TASK_COMM_LEN + 1]; | ||
78 | |||
79 | } ____cacheline_aligned_in_smp; | ||
80 | |||
81 | /* | ||
82 | * Spinlock protecting the tables - not taken during lookup: | ||
83 | */ | ||
84 | static DEFINE_RAW_SPINLOCK(table_lock); | ||
85 | |||
86 | /* | ||
87 | * Per-CPU lookup locks for fast hash lookup: | ||
88 | */ | ||
89 | static DEFINE_PER_CPU(raw_spinlock_t, tstats_lookup_lock); | ||
90 | |||
91 | /* | ||
92 | * Mutex to serialize state changes with show-stats activities: | ||
93 | */ | ||
94 | static DEFINE_MUTEX(show_mutex); | ||
95 | |||
96 | /* | ||
97 | * Collection status, active/inactive: | ||
98 | */ | ||
99 | int __read_mostly timer_stats_active; | ||
100 | |||
101 | /* | ||
102 | * Beginning/end timestamps of measurement: | ||
103 | */ | ||
104 | static ktime_t time_start, time_stop; | ||
105 | |||
106 | /* | ||
107 | * tstat entry structs only get allocated while collection is | ||
108 | * active and never freed during that time - this simplifies | ||
109 | * things quite a bit. | ||
110 | * | ||
111 | * They get freed when a new collection period is started. | ||
112 | */ | ||
113 | #define MAX_ENTRIES_BITS 10 | ||
114 | #define MAX_ENTRIES (1UL << MAX_ENTRIES_BITS) | ||
115 | |||
116 | static unsigned long nr_entries; | ||
117 | static struct entry entries[MAX_ENTRIES]; | ||
118 | |||
119 | static atomic_t overflow_count; | ||
120 | |||
121 | /* | ||
122 | * The entries are in a hash-table, for fast lookup: | ||
123 | */ | ||
124 | #define TSTAT_HASH_BITS (MAX_ENTRIES_BITS - 1) | ||
125 | #define TSTAT_HASH_SIZE (1UL << TSTAT_HASH_BITS) | ||
126 | #define TSTAT_HASH_MASK (TSTAT_HASH_SIZE - 1) | ||
127 | |||
128 | #define __tstat_hashfn(entry) \ | ||
129 | (((unsigned long)(entry)->timer ^ \ | ||
130 | (unsigned long)(entry)->start_func ^ \ | ||
131 | (unsigned long)(entry)->expire_func ^ \ | ||
132 | (unsigned long)(entry)->pid ) & TSTAT_HASH_MASK) | ||
133 | |||
134 | #define tstat_hashentry(entry) (tstat_hash_table + __tstat_hashfn(entry)) | ||
135 | |||
136 | static struct entry *tstat_hash_table[TSTAT_HASH_SIZE] __read_mostly; | ||
137 | |||
138 | static void reset_entries(void) | ||
139 | { | ||
140 | nr_entries = 0; | ||
141 | memset(entries, 0, sizeof(entries)); | ||
142 | memset(tstat_hash_table, 0, sizeof(tstat_hash_table)); | ||
143 | atomic_set(&overflow_count, 0); | ||
144 | } | ||
145 | |||
146 | static struct entry *alloc_entry(void) | ||
147 | { | ||
148 | if (nr_entries >= MAX_ENTRIES) | ||
149 | return NULL; | ||
150 | |||
151 | return entries + nr_entries++; | ||
152 | } | ||
153 | |||
154 | static int match_entries(struct entry *entry1, struct entry *entry2) | ||
155 | { | ||
156 | return entry1->timer == entry2->timer && | ||
157 | entry1->start_func == entry2->start_func && | ||
158 | entry1->expire_func == entry2->expire_func && | ||
159 | entry1->pid == entry2->pid; | ||
160 | } | ||
161 | |||
162 | /* | ||
163 | * Look up whether an entry matching this item is present | ||
164 | * in the hash already. Must be called with irqs off and the | ||
165 | * lookup lock held: | ||
166 | */ | ||
167 | static struct entry *tstat_lookup(struct entry *entry, char *comm) | ||
168 | { | ||
169 | struct entry **head, *curr, *prev; | ||
170 | |||
171 | head = tstat_hashentry(entry); | ||
172 | curr = *head; | ||
173 | |||
174 | /* | ||
175 | * The fastpath is when the entry is already hashed, | ||
176 | * we do this with the lookup lock held, but with the | ||
177 | * table lock not held: | ||
178 | */ | ||
179 | while (curr) { | ||
180 | if (match_entries(curr, entry)) | ||
181 | return curr; | ||
182 | |||
183 | curr = curr->next; | ||
184 | } | ||
185 | /* | ||
186 | * Slowpath: allocate, set up and link a new hash entry: | ||
187 | */ | ||
188 | prev = NULL; | ||
189 | curr = *head; | ||
190 | |||
191 | raw_spin_lock(&table_lock); | ||
192 | /* | ||
193 | * Make sure we have not raced with another CPU: | ||
194 | */ | ||
195 | while (curr) { | ||
196 | if (match_entries(curr, entry)) | ||
197 | goto out_unlock; | ||
198 | |||
199 | prev = curr; | ||
200 | curr = curr->next; | ||
201 | } | ||
202 | |||
203 | curr = alloc_entry(); | ||
204 | if (curr) { | ||
205 | *curr = *entry; | ||
206 | curr->count = 0; | ||
207 | curr->next = NULL; | ||
208 | memcpy(curr->comm, comm, TASK_COMM_LEN); | ||
209 | |||
210 | smp_mb(); /* Ensure that curr is initialized before insert */ | ||
211 | |||
212 | if (prev) | ||
213 | prev->next = curr; | ||
214 | else | ||
215 | *head = curr; | ||
216 | } | ||
217 | out_unlock: | ||
218 | raw_spin_unlock(&table_lock); | ||
219 | |||
220 | return curr; | ||
221 | } | ||
222 | |||
223 | /** | ||
224 | * timer_stats_update_stats - Update the statistics for a timer. | ||
225 | * @timer: pointer to either a timer_list or a hrtimer | ||
226 | * @pid: the pid of the task which set up the timer | ||
227 | * @startf: pointer to the function which did the timer setup | ||
228 | * @timerf: pointer to the timer callback function of the timer | ||
229 | * @comm: name of the process which set up the timer | ||
230 | * @tflags: The flags field of the timer | ||
231 | * | ||
232 | * When the timer is already registered, then the event counter is | ||
233 | * incremented. Otherwise the timer is registered in a free slot. | ||
234 | */ | ||
235 | void timer_stats_update_stats(void *timer, pid_t pid, void *startf, | ||
236 | void *timerf, char *comm, u32 tflags) | ||
237 | { | ||
238 | /* | ||
239 | * It doesn't matter which lock we take: | ||
240 | */ | ||
241 | raw_spinlock_t *lock; | ||
242 | struct entry *entry, input; | ||
243 | unsigned long flags; | ||
244 | |||
245 | if (likely(!timer_stats_active)) | ||
246 | return; | ||
247 | |||
248 | lock = &per_cpu(tstats_lookup_lock, raw_smp_processor_id()); | ||
249 | |||
250 | input.timer = timer; | ||
251 | input.start_func = startf; | ||
252 | input.expire_func = timerf; | ||
253 | input.pid = pid; | ||
254 | input.flags = tflags; | ||
255 | |||
256 | raw_spin_lock_irqsave(lock, flags); | ||
257 | if (!timer_stats_active) | ||
258 | goto out_unlock; | ||
259 | |||
260 | entry = tstat_lookup(&input, comm); | ||
261 | if (likely(entry)) | ||
262 | entry->count++; | ||
263 | else | ||
264 | atomic_inc(&overflow_count); | ||
265 | |||
266 | out_unlock: | ||
267 | raw_spin_unlock_irqrestore(lock, flags); | ||
268 | } | ||
269 | |||
270 | static void print_name_offset(struct seq_file *m, unsigned long addr) | ||
271 | { | ||
272 | char symname[KSYM_NAME_LEN]; | ||
273 | |||
274 | if (lookup_symbol_name(addr, symname) < 0) | ||
275 | seq_printf(m, "<%p>", (void *)addr); | ||
276 | else | ||
277 | seq_printf(m, "%s", symname); | ||
278 | } | ||
279 | |||
280 | static int tstats_show(struct seq_file *m, void *v) | ||
281 | { | ||
282 | struct timespec64 period; | ||
283 | struct entry *entry; | ||
284 | unsigned long ms; | ||
285 | long events = 0; | ||
286 | ktime_t time; | ||
287 | int i; | ||
288 | |||
289 | mutex_lock(&show_mutex); | ||
290 | /* | ||
291 | * If still active then calculate up to now: | ||
292 | */ | ||
293 | if (timer_stats_active) | ||
294 | time_stop = ktime_get(); | ||
295 | |||
296 | time = ktime_sub(time_stop, time_start); | ||
297 | |||
298 | period = ktime_to_timespec64(time); | ||
299 | ms = period.tv_nsec / 1000000; | ||
300 | |||
301 | seq_puts(m, "Timer Stats Version: v0.3\n"); | ||
302 | seq_printf(m, "Sample period: %ld.%03ld s\n", (long)period.tv_sec, ms); | ||
303 | if (atomic_read(&overflow_count)) | ||
304 | seq_printf(m, "Overflow: %d entries\n", atomic_read(&overflow_count)); | ||
305 | seq_printf(m, "Collection: %s\n", timer_stats_active ? "active" : "inactive"); | ||
306 | |||
307 | for (i = 0; i < nr_entries; i++) { | ||
308 | entry = entries + i; | ||
309 | if (entry->flags & TIMER_DEFERRABLE) { | ||
310 | seq_printf(m, "%4luD, %5d %-16s ", | ||
311 | entry->count, entry->pid, entry->comm); | ||
312 | } else { | ||
313 | seq_printf(m, " %4lu, %5d %-16s ", | ||
314 | entry->count, entry->pid, entry->comm); | ||
315 | } | ||
316 | |||
317 | print_name_offset(m, (unsigned long)entry->start_func); | ||
318 | seq_puts(m, " ("); | ||
319 | print_name_offset(m, (unsigned long)entry->expire_func); | ||
320 | seq_puts(m, ")\n"); | ||
321 | |||
322 | events += entry->count; | ||
323 | } | ||
324 | |||
325 | ms += period.tv_sec * 1000; | ||
326 | if (!ms) | ||
327 | ms = 1; | ||
328 | |||
329 | if (events && period.tv_sec) | ||
330 | seq_printf(m, "%ld total events, %ld.%03ld events/sec\n", | ||
331 | events, events * 1000 / ms, | ||
332 | (events * 1000000 / ms) % 1000); | ||
333 | else | ||
334 | seq_printf(m, "%ld total events\n", events); | ||
335 | |||
336 | mutex_unlock(&show_mutex); | ||
337 | |||
338 | return 0; | ||
339 | } | ||
340 | |||
341 | /* | ||
342 | * After a state change, make sure all concurrent lookup/update | ||
343 | * activities have stopped: | ||
344 | */ | ||
345 | static void sync_access(void) | ||
346 | { | ||
347 | unsigned long flags; | ||
348 | int cpu; | ||
349 | |||
350 | for_each_online_cpu(cpu) { | ||
351 | raw_spinlock_t *lock = &per_cpu(tstats_lookup_lock, cpu); | ||
352 | |||
353 | raw_spin_lock_irqsave(lock, flags); | ||
354 | /* nothing */ | ||
355 | raw_spin_unlock_irqrestore(lock, flags); | ||
356 | } | ||
357 | } | ||
358 | |||
359 | static ssize_t tstats_write(struct file *file, const char __user *buf, | ||
360 | size_t count, loff_t *offs) | ||
361 | { | ||
362 | char ctl[2]; | ||
363 | |||
364 | if (count != 2 || *offs) | ||
365 | return -EINVAL; | ||
366 | |||
367 | if (copy_from_user(ctl, buf, count)) | ||
368 | return -EFAULT; | ||
369 | |||
370 | mutex_lock(&show_mutex); | ||
371 | switch (ctl[0]) { | ||
372 | case '0': | ||
373 | if (timer_stats_active) { | ||
374 | timer_stats_active = 0; | ||
375 | time_stop = ktime_get(); | ||
376 | sync_access(); | ||
377 | } | ||
378 | break; | ||
379 | case '1': | ||
380 | if (!timer_stats_active) { | ||
381 | reset_entries(); | ||
382 | time_start = ktime_get(); | ||
383 | smp_mb(); | ||
384 | timer_stats_active = 1; | ||
385 | } | ||
386 | break; | ||
387 | default: | ||
388 | count = -EINVAL; | ||
389 | } | ||
390 | mutex_unlock(&show_mutex); | ||
391 | |||
392 | return count; | ||
393 | } | ||
394 | |||
395 | static int tstats_open(struct inode *inode, struct file *filp) | ||
396 | { | ||
397 | return single_open(filp, tstats_show, NULL); | ||
398 | } | ||
399 | |||
400 | static const struct file_operations tstats_fops = { | ||
401 | .open = tstats_open, | ||
402 | .read = seq_read, | ||
403 | .write = tstats_write, | ||
404 | .llseek = seq_lseek, | ||
405 | .release = single_release, | ||
406 | }; | ||
407 | |||
408 | void __init init_timer_stats(void) | ||
409 | { | ||
410 | int cpu; | ||
411 | |||
412 | for_each_possible_cpu(cpu) | ||
413 | raw_spin_lock_init(&per_cpu(tstats_lookup_lock, cpu)); | ||
414 | } | ||
415 | |||
416 | static int __init init_tstats_procfs(void) | ||
417 | { | ||
418 | struct proc_dir_entry *pe; | ||
419 | |||
420 | pe = proc_create("timer_stats", 0644, NULL, &tstats_fops); | ||
421 | if (!pe) | ||
422 | return -ENOMEM; | ||
423 | return 0; | ||
424 | } | ||
425 | __initcall(init_tstats_procfs); | ||
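For context only (not part of this commit): the file deleted above implemented the /proc/timer_stats interface described in its header comment, and tools such as powertop drove it exactly as that comment shows. A minimal userspace sketch of that usage follows; the program name and the 10-second sample period are arbitrary choices for illustration. Note that tstats_write() above accepts only a two-character write, which is why a digit plus a newline is written each time.

	/*
	 * Illustrative sketch, not part of this commit: start collection,
	 * wait one sample period, dump the report, then stop collection.
	 * Requires a pre-4.11 kernel with CONFIG_TIMER_STATS=y and root.
	 */
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		char line[256];
		FILE *f;

		f = fopen("/proc/timer_stats", "w");	/* start collection */
		if (!f)
			return 1;			/* no TIMER_STATS support */
		fputs("1\n", f);			/* exactly two bytes */
		fclose(f);

		sleep(10);				/* arbitrary sample period */

		f = fopen("/proc/timer_stats", "r");	/* dump what was collected */
		if (!f)
			return 1;
		while (fgets(line, sizeof(line), f))
			fputs(line, stdout);
		fclose(f);

		f = fopen("/proc/timer_stats", "w");	/* stop collection */
		if (f) {
			fputs("0\n", f);
			fclose(f);
		}
		return 0;
	}

Built with something like "cc -o timer_stats_dump timer_stats_dump.c" (hypothetical name) and run as root, this prints the same report shown by tstats_show() above.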
diff --git a/kernel/workqueue.c b/kernel/workqueue.c index 1d9fb6543a66..072cbc9b175d 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c | |||
@@ -1523,8 +1523,6 @@ static void __queue_delayed_work(int cpu, struct workqueue_struct *wq, | |||
1523 | return; | 1523 | return; |
1524 | } | 1524 | } |
1525 | 1525 | ||
1526 | timer_stats_timer_set_start_info(&dwork->timer); | ||
1527 | |||
1528 | dwork->wq = wq; | 1526 | dwork->wq = wq; |
1529 | dwork->cpu = cpu; | 1527 | dwork->cpu = cpu; |
1530 | timer->expires = jiffies + delay; | 1528 | timer->expires = jiffies + delay; |
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index eb9e9a7870fa..132af338d6dd 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug | |||
@@ -980,20 +980,6 @@ config DEBUG_TIMEKEEPING | |||
980 | 980 | ||
981 | If unsure, say N. | 981 | If unsure, say N. |
982 | 982 | ||
983 | config TIMER_STATS | ||
984 | bool "Collect kernel timers statistics" | ||
985 | depends on DEBUG_KERNEL && PROC_FS | ||
986 | help | ||
987 | If you say Y here, additional code will be inserted into the | ||
988 | timer routines to collect statistics about kernel timers being | ||
989 | reprogrammed. The statistics can be read from /proc/timer_stats. | ||
990 | The statistics collection is started by writing 1 to /proc/timer_stats, | ||
991 | writing 0 stops it. This feature is useful to collect information | ||
992 | about timer usage patterns in kernel and userspace. This feature | ||
993 | is lightweight if enabled in the kernel config but not activated | ||
994 | (it defaults to deactivated on bootup and will only be activated | ||
995 | if some application like powertop activates it explicitly). | ||
996 | |||
997 | config DEBUG_PREEMPT | 983 | config DEBUG_PREEMPT |
998 | bool "Debug preemptible kernel" | 984 | bool "Debug preemptible kernel" |
999 | depends on DEBUG_KERNEL && PREEMPT && TRACE_IRQFLAGS_SUPPORT | 985 | depends on DEBUG_KERNEL && PREEMPT && TRACE_IRQFLAGS_SUPPORT |
diff --git a/lib/timerqueue.c b/lib/timerqueue.c index adc6ee0a5126..4a720ed4fdaf 100644 --- a/lib/timerqueue.c +++ b/lib/timerqueue.c | |||
@@ -80,8 +80,7 @@ bool timerqueue_del(struct timerqueue_head *head, struct timerqueue_node *node) | |||
80 | if (head->next == node) { | 80 | if (head->next == node) { |
81 | struct rb_node *rbn = rb_next(&node->node); | 81 | struct rb_node *rbn = rb_next(&node->node); |
82 | 82 | ||
83 | head->next = rbn ? | 83 | head->next = rb_entry_safe(rbn, struct timerqueue_node, node); |
84 | rb_entry(rbn, struct timerqueue_node, node) : NULL; | ||
85 | } | 84 | } |
86 | rb_erase(&node->node, &head->head); | 85 | rb_erase(&node->node, &head->head); |
87 | RB_CLEAR_NODE(&node->node); | 86 | RB_CLEAR_NODE(&node->node); |
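The timerqueue hunk is a behavior-preserving cleanup: rb_entry_safe() folds the NULL check into the container_of-style lookup, so the old two-line ternary and the new single line are equivalent. Approximately, as defined in include/linux/rbtree.h (paraphrased here, not part of this diff):

	#define rb_entry_safe(ptr, type, member) \
		({ typeof(ptr) ____ptr = (ptr); \
		   ____ptr ? rb_entry(____ptr, type, member) : NULL; \
		})

Since rb_next() returns NULL when the deleted node was the last one, head->next ends up NULL in exactly the same cases as before.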