Diffstat (limited to 'Documentation/ftrace.txt')
-rw-r--r--   Documentation/ftrace.txt   | 1360
1 files changed, 1360 insertions, 0 deletions
diff --git a/Documentation/ftrace.txt b/Documentation/ftrace.txt
new file mode 100644
index 000000000000..f218f616ff6b
--- /dev/null
+++ b/Documentation/ftrace.txt
@@ -0,0 +1,1360 @@
1 | ftrace - Function Tracer | ||
2 | ======================== | ||
3 | |||
4 | Copyright 2008 Red Hat Inc. | ||
5 | Author: Steven Rostedt <srostedt@redhat.com> | ||
6 | License: The GNU Free Documentation License, Version 1.2 | ||
7 | Reviewers: Elias Oltmanns, Randy Dunlap, Andrew Morton, | ||
8 | John Kacur, and David Teigland. | ||
9 | |||
10 | Written for: 2.6.27-rc1 | ||
11 | |||
12 | Introduction | ||
13 | ------------ | ||
14 | |||
15 | Ftrace is an internal tracer designed to help developers and | ||
16 | system designers find out what is going on inside the kernel. | ||
17 | It can be used for debugging or analyzing latencies and performance | ||
18 | issues that take place outside of user-space. | ||
19 | |||
20 | Although ftrace is the function tracer, it also includes an | ||
21 | infrastructure that allows for other types of tracing. Some of the | ||
22 | tracers that are currently in ftrace include a tracer to trace | ||
23 | context switches, the time it takes for a high priority task to | ||
24 | run after it was woken up, the time interrupts are disabled, and | ||
25 | more (ftrace allows for tracer plugins, which means that the list of | ||
26 | tracers can always grow). | ||
27 | |||
28 | |||
29 | The File System | ||
30 | --------------- | ||
31 | |||
32 | Ftrace uses the debugfs file system to hold the control files as well | ||
33 | as the files to display output. | ||
34 | |||
35 | To mount the debugfs system: | ||
36 | |||
37 | # mkdir /debug | ||
38 | # mount -t debugfs nodev /debug | ||
39 | |||
40 | (Note: it is more common to mount at /sys/kernel/debug, but for simplicity | ||
41 | this document will use /debug) | ||
42 | |||
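To mount at the more common location instead, the same command works with
the other path, for example:

  # mount -t debugfs nodev /sys/kernel/debug
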
43 | That's it! (assuming that you have ftrace configured into your kernel) | ||
44 | |||
45 | After mounting the debugfs, you can see a directory called | ||
46 | "tracing". This directory contains the control and output files | ||
47 | of ftrace. Here is a list of some of the key files: | ||
48 | |||
49 | |||
50 | Note: all time values are in microseconds. | ||
51 | |||
52 | current_tracer : This is used to set or display the current tracer | ||
53 | that is configured. | ||
54 | |||
55 | available_tracers : This holds the different types of tracers that | ||
56 | have been compiled into the kernel. The tracers | ||
57 | listed here can be configured by echoing their name | ||
58 | into current_tracer. | ||
59 | |||
60 | tracing_enabled : This sets or displays whether the current_tracer | ||
61 | is activated and tracing or not. Echo 0 into this | ||
62 | file to disable the tracer or 1 to enable it. | ||
63 | |||
64 | trace : This file holds the output of the trace in a human readable | ||
65 | format (described below). | ||
66 | |||
67 | latency_trace : This file shows the same trace but the information | ||
68 | is organized more to display possible latencies | ||
69 | in the system (described below). | ||
70 | |||
71 | trace_pipe : The output is the same as the "trace" file but this | ||
72 | file is meant to be streamed with live tracing. | ||
73 | Reads from this file will block until new data | ||
74 | is retrieved. Unlike the "trace" and "latency_trace" | ||
75 | files, this file is a consumer. This means reading | ||
76 | from this file causes sequential reads to display | ||
77 | more current data. Once data is read from this | ||
78 | file, it is consumed, and will not be read | ||
79 | again with a sequential read. The "trace" and | ||
80 | "latency_trace" files are static, and if the | ||
81 | tracer is not adding more data, they will display | ||
82 | the same information every time they are read. | ||
83 | |||
84 | iter_ctrl : This file lets the user control the amount of data | ||
85 | that is displayed in one of the above output | ||
86 | files. | ||
87 | |||
88 | tracing_max_latency : Some of the tracers record the max latency. | ||
89 | For example, the maximum time for which interrupts | ||
90 | were disabled. This time is saved in this file. The | ||
91 | max trace will also be stored, and displayed by either | ||
92 | "trace" or "latency_trace". A new max trace will | ||
93 | only be recorded if the latency is greater than | ||
94 | the value in this file (in microseconds). | ||
95 | |||
96 | trace_entries : This sets or displays the number of trace | ||
97 | entries each CPU buffer can hold. The tracer buffers | ||
98 | are the same size for each CPU. The displayed number | ||
99 | is the size of the CPU buffer and not total size. The | ||
100 | trace buffers are allocated in pages (blocks of memory | ||
101 | that the kernel uses for allocation, usually 4 KB in size). | ||
102 | Since each entry is smaller than a page, if the last | ||
103 | allocated page has room for more entries than were | ||
104 | requested, the rest of the page is used to allocate | ||
105 | entries. | ||
106 | |||
107 | This can only be updated when the current_tracer | ||
108 | is set to "none". | ||
109 | |||
110 | NOTE: It is planned to change the allocated buffers | ||
111 | from being based on the number of possible CPUs to | ||
112 | the number of online CPUs. | ||
113 | |||
114 | tracing_cpumask : This is a mask that lets the user trace only | ||
115 | on the specified CPUs. The format is a hex string | ||
116 | representing the CPUs (see the example following this list). | ||
117 | |||
118 | set_ftrace_filter : When dynamic ftrace is configured in (see the | ||
119 | section below "dynamic ftrace"), the code is dynamically | ||
120 | modified (code text rewrite) to disable calling of the | ||
121 | function profiler (mcount). This lets tracing be configured | ||
122 | in with practically no performance overhead. A side effect | ||
123 | of this is the ability to enable or disable tracing of specific | ||
124 | functions. Echoing names of functions into this file | ||
125 | will limit the trace to only those functions. | ||
126 | |||
127 | set_ftrace_notrace: This has an effect opposite to that of | ||
128 | set_ftrace_filter. Any function that is added here will not | ||
129 | be traced. If a function exists in both set_ftrace_filter | ||
130 | and set_ftrace_notrace, the function will _not_ be traced. | ||
131 | |||
132 | available_filter_functions : When a function is encountered the first | ||
133 | time by the dynamic tracer, it is recorded and | ||
134 | later the call is converted into a nop. This file | ||
135 | lists the functions that have been recorded | ||
136 | by the dynamic tracer and these functions can | ||
137 | be used to set the ftrace filter by the above | ||
138 | "set_ftrace_filter" file. (See the section "dynamic ftrace" | ||
139 | below for more details). | ||
140 | |||
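As an example of the tracing_cpumask file mentioned above, restricting the
trace to CPUs 0 and 1 (on a hypothetical machine with at least two CPUs)
means writing a hex mask with bits 0 and 1 set:

  # echo 3 > /debug/tracing/tracing_cpumask

Reading the file back shows the mask currently in effect.
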
141 | |||
142 | The Tracers | ||
143 | ----------- | ||
144 | |||
145 | Here is the list of current tracers that may be configured. | ||
146 | |||
147 | ftrace - function tracer that uses mcount to trace all functions. | ||
148 | |||
149 | sched_switch - traces the context switches between tasks. | ||
150 | |||
151 | irqsoff - traces the areas that disable interrupts and saves | ||
152 | the trace with the longest max latency. | ||
153 | See tracing_max_latency. When a new max is recorded, | ||
154 | it replaces the old trace. It is best to view this | ||
155 | trace via the latency_trace file. | ||
156 | |||
157 | preemptoff - Similar to irqsoff but traces and records the amount of | ||
158 | time for which preemption is disabled. | ||
159 | |||
160 | preemptirqsoff - Similar to irqsoff and preemptoff, but traces and | ||
161 | records the largest time for which irqs and/or preemption | ||
162 | is disabled. | ||
163 | |||
164 | wakeup - Traces and records the max latency that it takes for | ||
165 | the highest priority task to get scheduled after | ||
166 | it has been woken up. | ||
167 | |||
168 | none - This is not a tracer. To remove all tracers from tracing | ||
169 | simply echo "none" into current_tracer. | ||
170 | |||
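To see which of the above tracers have actually been compiled into the
running kernel, read the available_tracers file described earlier; the
output is simply the list of configured tracers:

  # cat /debug/tracing/available_tracers
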
171 | |||
172 | Examples of using the tracer | ||
173 | ---------------------------- | ||
174 | |||
175 | Here are typical examples of using the tracers when controlling them only | ||
176 | with the debugfs interface (without using any user-land utilities). | ||
177 | |||
178 | Output format: | ||
179 | -------------- | ||
180 | |||
181 | Here is an example of the output format of the file "trace" | ||
182 | |||
183 | -------- | ||
184 | # tracer: ftrace | ||
185 | # | ||
186 | # TASK-PID CPU# TIMESTAMP FUNCTION | ||
187 | # | | | | | | ||
188 | bash-4251 [01] 10152.583854: path_put <-path_walk | ||
189 | bash-4251 [01] 10152.583855: dput <-path_put | ||
190 | bash-4251 [01] 10152.583855: _atomic_dec_and_lock <-dput | ||
191 | -------- | ||
192 | |||
193 | A header is printed with the name of the tracer that produced the trace; | ||
194 | in this case it is "ftrace". Then a header shows the format: the task | ||
195 | name "bash", the task PID "4251", the CPU that it was running on | ||
196 | "01", the timestamp in <secs>.<usecs> format, the function name that was | ||
197 | traced "path_put" and the parent function that called this function | ||
198 | "path_walk". The timestamp is the time at which the function was | ||
199 | entered. | ||
200 | |||
201 | The sched_switch tracer also includes tracing of task wakeups and | ||
202 | context switches. | ||
203 | |||
204 | ksoftirqd/1-7 [01] 1453.070013: 7:115:R + 2916:115:S | ||
205 | ksoftirqd/1-7 [01] 1453.070013: 7:115:R + 10:115:S | ||
206 | ksoftirqd/1-7 [01] 1453.070013: 7:115:R ==> 10:115:R | ||
207 | events/1-10 [01] 1453.070013: 10:115:S ==> 2916:115:R | ||
208 | kondemand/1-2916 [01] 1453.070013: 2916:115:S ==> 7:115:R | ||
209 | ksoftirqd/1-7 [01] 1453.070013: 7:115:S ==> 0:140:R | ||
210 | |||
211 | Wake ups are represented by a "+" and the context switches are shown as | ||
212 | "==>". The format is: | ||
213 | |||
214 | Context switches: | ||
215 | |||
216 | Previous task Next Task | ||
217 | |||
218 | <pid>:<prio>:<state> ==> <pid>:<prio>:<state> | ||
219 | |||
220 | Wake ups: | ||
221 | |||
222 | Current task Task waking up | ||
223 | |||
224 | <pid>:<prio>:<state> + <pid>:<prio>:<state> | ||
225 | |||
226 | The prio is the internal kernel priority, which is the inverse of the | ||
227 | priority that is usually displayed by user-space tools. A prio of zero | ||
228 | is the highest priority (user RT priority 99). Prio 100 starts the "nice" | ||
229 | priorities, with 100 being equal to nice -20 and 139 being nice 19. The | ||
230 | prio 140 is reserved for the idle task, the lowest priority thread (pid 0). | ||
231 | |||
232 | |||
233 | Latency trace format | ||
234 | -------------------- | ||
235 | |||
236 | For traces that display latency times, the latency_trace file gives | ||
237 | somewhat more information to see why a latency happened. Here is a typical | ||
238 | trace. | ||
239 | |||
240 | # tracer: irqsoff | ||
241 | # | ||
242 | irqsoff latency trace v1.1.5 on 2.6.26-rc8 | ||
243 | -------------------------------------------------------------------- | ||
244 | latency: 97 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) | ||
245 | ----------------- | ||
246 | | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0) | ||
247 | ----------------- | ||
248 | => started at: apic_timer_interrupt | ||
249 | => ended at: do_softirq | ||
250 | |||
251 | # _------=> CPU# | ||
252 | # / _-----=> irqs-off | ||
253 | # | / _----=> need-resched | ||
254 | # || / _---=> hardirq/softirq | ||
255 | # ||| / _--=> preempt-depth | ||
256 | # |||| / | ||
257 | # ||||| delay | ||
258 | # cmd pid ||||| time | caller | ||
259 | # \ / ||||| \ | / | ||
260 | <idle>-0 0d..1 0us+: trace_hardirqs_off_thunk (apic_timer_interrupt) | ||
261 | <idle>-0 0d.s. 97us : __do_softirq (do_softirq) | ||
262 | <idle>-0 0d.s1 98us : trace_hardirqs_on (do_softirq) | ||
263 | |||
264 | |||
265 | |||
266 | This shows that the current tracer is "irqsoff", tracing the time for which | ||
267 | interrupts were disabled. It gives the trace version and the version | ||
268 | of the kernel on which this was executed (2.6.26-rc8). Then it displays | ||
269 | the max latency in microsecs (97 us), the number of trace entries displayed | ||
270 | and the total number recorded (both are three: #3/3), and the type of | ||
271 | preemption that was used (PREEMPT). VP, KP, SP, and HP are always zero | ||
272 | and are reserved for later use. #P is the number of online CPUs (#P:2). | ||
273 | |||
274 | The task is the process that was running when the latency occurred. | ||
275 | (swapper pid: 0). | ||
276 | |||
277 | The start and stop (the functions in which the interrupts were disabled and | ||
278 | enabled respectively) that caused the latencies: | ||
279 | |||
280 | apic_timer_interrupt is where the interrupts were disabled. | ||
281 | do_softirq is where they were enabled again. | ||
282 | |||
283 | The next lines after the header are the trace itself. The header | ||
284 | explains which is which. | ||
285 | |||
286 | cmd: The name of the process in the trace. | ||
287 | |||
288 | pid: The PID of that process. | ||
289 | |||
290 | CPU#: The CPU which the process was running on. | ||
291 | |||
292 | irqs-off: 'd' interrupts are disabled. '.' otherwise. | ||
293 | |||
294 | need-resched: 'N' task need_resched is set, '.' otherwise. | ||
295 | |||
296 | hardirq/softirq: | ||
297 | 'H' - hard irq occurred inside a softirq. | ||
298 | 'h' - hard irq is running | ||
299 | 's' - soft irq is running | ||
300 | '.' - normal context. | ||
301 | |||
302 | preempt-depth: The level of preempt_disable nesting. | ||
303 | |||
304 | The above is mostly meaningful for kernel developers. | ||
305 | |||
306 | time: This differs from the trace file output. The trace file output | ||
307 | includes an absolute timestamp. The timestamp used by the | ||
308 | latency_trace file is relative to the start of the trace. | ||
309 | |||
310 | delay: This is just to help catch your eye a bit better (and it | ||
311 | still needs to be fixed to be relative only to the same CPU). | ||
312 | The marks are determined by the difference between this | ||
313 | trace entry and the next one. | ||
314 | '!' - greater than preempt_mark_thresh (default 100) | ||
315 | '+' - greater than 1 microsecond | ||
316 | ' ' - less than or equal to 1 microsecond. | ||
317 | |||
318 | The rest is the same as the 'trace' file. | ||
319 | |||
320 | |||
321 | iter_ctrl | ||
322 | --------- | ||
323 | |||
324 | The iter_ctrl file is used to control what gets printed in the trace | ||
325 | output. To see what is available, simply cat the file: | ||
326 | |||
327 | cat /debug/tracing/iter_ctrl | ||
328 | print-parent nosym-offset nosym-addr noverbose noraw nohex nobin \ | ||
329 | noblock nostacktrace nosched-tree | ||
330 | |||
331 | To disable one of the options, echo in the option prepended with "no". | ||
332 | |||
333 | echo noprint-parent > /debug/tracing/iter_ctrl | ||
334 | |||
335 | To enable an option, leave off the "no". | ||
336 | |||
337 | echo sym-offset > /debug/tracing/iter_ctrl | ||
338 | |||
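Reading the file back shows the result of the above commands; enabled
options appear without the "no" prefix:

  # cat /debug/tracing/iter_ctrl
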
339 | Here are the available options: | ||
340 | |||
341 | print-parent - On function traces, display the calling function | ||
342 | as well as the function being traced. | ||
343 | |||
344 | print-parent: | ||
345 | bash-4000 [01] 1477.606694: simple_strtoul <-strict_strtoul | ||
346 | |||
347 | noprint-parent: | ||
348 | bash-4000 [01] 1477.606694: simple_strtoul | ||
349 | |||
350 | |||
351 | sym-offset - Display not only the function name, but also the offset | ||
352 | in the function. For example, instead of seeing just | ||
353 | "ktime_get", you will see "ktime_get+0xb/0x20". | ||
354 | |||
355 | sym-offset: | ||
356 | bash-4000 [01] 1477.606694: simple_strtoul+0x6/0xa0 | ||
357 | |||
358 | sym-addr - this will display the function address as well as | ||
359 | the function name. | ||
360 | |||
361 | sym-addr: | ||
362 | bash-4000 [01] 1477.606694: simple_strtoul <c0339346> | ||
363 | |||
364 | verbose - This deals with the latency_trace file, showing each entry in a more verbose format: | ||
365 | |||
366 | bash 4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \ | ||
367 | (+0.000ms): simple_strtoul (strict_strtoul) | ||
368 | |||
369 | raw - This will display raw numbers. This option is best for use with | ||
370 | user applications that can translate the raw numbers better than | ||
371 | having it done in the kernel. | ||
372 | |||
373 | hex - Similar to raw, but the numbers will be in a hexadecimal format. | ||
374 | |||
375 | bin - This will print out the formats in raw binary. | ||
376 | |||
377 | block - TBD (needs update) | ||
378 | |||
379 | stacktrace - This is one of the options that changes the trace itself. | ||
380 | When a trace is recorded, so is the stack of functions. | ||
381 | This allows for back traces of trace sites. | ||
382 | |||
383 | sched-tree - TBD (any users??) | ||
384 | |||
385 | |||
386 | sched_switch | ||
387 | ------------ | ||
388 | |||
389 | This tracer simply records schedule switches. Here is an example | ||
390 | of how to use it. | ||
391 | |||
392 | # echo sched_switch > /debug/tracing/current_tracer | ||
393 | # echo 1 > /debug/tracing/tracing_enabled | ||
394 | # sleep 1 | ||
395 | # echo 0 > /debug/tracing/tracing_enabled | ||
396 | # cat /debug/tracing/trace | ||
397 | |||
398 | # tracer: sched_switch | ||
399 | # | ||
400 | # TASK-PID CPU# TIMESTAMP FUNCTION | ||
401 | # | | | | | | ||
402 | bash-3997 [01] 240.132281: 3997:120:R + 4055:120:R | ||
403 | bash-3997 [01] 240.132284: 3997:120:R ==> 4055:120:R | ||
404 | sleep-4055 [01] 240.132371: 4055:120:S ==> 3997:120:R | ||
405 | bash-3997 [01] 240.132454: 3997:120:R + 4055:120:S | ||
406 | bash-3997 [01] 240.132457: 3997:120:R ==> 4055:120:R | ||
407 | sleep-4055 [01] 240.132460: 4055:120:D ==> 3997:120:R | ||
408 | bash-3997 [01] 240.132463: 3997:120:R + 4055:120:D | ||
409 | bash-3997 [01] 240.132465: 3997:120:R ==> 4055:120:R | ||
410 | <idle>-0 [00] 240.132589: 0:140:R + 4:115:S | ||
411 | <idle>-0 [00] 240.132591: 0:140:R ==> 4:115:R | ||
412 | ksoftirqd/0-4 [00] 240.132595: 4:115:S ==> 0:140:R | ||
413 | <idle>-0 [00] 240.132598: 0:140:R + 4:115:S | ||
414 | <idle>-0 [00] 240.132599: 0:140:R ==> 4:115:R | ||
415 | ksoftirqd/0-4 [00] 240.132603: 4:115:S ==> 0:140:R | ||
416 | sleep-4055 [01] 240.133058: 4055:120:S ==> 3997:120:R | ||
417 | [...] | ||
418 | |||
419 | |||
420 | As discussed previously for this format, the header shows the name | ||
421 | of the tracer and the column layout. The "FUNCTION" column | ||
422 | is a misnomer here since it represents the wake ups and context | ||
423 | switches. | ||
424 | |||
425 | The sched_switch file only lists the wake ups (represented with '+') | ||
426 | and context switches ('==>') with the previous task or current task | ||
427 | first followed by the next task or task waking up. The format for both | ||
428 | of these is PID:KERNEL-PRIO:TASK-STATE. Remember that the KERNEL-PRIO | ||
429 | is the inverse of the actual priority with zero (0) being the highest | ||
430 | priority and the nice values starting at 100 (nice -20). Below is | ||
431 | a quick chart to map the kernel priority to user land priorities. | ||
432 | |||
433 | Kernel priority: 0 to 99 ==> user RT priority 99 to 0 | ||
434 | Kernel priority: 100 to 139 ==> user nice -20 to 19 | ||
435 | Kernel priority: 140 ==> idle task priority | ||
436 | |||
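For example, a task at nice 0 shows up with kernel priority 120, and a
task with a user RT priority of 50 shows up with kernel priority 49
(99 - 50).
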
437 | The task states are: | ||
438 | |||
439 | R - running : wants to run, may not actually be running | ||
440 | S - sleep : process is waiting to be woken up (handles signals) | ||
441 | D - disk sleep (uninterruptible sleep) : process must be woken up | ||
442 | (ignores signals) | ||
443 | T - stopped : process suspended | ||
444 | t - traced : process is being traced (with something like gdb) | ||
445 | Z - zombie : process waiting to be cleaned up | ||
446 | X - unknown | ||
447 | |||
448 | |||
449 | ftrace_enabled | ||
450 | -------------- | ||
451 | |||
452 | The tracers described below give different output depending | ||
453 | on whether or not the sysctl ftrace_enabled is set. To set ftrace_enabled, | ||
454 | one can either use the sysctl command or set it via the proc | ||
455 | file system interface. | ||
456 | |||
457 | sysctl kernel.ftrace_enabled=1 | ||
458 | |||
459 | or | ||
460 | |||
461 | echo 1 > /proc/sys/kernel/ftrace_enabled | ||
462 | |||
463 | To disable ftrace_enabled simply replace the '1' with '0' in | ||
464 | the above commands. | ||
465 | |||
466 | When ftrace_enabled is set, the tracers will also record the functions | ||
467 | that are executed within the trace. The description of each tracer | ||
468 | below also shows an example with ftrace_enabled set. | ||
469 | |||
470 | |||
471 | irqsoff | ||
472 | ------- | ||
473 | |||
474 | When interrupts are disabled, the CPU can not react to any other | ||
475 | external event (besides NMIs and SMIs). This prevents the timer | ||
476 | interrupt from triggering or the mouse interrupt from letting the | ||
477 | kernel know of a new mouse event. The result is added latency in the | ||
478 | reaction time. | ||
479 | |||
480 | The irqsoff tracer tracks the time for which interrupts are disabled. | ||
481 | When a new maximum latency is hit, the tracer saves the trace leading up | ||
482 | to that latency point so that every time a new maximum is reached, the old | ||
483 | saved trace is discarded and the new trace is saved. | ||
484 | |||
485 | To reset the maximum, echo 0 into tracing_max_latency. Here is an | ||
486 | example: | ||
487 | |||
488 | # echo irqsoff > /debug/tracing/current_tracer | ||
489 | # echo 0 > /debug/tracing/tracing_max_latency | ||
490 | # echo 1 > /debug/tracing/tracing_enabled | ||
491 | # ls -ltr | ||
492 | [...] | ||
493 | # echo 0 > /debug/tracing/tracing_enabled | ||
494 | # cat /debug/tracing/latency_trace | ||
495 | # tracer: irqsoff | ||
496 | # | ||
497 | irqsoff latency trace v1.1.5 on 2.6.26 | ||
498 | -------------------------------------------------------------------- | ||
499 | latency: 12 us, #3/3, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) | ||
500 | ----------------- | ||
501 | | task: bash-3730 (uid:0 nice:0 policy:0 rt_prio:0) | ||
502 | ----------------- | ||
503 | => started at: sys_setpgid | ||
504 | => ended at: sys_setpgid | ||
505 | |||
506 | # _------=> CPU# | ||
507 | # / _-----=> irqs-off | ||
508 | # | / _----=> need-resched | ||
509 | # || / _---=> hardirq/softirq | ||
510 | # ||| / _--=> preempt-depth | ||
511 | # |||| / | ||
512 | # ||||| delay | ||
513 | # cmd pid ||||| time | caller | ||
514 | # \ / ||||| \ | / | ||
515 | bash-3730 1d... 0us : _write_lock_irq (sys_setpgid) | ||
516 | bash-3730 1d..1 1us+: _write_unlock_irq (sys_setpgid) | ||
517 | bash-3730 1d..2 14us : trace_hardirqs_on (sys_setpgid) | ||
518 | |||
519 | |||
520 | Here we see that we had a latency of 12 microsecs (which is | ||
521 | very good). The _write_lock_irq in sys_setpgid disabled interrupts. | ||
522 | The difference between the 12 and the displayed timestamp 14us occurred | ||
523 | because the clock was incremented between the time of recording the max | ||
524 | latency and the time of recording the function that had that latency. | ||
525 | |||
526 | Note that the above example had ftrace_enabled not set. If we set | ||
527 | ftrace_enabled, we get a much larger output: | ||
528 | |||
529 | # tracer: irqsoff | ||
530 | # | ||
531 | irqsoff latency trace v1.1.5 on 2.6.26-rc8 | ||
532 | -------------------------------------------------------------------- | ||
533 | latency: 50 us, #101/101, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) | ||
534 | ----------------- | ||
535 | | task: ls-4339 (uid:0 nice:0 policy:0 rt_prio:0) | ||
536 | ----------------- | ||
537 | => started at: __alloc_pages_internal | ||
538 | => ended at: __alloc_pages_internal | ||
539 | |||
540 | # _------=> CPU# | ||
541 | # / _-----=> irqs-off | ||
542 | # | / _----=> need-resched | ||
543 | # || / _---=> hardirq/softirq | ||
544 | # ||| / _--=> preempt-depth | ||
545 | # |||| / | ||
546 | # ||||| delay | ||
547 | # cmd pid ||||| time | caller | ||
548 | # \ / ||||| \ | / | ||
549 | ls-4339 0...1 0us+: get_page_from_freelist (__alloc_pages_internal) | ||
550 | ls-4339 0d..1 3us : rmqueue_bulk (get_page_from_freelist) | ||
551 | ls-4339 0d..1 3us : _spin_lock (rmqueue_bulk) | ||
552 | ls-4339 0d..1 4us : add_preempt_count (_spin_lock) | ||
553 | ls-4339 0d..2 4us : __rmqueue (rmqueue_bulk) | ||
554 | ls-4339 0d..2 5us : __rmqueue_smallest (__rmqueue) | ||
555 | ls-4339 0d..2 5us : __mod_zone_page_state (__rmqueue_smallest) | ||
556 | ls-4339 0d..2 6us : __rmqueue (rmqueue_bulk) | ||
557 | ls-4339 0d..2 6us : __rmqueue_smallest (__rmqueue) | ||
558 | ls-4339 0d..2 7us : __mod_zone_page_state (__rmqueue_smallest) | ||
559 | ls-4339 0d..2 7us : __rmqueue (rmqueue_bulk) | ||
560 | ls-4339 0d..2 8us : __rmqueue_smallest (__rmqueue) | ||
561 | [...] | ||
562 | ls-4339 0d..2 46us : __rmqueue_smallest (__rmqueue) | ||
563 | ls-4339 0d..2 47us : __mod_zone_page_state (__rmqueue_smallest) | ||
564 | ls-4339 0d..2 47us : __rmqueue (rmqueue_bulk) | ||
565 | ls-4339 0d..2 48us : __rmqueue_smallest (__rmqueue) | ||
566 | ls-4339 0d..2 48us : __mod_zone_page_state (__rmqueue_smallest) | ||
567 | ls-4339 0d..2 49us : _spin_unlock (rmqueue_bulk) | ||
568 | ls-4339 0d..2 49us : sub_preempt_count (_spin_unlock) | ||
569 | ls-4339 0d..1 50us : get_page_from_freelist (__alloc_pages_internal) | ||
570 | ls-4339 0d..2 51us : trace_hardirqs_on (__alloc_pages_internal) | ||
571 | |||
572 | |||
573 | |||
574 | Here we traced a 50 microsecond latency. But we also see all the | ||
575 | functions that were called during that time. Note that by enabling | ||
576 | function tracing, we incur an added overhead. This overhead may | ||
577 | extend the latency times. But nevertheless, this trace has provided | ||
578 | some very helpful debugging information. | ||
579 | |||
580 | |||
581 | preemptoff | ||
582 | ---------- | ||
583 | |||
584 | When preemption is disabled, we may be able to receive interrupts but | ||
585 | the task cannot be preempted and a higher priority task must wait | ||
586 | for preemption to be enabled again before it can preempt a lower | ||
587 | priority task. | ||
588 | |||
589 | The preemptoff tracer traces the places that disable preemption. | ||
590 | Like the irqsoff tracer, it records the maximum latency for which preemption | ||
591 | was disabled. The control of the preemptoff tracer is much like that of | ||
592 | the irqsoff tracer. | ||
593 | |||
594 | # echo preemptoff > /debug/tracing/current_tracer | ||
595 | # echo 0 > /debug/tracing/tracing_max_latency | ||
596 | # echo 1 > /debug/tracing/tracing_enabled | ||
597 | # ls -ltr | ||
598 | [...] | ||
599 | # echo 0 > /debug/tracing/tracing_enabled | ||
600 | # cat /debug/tracing/latency_trace | ||
601 | # tracer: preemptoff | ||
602 | # | ||
603 | preemptoff latency trace v1.1.5 on 2.6.26-rc8 | ||
604 | -------------------------------------------------------------------- | ||
605 | latency: 29 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) | ||
606 | ----------------- | ||
607 | | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0) | ||
608 | ----------------- | ||
609 | => started at: do_IRQ | ||
610 | => ended at: __do_softirq | ||
611 | |||
612 | # _------=> CPU# | ||
613 | # / _-----=> irqs-off | ||
614 | # | / _----=> need-resched | ||
615 | # || / _---=> hardirq/softirq | ||
616 | # ||| / _--=> preempt-depth | ||
617 | # |||| / | ||
618 | # ||||| delay | ||
619 | # cmd pid ||||| time | caller | ||
620 | # \ / ||||| \ | / | ||
621 | sshd-4261 0d.h. 0us+: irq_enter (do_IRQ) | ||
622 | sshd-4261 0d.s. 29us : _local_bh_enable (__do_softirq) | ||
623 | sshd-4261 0d.s1 30us : trace_preempt_on (__do_softirq) | ||
624 | |||
625 | |||
626 | This trace has a few more changes. Preemption was disabled when an interrupt | ||
627 | came in (notice the 'h'), and was enabled again while doing a softirq | ||
628 | (notice the 's'). But we also see that interrupts were disabled both | ||
629 | when entering the preempt off section and when leaving it (the 'd'). | ||
630 | We do not know if interrupts were enabled in the meantime. | ||
631 | |||
632 | # tracer: preemptoff | ||
633 | # | ||
634 | preemptoff latency trace v1.1.5 on 2.6.26-rc8 | ||
635 | -------------------------------------------------------------------- | ||
636 | latency: 63 us, #87/87, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) | ||
637 | ----------------- | ||
638 | | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0) | ||
639 | ----------------- | ||
640 | => started at: remove_wait_queue | ||
641 | => ended at: __do_softirq | ||
642 | |||
643 | # _------=> CPU# | ||
644 | # / _-----=> irqs-off | ||
645 | # | / _----=> need-resched | ||
646 | # || / _---=> hardirq/softirq | ||
647 | # ||| / _--=> preempt-depth | ||
648 | # |||| / | ||
649 | # ||||| delay | ||
650 | # cmd pid ||||| time | caller | ||
651 | # \ / ||||| \ | / | ||
652 | sshd-4261 0d..1 0us : _spin_lock_irqsave (remove_wait_queue) | ||
653 | sshd-4261 0d..1 1us : _spin_unlock_irqrestore (remove_wait_queue) | ||
654 | sshd-4261 0d..1 2us : do_IRQ (common_interrupt) | ||
655 | sshd-4261 0d..1 2us : irq_enter (do_IRQ) | ||
656 | sshd-4261 0d..1 2us : idle_cpu (irq_enter) | ||
657 | sshd-4261 0d..1 3us : add_preempt_count (irq_enter) | ||
658 | sshd-4261 0d.h1 3us : idle_cpu (irq_enter) | ||
659 | sshd-4261 0d.h. 4us : handle_fasteoi_irq (do_IRQ) | ||
660 | [...] | ||
661 | sshd-4261 0d.h. 12us : add_preempt_count (_spin_lock) | ||
662 | sshd-4261 0d.h1 12us : ack_ioapic_quirk_irq (handle_fasteoi_irq) | ||
663 | sshd-4261 0d.h1 13us : move_native_irq (ack_ioapic_quirk_irq) | ||
664 | sshd-4261 0d.h1 13us : _spin_unlock (handle_fasteoi_irq) | ||
665 | sshd-4261 0d.h1 14us : sub_preempt_count (_spin_unlock) | ||
666 | sshd-4261 0d.h1 14us : irq_exit (do_IRQ) | ||
667 | sshd-4261 0d.h1 15us : sub_preempt_count (irq_exit) | ||
668 | sshd-4261 0d..2 15us : do_softirq (irq_exit) | ||
669 | sshd-4261 0d... 15us : __do_softirq (do_softirq) | ||
670 | sshd-4261 0d... 16us : __local_bh_disable (__do_softirq) | ||
671 | sshd-4261 0d... 16us+: add_preempt_count (__local_bh_disable) | ||
672 | sshd-4261 0d.s4 20us : add_preempt_count (__local_bh_disable) | ||
673 | sshd-4261 0d.s4 21us : sub_preempt_count (local_bh_enable) | ||
674 | sshd-4261 0d.s5 21us : sub_preempt_count (local_bh_enable) | ||
675 | [...] | ||
676 | sshd-4261 0d.s6 41us : add_preempt_count (__local_bh_disable) | ||
677 | sshd-4261 0d.s6 42us : sub_preempt_count (local_bh_enable) | ||
678 | sshd-4261 0d.s7 42us : sub_preempt_count (local_bh_enable) | ||
679 | sshd-4261 0d.s5 43us : add_preempt_count (__local_bh_disable) | ||
680 | sshd-4261 0d.s5 43us : sub_preempt_count (local_bh_enable_ip) | ||
681 | sshd-4261 0d.s6 44us : sub_preempt_count (local_bh_enable_ip) | ||
682 | sshd-4261 0d.s5 44us : add_preempt_count (__local_bh_disable) | ||
683 | sshd-4261 0d.s5 45us : sub_preempt_count (local_bh_enable) | ||
684 | [...] | ||
685 | sshd-4261 0d.s. 63us : _local_bh_enable (__do_softirq) | ||
686 | sshd-4261 0d.s1 64us : trace_preempt_on (__do_softirq) | ||
687 | |||
688 | |||
689 | The above is an example of the preemptoff trace with ftrace_enabled | ||
690 | set. Here we see that interrupts were disabled the entire time. | ||
691 | The irq_enter code lets us know that we entered an interrupt (the 'h'). | ||
692 | Before that, the flags of the traced entries still show that we are not | ||
693 | in an interrupt, but we can see from the functions themselves (do_IRQ, | ||
694 | irq_enter) that this is not the case. | ||
695 | |||
696 | Notice that __do_softirq when called does not have a preempt_count. | ||
697 | It may seem that we missed a preempt enabling. What really happened | ||
698 | is that the preempt count is held on the thread's stack and we | ||
699 | switched to the softirq stack (4K stacks in effect). The code | ||
700 | does not copy the preempt count, but because interrupts are disabled, | ||
701 | we do not need to worry about it. Having a tracer like this is good | ||
702 | for letting people know what really happens inside the kernel. | ||
703 | |||
704 | |||
705 | preemptirqsoff | ||
706 | -------------- | ||
707 | |||
708 | Knowing the locations that have interrupts or preemption disabled for | ||
709 | the longest times is helpful. But sometimes we would like to know the | ||
710 | combined time for which either preemption and/or interrupts are disabled. | ||
711 | |||
712 | Consider the following code: | ||
713 | |||
714 | local_irq_disable(); | ||
715 | call_function_with_irqs_off(); | ||
716 | preempt_disable(); | ||
717 | call_function_with_irqs_and_preemption_off(); | ||
718 | local_irq_enable(); | ||
719 | call_function_with_preemption_off(); | ||
720 | preempt_enable(); | ||
721 | |||
722 | The irqsoff tracer will record the total length of | ||
723 | call_function_with_irqs_off() and | ||
724 | call_function_with_irqs_and_preemption_off(). | ||
725 | |||
726 | The preemptoff tracer will record the total length of | ||
727 | call_function_with_irqs_and_preemption_off() and | ||
728 | call_function_with_preemption_off(). | ||
729 | |||
730 | But neither will trace the full time for which interrupts and/or | ||
731 | preemption are disabled. This total time is the time during which we | ||
732 | can not schedule. To record this time, use the preemptirqsoff tracer. | ||
733 | |||
734 | Again, using this trace is much like the irqsoff and preemptoff tracers. | ||
735 | |||
736 | # echo preemptirqsoff > /debug/tracing/current_tracer | ||
737 | # echo 0 > /debug/tracing/tracing_max_latency | ||
738 | # echo 1 > /debug/tracing/tracing_enabled | ||
739 | # ls -ltr | ||
740 | [...] | ||
741 | # echo 0 > /debug/tracing/tracing_enabled | ||
742 | # cat /debug/tracing/latency_trace | ||
743 | # tracer: preemptirqsoff | ||
744 | # | ||
745 | preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8 | ||
746 | -------------------------------------------------------------------- | ||
747 | latency: 293 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) | ||
748 | ----------------- | ||
749 | | task: ls-4860 (uid:0 nice:0 policy:0 rt_prio:0) | ||
750 | ----------------- | ||
751 | => started at: apic_timer_interrupt | ||
752 | => ended at: __do_softirq | ||
753 | |||
754 | # _------=> CPU# | ||
755 | # / _-----=> irqs-off | ||
756 | # | / _----=> need-resched | ||
757 | # || / _---=> hardirq/softirq | ||
758 | # ||| / _--=> preempt-depth | ||
759 | # |||| / | ||
760 | # ||||| delay | ||
761 | # cmd pid ||||| time | caller | ||
762 | # \ / ||||| \ | / | ||
763 | ls-4860 0d... 0us!: trace_hardirqs_off_thunk (apic_timer_interrupt) | ||
764 | ls-4860 0d.s. 294us : _local_bh_enable (__do_softirq) | ||
765 | ls-4860 0d.s1 294us : trace_preempt_on (__do_softirq) | ||
766 | |||
767 | |||
768 | |||
769 | The trace_hardirqs_off_thunk is called from assembly on x86 when | ||
770 | interrupts are disabled in the assembly code. Without the function | ||
771 | tracing, we do not know if interrupts were enabled within the preemption | ||
772 | points. We do see that it started with preemption enabled. | ||
773 | |||
774 | Here is a trace with ftrace_enabled set: | ||
775 | |||
776 | |||
777 | # tracer: preemptirqsoff | ||
778 | # | ||
779 | preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8 | ||
780 | -------------------------------------------------------------------- | ||
781 | latency: 105 us, #183/183, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) | ||
782 | ----------------- | ||
783 | | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0) | ||
784 | ----------------- | ||
785 | => started at: write_chan | ||
786 | => ended at: __do_softirq | ||
787 | |||
788 | # _------=> CPU# | ||
789 | # / _-----=> irqs-off | ||
790 | # | / _----=> need-resched | ||
791 | # || / _---=> hardirq/softirq | ||
792 | # ||| / _--=> preempt-depth | ||
793 | # |||| / | ||
794 | # ||||| delay | ||
795 | # cmd pid ||||| time | caller | ||
796 | # \ / ||||| \ | / | ||
797 | ls-4473 0.N.. 0us : preempt_schedule (write_chan) | ||
798 | ls-4473 0dN.1 1us : _spin_lock (schedule) | ||
799 | ls-4473 0dN.1 2us : add_preempt_count (_spin_lock) | ||
800 | ls-4473 0d..2 2us : put_prev_task_fair (schedule) | ||
801 | [...] | ||
802 | ls-4473 0d..2 13us : set_normalized_timespec (ktime_get_ts) | ||
803 | ls-4473 0d..2 13us : __switch_to (schedule) | ||
804 | sshd-4261 0d..2 14us : finish_task_switch (schedule) | ||
805 | sshd-4261 0d..2 14us : _spin_unlock_irq (finish_task_switch) | ||
806 | sshd-4261 0d..1 15us : add_preempt_count (_spin_lock_irqsave) | ||
807 | sshd-4261 0d..2 16us : _spin_unlock_irqrestore (hrtick_set) | ||
808 | sshd-4261 0d..2 16us : do_IRQ (common_interrupt) | ||
809 | sshd-4261 0d..2 17us : irq_enter (do_IRQ) | ||
810 | sshd-4261 0d..2 17us : idle_cpu (irq_enter) | ||
811 | sshd-4261 0d..2 18us : add_preempt_count (irq_enter) | ||
812 | sshd-4261 0d.h2 18us : idle_cpu (irq_enter) | ||
813 | sshd-4261 0d.h. 18us : handle_fasteoi_irq (do_IRQ) | ||
814 | sshd-4261 0d.h. 19us : _spin_lock (handle_fasteoi_irq) | ||
815 | sshd-4261 0d.h. 19us : add_preempt_count (_spin_lock) | ||
816 | sshd-4261 0d.h1 20us : _spin_unlock (handle_fasteoi_irq) | ||
817 | sshd-4261 0d.h1 20us : sub_preempt_count (_spin_unlock) | ||
818 | [...] | ||
819 | sshd-4261 0d.h1 28us : _spin_unlock (handle_fasteoi_irq) | ||
820 | sshd-4261 0d.h1 29us : sub_preempt_count (_spin_unlock) | ||
821 | sshd-4261 0d.h2 29us : irq_exit (do_IRQ) | ||
822 | sshd-4261 0d.h2 29us : sub_preempt_count (irq_exit) | ||
823 | sshd-4261 0d..3 30us : do_softirq (irq_exit) | ||
824 | sshd-4261 0d... 30us : __do_softirq (do_softirq) | ||
825 | sshd-4261 0d... 31us : __local_bh_disable (__do_softirq) | ||
826 | sshd-4261 0d... 31us+: add_preempt_count (__local_bh_disable) | ||
827 | sshd-4261 0d.s4 34us : add_preempt_count (__local_bh_disable) | ||
828 | [...] | ||
829 | sshd-4261 0d.s3 43us : sub_preempt_count (local_bh_enable_ip) | ||
830 | sshd-4261 0d.s4 44us : sub_preempt_count (local_bh_enable_ip) | ||
831 | sshd-4261 0d.s3 44us : smp_apic_timer_interrupt (apic_timer_interrupt) | ||
832 | sshd-4261 0d.s3 45us : irq_enter (smp_apic_timer_interrupt) | ||
833 | sshd-4261 0d.s3 45us : idle_cpu (irq_enter) | ||
834 | sshd-4261 0d.s3 46us : add_preempt_count (irq_enter) | ||
835 | sshd-4261 0d.H3 46us : idle_cpu (irq_enter) | ||
836 | sshd-4261 0d.H3 47us : hrtimer_interrupt (smp_apic_timer_interrupt) | ||
837 | sshd-4261 0d.H3 47us : ktime_get (hrtimer_interrupt) | ||
838 | [...] | ||
839 | sshd-4261 0d.H3 81us : tick_program_event (hrtimer_interrupt) | ||
840 | sshd-4261 0d.H3 82us : ktime_get (tick_program_event) | ||
841 | sshd-4261 0d.H3 82us : ktime_get_ts (ktime_get) | ||
842 | sshd-4261 0d.H3 83us : getnstimeofday (ktime_get_ts) | ||
843 | sshd-4261 0d.H3 83us : set_normalized_timespec (ktime_get_ts) | ||
844 | sshd-4261 0d.H3 84us : clockevents_program_event (tick_program_event) | ||
845 | sshd-4261 0d.H3 84us : lapic_next_event (clockevents_program_event) | ||
846 | sshd-4261 0d.H3 85us : irq_exit (smp_apic_timer_interrupt) | ||
847 | sshd-4261 0d.H3 85us : sub_preempt_count (irq_exit) | ||
848 | sshd-4261 0d.s4 86us : sub_preempt_count (irq_exit) | ||
849 | sshd-4261 0d.s3 86us : add_preempt_count (__local_bh_disable) | ||
850 | [...] | ||
851 | sshd-4261 0d.s1 98us : sub_preempt_count (net_rx_action) | ||
852 | sshd-4261 0d.s. 99us : add_preempt_count (_spin_lock_irq) | ||
853 | sshd-4261 0d.s1 99us+: _spin_unlock_irq (run_timer_softirq) | ||
854 | sshd-4261 0d.s. 104us : _local_bh_enable (__do_softirq) | ||
855 | sshd-4261 0d.s. 104us : sub_preempt_count (_local_bh_enable) | ||
856 | sshd-4261 0d.s. 105us : _local_bh_enable (__do_softirq) | ||
857 | sshd-4261 0d.s1 105us : trace_preempt_on (__do_softirq) | ||
858 | |||
859 | |||
860 | This is a very interesting trace. It started with the preemption of | ||
861 | the ls task. We see that the task had the "need_resched" bit set | ||
862 | via the 'N' in the trace. Interrupts were disabled before the spin_lock | ||
863 | at the beginning of the trace. We see that a schedule took place to run | ||
864 | sshd. When the interrupts were enabled, we took an interrupt. | ||
865 | On return from the interrupt handler, the softirq ran. We took another | ||
866 | interrupt while running the softirq as we see from the capital 'H'. | ||
867 | |||
868 | |||
869 | wakeup | ||
870 | ------ | ||
871 | |||
872 | In a Real-Time environment it is very important to know how long it | ||
873 | takes from the time the highest priority task is woken up to the | ||
874 | time that it executes. This is also known as "schedule latency". | ||
875 | I stress the point that this is about RT tasks. It is also important | ||
876 | to know the scheduling latency of non-RT tasks, but for non-RT tasks | ||
877 | the average schedule latency is of more interest. Tools like | ||
878 | LatencyTop are more appropriate for such measurements. | ||
879 | |||
880 | Real-Time environments are interested in the worst case latency. | ||
881 | That is the longest latency it takes for something to happen, and | ||
882 | not the average. We can have a very fast scheduler that may only | ||
883 | have a large latency once in a while, but that would not work well | ||
884 | with Real-Time tasks. The wakeup tracer was designed to record | ||
885 | the worst case wakeups of RT tasks. Non-RT tasks are not recorded | ||
886 | because the tracer only records one worst case and tracing non-RT | ||
887 | tasks that are unpredictable will overwrite the worst case latency | ||
888 | of RT tasks. | ||
889 | |||
890 | Since this tracer only deals with RT tasks, we will run this slightly | ||
891 | differently than we did with the previous tracers. Instead of performing | ||
892 | an 'ls', we will run 'sleep 1' under 'chrt' which changes the | ||
893 | priority of the task. | ||
894 | |||
895 | # echo wakeup > /debug/tracing/current_tracer | ||
896 | # echo 0 > /debug/tracing/tracing_max_latency | ||
897 | # echo 1 > /debug/tracing/tracing_enabled | ||
898 | # chrt -f 5 sleep 1 | ||
899 | # echo 0 > /debug/tracing/tracing_enabled | ||
900 | # cat /debug/tracing/latency_trace | ||
901 | # tracer: wakeup | ||
902 | # | ||
903 | wakeup latency trace v1.1.5 on 2.6.26-rc8 | ||
904 | -------------------------------------------------------------------- | ||
905 | latency: 4 us, #2/2, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) | ||
906 | ----------------- | ||
907 | | task: sleep-4901 (uid:0 nice:0 policy:1 rt_prio:5) | ||
908 | ----------------- | ||
909 | |||
910 | # _------=> CPU# | ||
911 | # / _-----=> irqs-off | ||
912 | # | / _----=> need-resched | ||
913 | # || / _---=> hardirq/softirq | ||
914 | # ||| / _--=> preempt-depth | ||
915 | # |||| / | ||
916 | # ||||| delay | ||
917 | # cmd pid ||||| time | caller | ||
918 | # \ / ||||| \ | / | ||
919 | <idle>-0 1d.h4 0us+: try_to_wake_up (wake_up_process) | ||
920 | <idle>-0 1d..4 4us : schedule (cpu_idle) | ||
921 | |||
922 | |||
923 | |||
924 | Running this on an idle system, we see that it only took 4 microseconds | ||
925 | to perform the task switch. Note, since the trace marker in the | ||
926 | schedule is before the actual "switch", we stop the tracing when | ||
927 | the recorded task is about to schedule in. This may change if | ||
928 | we add a new marker at the end of the scheduler. | ||
929 | |||
930 | Notice that the recorded task is 'sleep' with the PID of 4901 and it | ||
931 | has an rt_prio of 5. This priority is the user-space priority and not | ||
932 | the internal kernel priority. The policy is 1 for SCHED_FIFO and 2 | ||
933 | for SCHED_RR. | ||
934 | |||
935 | Doing the same with chrt -r 5 and with ftrace_enabled set gives this trace: | ||
936 | |||
937 | # tracer: wakeup | ||
938 | # | ||
939 | wakeup latency trace v1.1.5 on 2.6.26-rc8 | ||
940 | -------------------------------------------------------------------- | ||
941 | latency: 50 us, #60/60, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) | ||
942 | ----------------- | ||
943 | | task: sleep-4068 (uid:0 nice:0 policy:2 rt_prio:5) | ||
944 | ----------------- | ||
945 | |||
946 | # _------=> CPU# | ||
947 | # / _-----=> irqs-off | ||
948 | # | / _----=> need-resched | ||
949 | # || / _---=> hardirq/softirq | ||
950 | # ||| / _--=> preempt-depth | ||
951 | # |||| / | ||
952 | # ||||| delay | ||
953 | # cmd pid ||||| time | caller | ||
954 | # \ / ||||| \ | / | ||
955 | ksoftirq-7 1d.H3 0us : try_to_wake_up (wake_up_process) | ||
956 | ksoftirq-7 1d.H4 1us : sub_preempt_count (marker_probe_cb) | ||
957 | ksoftirq-7 1d.H3 2us : check_preempt_wakeup (try_to_wake_up) | ||
958 | ksoftirq-7 1d.H3 3us : update_curr (check_preempt_wakeup) | ||
959 | ksoftirq-7 1d.H3 4us : calc_delta_mine (update_curr) | ||
960 | ksoftirq-7 1d.H3 5us : __resched_task (check_preempt_wakeup) | ||
961 | ksoftirq-7 1d.H3 6us : task_wake_up_rt (try_to_wake_up) | ||
962 | ksoftirq-7 1d.H3 7us : _spin_unlock_irqrestore (try_to_wake_up) | ||
963 | [...] | ||
964 | ksoftirq-7 1d.H2 17us : irq_exit (smp_apic_timer_interrupt) | ||
965 | ksoftirq-7 1d.H2 18us : sub_preempt_count (irq_exit) | ||
966 | ksoftirq-7 1d.s3 19us : sub_preempt_count (irq_exit) | ||
967 | ksoftirq-7 1..s2 20us : rcu_process_callbacks (__do_softirq) | ||
968 | [...] | ||
969 | ksoftirq-7 1..s2 26us : __rcu_process_callbacks (rcu_process_callbacks) | ||
970 | ksoftirq-7 1d.s2 27us : _local_bh_enable (__do_softirq) | ||
971 | ksoftirq-7 1d.s2 28us : sub_preempt_count (_local_bh_enable) | ||
972 | ksoftirq-7 1.N.3 29us : sub_preempt_count (ksoftirqd) | ||
973 | ksoftirq-7 1.N.2 30us : _cond_resched (ksoftirqd) | ||
974 | ksoftirq-7 1.N.2 31us : __cond_resched (_cond_resched) | ||
975 | ksoftirq-7 1.N.2 32us : add_preempt_count (__cond_resched) | ||
976 | ksoftirq-7 1.N.2 33us : schedule (__cond_resched) | ||
977 | ksoftirq-7 1.N.2 33us : add_preempt_count (schedule) | ||
978 | ksoftirq-7 1.N.3 34us : hrtick_clear (schedule) | ||
979 | ksoftirq-7 1dN.3 35us : _spin_lock (schedule) | ||
980 | ksoftirq-7 1dN.3 36us : add_preempt_count (_spin_lock) | ||
981 | ksoftirq-7 1d..4 37us : put_prev_task_fair (schedule) | ||
982 | ksoftirq-7 1d..4 38us : update_curr (put_prev_task_fair) | ||
983 | [...] | ||
984 | ksoftirq-7 1d..5 47us : _spin_trylock (tracing_record_cmdline) | ||
985 | ksoftirq-7 1d..5 48us : add_preempt_count (_spin_trylock) | ||
986 | ksoftirq-7 1d..6 49us : _spin_unlock (tracing_record_cmdline) | ||
987 | ksoftirq-7 1d..6 49us : sub_preempt_count (_spin_unlock) | ||
988 | ksoftirq-7 1d..4 50us : schedule (__cond_resched) | ||
989 | |||
990 | The interrupt went off while running ksoftirqd. This task runs at | ||
991 | SCHED_OTHER. Why didn't we see the 'N' set early? This may be | ||
992 | a harmless bug with x86_32 and 4K stacks. On x86_32 with 4K stacks | ||
993 | configured, the interrupt and softirq run with their own stack. | ||
994 | Some information is held on the top of the task's stack (need_resched | ||
995 | and preempt_count are both stored there). The setting of the NEED_RESCHED | ||
996 | bit is done directly to the task's stack, but the reading of the | ||
997 | NEED_RESCHED is done by looking at the current stack, which in this case | ||
998 | is the stack for the hard interrupt. This hides the fact that NEED_RESCHED | ||
999 | has been set. We do not see the 'N' until we switch back to the task's | ||
1000 | assigned stack. | ||
1001 | |||
1002 | ftrace | ||
1003 | ------ | ||
1004 | |||
1005 | ftrace is not only the name of the tracing infrastructure, but it | ||
1006 | is also the name of one of the tracers: the function | ||
1007 | tracer. Enabling the function tracer can be done from the | ||
1008 | debugfs file system. Make sure the ftrace_enabled sysctl is set, otherwise | ||
1009 | this tracer is a nop. | ||
1010 | |||
1011 | # sysctl kernel.ftrace_enabled=1 | ||
1012 | # echo ftrace > /debug/tracing/current_tracer | ||
1013 | # echo 1 > /debug/tracing/tracing_enabled | ||
1014 | # usleep 1 | ||
1015 | # echo 0 > /debug/tracing/tracing_enabled | ||
1016 | # cat /debug/tracing/trace | ||
1017 | # tracer: ftrace | ||
1018 | # | ||
1019 | # TASK-PID CPU# TIMESTAMP FUNCTION | ||
1020 | # | | | | | | ||
1021 | bash-4003 [00] 123.638713: finish_task_switch <-schedule | ||
1022 | bash-4003 [00] 123.638714: _spin_unlock_irq <-finish_task_switch | ||
1023 | bash-4003 [00] 123.638714: sub_preempt_count <-_spin_unlock_irq | ||
1024 | bash-4003 [00] 123.638715: hrtick_set <-schedule | ||
1025 | bash-4003 [00] 123.638715: _spin_lock_irqsave <-hrtick_set | ||
1026 | bash-4003 [00] 123.638716: add_preempt_count <-_spin_lock_irqsave | ||
1027 | bash-4003 [00] 123.638716: _spin_unlock_irqrestore <-hrtick_set | ||
1028 | bash-4003 [00] 123.638717: sub_preempt_count <-_spin_unlock_irqrestore | ||
1029 | bash-4003 [00] 123.638717: hrtick_clear <-hrtick_set | ||
1030 | bash-4003 [00] 123.638718: sub_preempt_count <-schedule | ||
1031 | bash-4003 [00] 123.638718: sub_preempt_count <-preempt_schedule | ||
1032 | bash-4003 [00] 123.638719: wait_for_completion <-__stop_machine_run | ||
1033 | bash-4003 [00] 123.638719: wait_for_common <-wait_for_completion | ||
1034 | bash-4003 [00] 123.638720: _spin_lock_irq <-wait_for_common | ||
1035 | bash-4003 [00] 123.638720: add_preempt_count <-_spin_lock_irq | ||
1036 | [...] | ||
1037 | |||
1038 | |||
1039 | Note: ftrace uses ring buffers to store the above entries. The newest data | ||
1040 | may overwrite the oldest data. Sometimes using echo to stop the trace | ||
1041 | is not sufficient because the tracing could have overwritten the data | ||
1042 | that you wanted to record. For this reason, it is sometimes better to | ||
1043 | disable tracing directly from a program. This allows you to stop the | ||
1044 | tracing at the point that you hit the part that you are interested in. | ||
1045 | To disable the tracing directly from a C program, something like the | ||
1046 | following code snippet can be used (it needs <fcntl.h> and <unistd.h>): | ||
1047 | |||
1048 | int trace_fd; | ||
1049 | [...] | ||
1050 | int main(int argc, char *argv[]) { | ||
1051 | [...] | ||
1052 | trace_fd = open("/debug/tracing/tracing_enabled", O_WRONLY); | ||
1053 | [...] | ||
1054 | if (condition_hit()) { | ||
1055 | write(trace_fd, "0", 1); | ||
1056 | } | ||
1057 | [...] | ||
1058 | } | ||
1059 | |||
1060 | Note: Here we hard-coded the path name. The debugfs mount is not | ||
1061 | guaranteed to be at /debug (and is more commonly at /sys/kernel/debug). | ||
1062 | For simple one-time traces, the above is sufficient. For anything else, | ||
1063 | a search through /proc/mounts may be needed to find where the debugfs | ||
1064 | file-system is mounted. | ||
1065 | |||
1066 | dynamic ftrace | ||
1067 | -------------- | ||
1068 | |||
1069 | If CONFIG_DYNAMIC_FTRACE is set, the system will run with | ||
1070 | virtually no overhead when function tracing is disabled. The way | ||
1071 | this works is that the mcount function call (placed at the start of | ||
1072 | every kernel function, produced by the -pg switch in gcc) starts | ||
1073 | off pointing to a simple return. (Enabling FTRACE will include the | ||
1074 | -pg switch in the compiling of the kernel.) | ||
1075 | |||
1076 | When dynamic ftrace is initialized, it calls kstop_machine to make | ||
1077 | the machine act like a uniprocessor so that it can freely modify code | ||
1078 | without worrying about other processors executing that same code. At | ||
1079 | initialization, the mcount calls are changed to call a "record_ip" | ||
1080 | function. After this, the first time a kernel function is called, | ||
1081 | it has the calling address saved in a hash table. | ||
1082 | |||
1083 | Later on the ftraced kernel thread is awoken and will again call | ||
1084 | kstop_machine if new functions have been recorded. The ftraced thread | ||
1085 | will change all calls to mcount to "nop". Just calling mcount | ||
1086 | and having mcount return has shown a 10% overhead. By converting | ||
1087 | it to a nop, there is no measurable overhead to the system. | ||
1088 | |||
1089 | One special side-effect to the recording of the functions being | ||
1090 | traced is that we can now selectively choose which functions we | ||
1091 | wish to trace and which ones we want the mcount calls to remain as | ||
1092 | nops. | ||
1093 | |||
1094 | Two files are used, one for enabling and one for disabling the tracing | ||
1095 | of specified functions. They are: | ||
1096 | |||
1097 | set_ftrace_filter | ||
1098 | |||
1099 | and | ||
1100 | |||
1101 | set_ftrace_notrace | ||
1102 | |||
1103 | The available functions that you can add to these files are listed | ||
1104 | in: | ||
1105 | |||
1106 | available_filter_functions | ||
1107 | |||
1108 | # cat /debug/tracing/available_filter_functions | ||
1109 | put_prev_task_idle | ||
1110 | kmem_cache_create | ||
1111 | pick_next_task_rt | ||
1112 | get_online_cpus | ||
1113 | pick_next_task_fair | ||
1114 | mutex_lock | ||
1115 | [...] | ||
1116 | |||
1117 | If I am only interested in sys_nanosleep and hrtimer_interrupt: | ||
1118 | |||
1119 | # echo sys_nanosleep hrtimer_interrupt \ | ||
1120 | > /debug/tracing/set_ftrace_filter | ||
1121 | # echo ftrace > /debug/tracing/current_tracer | ||
1122 | # echo 1 > /debug/tracing/tracing_enabled | ||
1123 | # usleep 1 | ||
1124 | # echo 0 > /debug/tracing/tracing_enabled | ||
1125 | # cat /debug/tracing/trace | ||
1126 | # tracer: ftrace | ||
1127 | # | ||
1128 | # TASK-PID CPU# TIMESTAMP FUNCTION | ||
1129 | # | | | | | | ||
1130 | usleep-4134 [00] 1317.070017: hrtimer_interrupt <-smp_apic_timer_interrupt | ||
1131 | usleep-4134 [00] 1317.070111: sys_nanosleep <-syscall_call | ||
1132 | <idle>-0 [00] 1317.070115: hrtimer_interrupt <-smp_apic_timer_interrupt | ||
1133 | |||
1134 | To see which functions are being traced, you can cat the file: | ||
1135 | |||
1136 | # cat /debug/tracing/set_ftrace_filter | ||
1137 | hrtimer_interrupt | ||
1138 | sys_nanosleep | ||
1139 | |||
1140 | |||
1141 | Perhaps this is not enough. The filters also allow simple wild cards. | ||
1142 | Only the following are currently available: | ||
1143 | |||
1144 | <match>* - will match functions that begin with <match> | ||
1145 | *<match> - will match functions that end with <match> | ||
1146 | *<match>* - will match functions that have <match> in it | ||
1147 | |||
1148 | These are the only wild cards which are supported. | ||
1149 | |||
1150 | <match>*<match> will not work. | ||
1151 | |||
1152 | # echo hrtimer_* > /debug/tracing/set_ftrace_filter | ||
1153 | |||
1154 | Produces: | ||
1155 | |||
1156 | # tracer: ftrace | ||
1157 | # | ||
1158 | # TASK-PID CPU# TIMESTAMP FUNCTION | ||
1159 | # | | | | | | ||
1160 | bash-4003 [00] 1480.611794: hrtimer_init <-copy_process | ||
1161 | bash-4003 [00] 1480.611941: hrtimer_start <-hrtick_set | ||
1162 | bash-4003 [00] 1480.611956: hrtimer_cancel <-hrtick_clear | ||
1163 | bash-4003 [00] 1480.611956: hrtimer_try_to_cancel <-hrtimer_cancel | ||
1164 | <idle>-0 [00] 1480.612019: hrtimer_get_next_event <-get_next_timer_interrupt | ||
1165 | <idle>-0 [00] 1480.612025: hrtimer_get_next_event <-get_next_timer_interrupt | ||
1166 | <idle>-0 [00] 1480.612032: hrtimer_get_next_event <-get_next_timer_interrupt | ||
1167 | <idle>-0 [00] 1480.612037: hrtimer_get_next_event <-get_next_timer_interrupt | ||
1168 | <idle>-0 [00] 1480.612382: hrtimer_get_next_event <-get_next_timer_interrupt | ||
1169 | |||
1170 | |||
1171 | Notice that we lost the sys_nanosleep. | ||
1172 | |||
1173 | # cat /debug/tracing/set_ftrace_filter | ||
1174 | hrtimer_run_queues | ||
1175 | hrtimer_run_pending | ||
1176 | hrtimer_init | ||
1177 | hrtimer_cancel | ||
1178 | hrtimer_try_to_cancel | ||
1179 | hrtimer_forward | ||
1180 | hrtimer_start | ||
1181 | hrtimer_reprogram | ||
1182 | hrtimer_force_reprogram | ||
1183 | hrtimer_get_next_event | ||
1184 | hrtimer_interrupt | ||
1185 | hrtimer_nanosleep | ||
1186 | hrtimer_wakeup | ||
1187 | hrtimer_get_remaining | ||
1188 | hrtimer_get_res | ||
1189 | hrtimer_init_sleeper | ||
1190 | |||
1191 | |||
1192 | This is because the '>' and '>>' act just like they do in bash. | ||
1193 | To rewrite the filters, use '>'. | ||
1194 | To append to the filters, use '>>'. | ||
1195 | |||
1196 | To clear out a filter so that all functions will be recorded again: | ||
1197 | |||
1198 | # echo > /debug/tracing/set_ftrace_filter | ||
1199 | # cat /debug/tracing/set_ftrace_filter | ||
1200 | # | ||
1201 | |||
1202 | Now, let us set a filter and then append to it. | ||
1203 | |||
1204 | # echo sys_nanosleep > /debug/tracing/set_ftrace_filter | ||
1205 | # cat /debug/tracing/set_ftrace_filter | ||
1206 | sys_nanosleep | ||
1207 | # echo hrtimer_* >> /debug/tracing/set_ftrace_filter | ||
1208 | # cat /debug/tracing/set_ftrace_filter | ||
1209 | hrtimer_run_queues | ||
1210 | hrtimer_run_pending | ||
1211 | hrtimer_init | ||
1212 | hrtimer_cancel | ||
1213 | hrtimer_try_to_cancel | ||
1214 | hrtimer_forward | ||
1215 | hrtimer_start | ||
1216 | hrtimer_reprogram | ||
1217 | hrtimer_force_reprogram | ||
1218 | hrtimer_get_next_event | ||
1219 | hrtimer_interrupt | ||
1220 | sys_nanosleep | ||
1221 | hrtimer_nanosleep | ||
1222 | hrtimer_wakeup | ||
1223 | hrtimer_get_remaining | ||
1224 | hrtimer_get_res | ||
1225 | hrtimer_init_sleeper | ||
1226 | |||
1227 | |||
1228 | The set_ftrace_notrace file prevents the listed functions from being traced. | ||
1229 | |||
1230 | # echo '*preempt*' '*lock*' > /debug/tracing/set_ftrace_notrace | ||
1231 | |||
1232 | Produces: | ||
1233 | |||
1234 | # tracer: ftrace | ||
1235 | # | ||
1236 | # TASK-PID CPU# TIMESTAMP FUNCTION | ||
1237 | # | | | | | | ||
1238 | bash-4043 [01] 115.281644: finish_task_switch <-schedule | ||
1239 | bash-4043 [01] 115.281645: hrtick_set <-schedule | ||
1240 | bash-4043 [01] 115.281645: hrtick_clear <-hrtick_set | ||
1241 | bash-4043 [01] 115.281646: wait_for_completion <-__stop_machine_run | ||
1242 | bash-4043 [01] 115.281647: wait_for_common <-wait_for_completion | ||
1243 | bash-4043 [01] 115.281647: kthread_stop <-stop_machine_run | ||
1244 | bash-4043 [01] 115.281648: init_waitqueue_head <-kthread_stop | ||
1245 | bash-4043 [01] 115.281648: wake_up_process <-kthread_stop | ||
1246 | bash-4043 [01] 115.281649: try_to_wake_up <-wake_up_process | ||
1247 | |||
1248 | We can see that there's no more lock or preempt tracing. | ||
1249 | |||
1250 | ftraced | ||
1251 | ------- | ||
1252 | |||
1253 | As mentioned above, when dynamic ftrace is configured in, a kernel | ||
1254 | thread wakes up once a second and checks to see if there are mcount | ||
1255 | calls that need to be converted into nops. If there are not any, then | ||
1256 | it simply goes back to sleep. But if there are some, it will call | ||
1257 | kstop_machine to convert the calls to nops. | ||
1258 | |||
1259 | There may be a case in which you do not want this added latency. | ||
1260 | Perhaps you are doing some audio recording and this activity might | ||
1261 | cause skips in the playback. There is an interface to disable | ||
1262 | and enable the "ftraced" kernel thread. | ||
1263 | |||
1264 | # echo 0 > /debug/tracing/ftraced_enabled | ||
1265 | |||
1266 | This will disable the calling of kstop_machine to update the | ||
1267 | mcount calls to nops. Remember that there is a large overhead | ||
1268 | to calling mcount. Without this kernel thread converting the calls | ||
1269 | to nops, that overhead will remain. | ||
1270 | |||
1271 | If there are recorded calls to mcount, any write to the ftraced_enabled | ||
1272 | file will cause the kstop_machine to run. This means that a | ||
1273 | user can manually perform the updates when they want to by simply | ||
1274 | echoing a '0' into the ftraced_enabled file. | ||
1275 | |||
1276 | The updates are also done at the beginning of enabling a tracer | ||
1277 | that uses ftrace function recording. | ||
1278 | |||
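To turn the ftraced kernel thread back on, write a '1' into the same file:

  # echo 1 > /debug/tracing/ftraced_enabled
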
1279 | |||
1280 | trace_pipe | ||
1281 | ---------- | ||
1282 | |||
1283 | The trace_pipe outputs the same content as the trace file, but the effect | ||
1284 | on the tracing is different. Every read from trace_pipe is consumed. | ||
1285 | This means that subsequent reads will be different. The trace | ||
1286 | is live. | ||
1287 | |||
1288 | # echo ftrace > /debug/tracing/current_tracer | ||
1289 | # cat /debug/tracing/trace_pipe > /tmp/trace.out & | ||
1290 | [1] 4153 | ||
1291 | # echo 1 > /debug/tracing/tracing_enabled | ||
1292 | # usleep 1 | ||
1293 | # echo 0 > /debug/tracing/tracing_enabled | ||
1294 | # cat /debug/tracing/trace | ||
1295 | # tracer: ftrace | ||
1296 | # | ||
1297 | # TASK-PID CPU# TIMESTAMP FUNCTION | ||
1298 | # | | | | | | ||
1299 | |||
1300 | # | ||
1301 | # cat /tmp/trace.out | ||
1302 | bash-4043 [00] 41.267106: finish_task_switch <-schedule | ||
1303 | bash-4043 [00] 41.267106: hrtick_set <-schedule | ||
1304 | bash-4043 [00] 41.267107: hrtick_clear <-hrtick_set | ||
1305 | bash-4043 [00] 41.267108: wait_for_completion <-__stop_machine_run | ||
1306 | bash-4043 [00] 41.267108: wait_for_common <-wait_for_completion | ||
1307 | bash-4043 [00] 41.267109: kthread_stop <-stop_machine_run | ||
1308 | bash-4043 [00] 41.267109: init_waitqueue_head <-kthread_stop | ||
1309 | bash-4043 [00] 41.267110: wake_up_process <-kthread_stop | ||
1310 | bash-4043 [00] 41.267110: try_to_wake_up <-wake_up_process | ||
1311 | bash-4043 [00] 41.267111: select_task_rq_rt <-try_to_wake_up | ||
1312 | |||
1313 | |||
1314 | Note, reading the trace_pipe file will block until more input is added. | ||
1315 | By changing the tracer, trace_pipe will issue an EOF. That is why we | ||
1316 | needed to set the ftrace tracer _before_ cat-ing the trace_pipe file. | ||
1317 | |||
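For example, switching to another tracer (or to "none") will make the
background cat from the session above see an EOF and exit:

  # echo none > /debug/tracing/current_tracer
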
1318 | |||
1319 | trace entries | ||
1320 | ------------- | ||
1321 | |||
1322 | Having too much or not enough data can be troublesome in diagnosing | ||
1323 | an issue in the kernel. The file trace_entries is used to modify | ||
1324 | the size of the internal trace buffers. The number listed | ||
1325 | is the number of entries that can be recorded per CPU. To know | ||
1326 | the full size, multiply the number of possible CPUs by the | ||
1327 | number of entries. | ||
1328 | |||
1329 | # cat /debug/tracing/trace_entries | ||
1330 | 65620 | ||
1331 | |||
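For example, with 65620 entries per CPU on a machine with two possible
CPUs, the trace buffers hold 2 * 65620 = 131240 entries in total.
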
1332 | Note, to modify this, you must have tracing completely disabled. To do that, | ||
1333 | echo "none" into the current_tracer. If the current_tracer is not set | ||
1334 | to "none", an EINVAL error will be returned. | ||
1335 | |||
1336 | # echo none > /debug/tracing/current_tracer | ||
1337 | # echo 100000 > /debug/tracing/trace_entries | ||
1338 | # cat /debug/tracing/trace_entries | ||
1339 | 100045 | ||
1340 | |||
1341 | |||
1342 | Notice that we echoed in 100,000 but the size is 100,045. The entries | ||
1343 | are held in individual pages. It allocates the number of pages it takes | ||
1344 | to fulfill the request. If more entries fit on the last page, | ||
1345 | then they will be added. | ||
1346 | |||
1347 | # echo 1 > /debug/tracing/trace_entries | ||
1348 | # cat /debug/tracing/trace_entries | ||
1349 | 85 | ||
1350 | |||
1351 | This shows us that 85 entries can fit in a single page. | ||
1352 | |||
1353 | The number of pages which will be allocated is limited to a percentage | ||
1354 | of available memory. Allocating too much will produce an error. | ||
1355 | |||
1356 | # echo 1000000000000 > /debug/tracing/trace_entries | ||
1357 | -bash: echo: write error: Cannot allocate memory | ||
1358 | # cat /debug/tracing/trace_entries | ||
1359 | 85 | ||
1360 | |||