NOTE: ksymoops is useless on 2.6. Please use the Oops in its original format
(from dmesg, etc). Ignore any references in this or other docs to "decoding
the Oops" or "running it through ksymoops". If you post an Oops from 2.6 that
has been run through ksymoops, people will just tell you to repost it.

Quick Summary
-------------

Find the Oops and send it to the maintainer of the kernel area that seems to be
involved with the problem. Don't worry too much about getting the wrong person.
If you are unsure, send it to the person responsible for the code relevant to
what you were doing. If it occurs repeatably, try to describe how to recreate
it. That's worth even more than the Oops itself.

If you are totally stumped as to whom to send the report, send it to
linux-kernel@vger.kernel.org. Thanks for your help in making Linux as
stable as humanly possible.

Where is the Oops?
------------------

Normally the Oops text is read from the kernel buffers by klogd and
handed to syslogd, which writes it to a syslog file, typically
/var/log/messages (this depends on /etc/syslog.conf). Sometimes klogd dies,
in which case you can run dmesg > file to read the data from the kernel
buffers and save it. Alternatively, you can cat /proc/kmsg > file; however,
you have to break in to stop the transfer, since kmsg is a "never ending
file". If the machine has crashed so badly that you cannot enter commands
or the disk is not available, then you have three options :-

(1) Hand copy the text from the screen and type it in after the machine
    has restarted. Messy, but it is the only option if you have not
    planned for a crash.

(2) Boot with a serial console (see Documentation/serial-console.txt),
    run a null modem to a second machine and capture the output there
    using your favourite communication program. Minicom works well; a
    minimal setup is sketched just after this list.

(3) Patch the kernel with one of the crash dump patches. These save
    data to a floppy disk or video rom or a swap partition. None of
    these are standard kernel patches, so you have to find and apply
    them yourself. Search kernel archives for kmsgdump, lkcd and
    oops+smram.
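
For option (2), a minimal capture setup might look like the following
sketch. The device name, baud rate and console= parameters are
assumptions; adjust them to your hardware.

    # On the crashing machine, boot with a kernel command line like:
    #     console=ttyS0,115200 console=tty0
    # On the second machine, log everything arriving over the null
    # modem cable (minicom's capture feature also works):
    stty -F /dev/ttyS0 115200 raw
    cat /dev/ttyS0 > oops.txt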


Full Information
----------------

NOTE: the message from Linus below applies to the 2.4 kernel. I have
preserved it for historical reasons, and because some of the information
in it still applies. In particular, please ignore any references to
ksymoops.

From: Linus Torvalds <torvalds@osdl.org>

How to track down an Oops.. [originally a mail to linux-kernel]

The main trick is having 5 years of experience with those pesky oops
messages ;-)

Actually, there are things you can do that make this easier. I have two
separate approaches:

    gdb /usr/src/linux/vmlinux
    gdb> disassemble <offending_function>

That's the easy way to find the problem, at least if the bug-report is
well made (like this one was - run through ksymoops to get the
information of which function, and the offset within that function, it
happened in).

Oh, it helps if the report happens on a kernel that is compiled with the
same compiler and similar setups.
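
For example, if the report places EIP at do_foo+0x5a (a hypothetical
function name and offset, purely for illustration), the session might
look like:

    gdb /usr/src/linux/vmlinux
    gdb> disassemble do_foo
    gdb> list *(do_foo+0x5a)    # map the offset back to a source line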

The other thing to do is disassemble the "Code:" part of the bug report:
ksymoops will do this too with the correct tools, but if you don't have
the tools you can just do a silly program:

    char str[] = "\xXX\xXX\xXX...";
    int main(void) { return 0; }

and compile it with gcc -g and then do "disassemble str" (where the "XX"
stuff are the values reported by the Oops - you can just cut-and-paste
and do a replace of spaces to "\x" - that's what I do, as I'm too lazy
to write a program to automate this all).
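
Spelled out with the first few "Code:" bytes from the example report
further down in this document (substitute the bytes from your own Oops),
the trick looks like this:

    /* oops-code.c - container for the "Code:" bytes from the Oops. */
    char str[] = "\xc7\x00\x05\x00\x00\x00\xeb\x08";
    int main(void) { return 0; }

    gcc -g -o oops-code oops-code.c
    gdb ./oops-code
    gdb> disassemble str
    gdb> x/8i &str      # newer gdb may insist on an explicit address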

Finally, if you want to see where the code comes from, you can do

    cd /usr/src/linux
    make fs/buffer.s    # or whatever file the bug happened in

and then you get a better idea of what happens than with the gdb
disassembly.
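
If the object file has already been built, disassembling it with
interleaved source gives a similar view without generating the .s file
(this assumes the kernel was compiled with debugging information):

    objdump -d -S fs/buffer.o | less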

Now, the trick is just then to combine all the data you have: the C
sources (and general knowledge of what it _should_ do), the assembly
listing and the code disassembly (and additionally the register dump you
also get from the "oops" message - that can be useful to see _what_ the
corrupted pointers were, and when you have the assembler listing you can
also match the other registers to whatever C expressions they were used
for).

Essentially, you just look at what doesn't match (in this case it was the
"Code" disassembly that didn't match with what the compiler generated).
Then you need to find out _why_ they don't match. Often it's simple - you
see that the code uses a NULL pointer and then you look at the code and
wonder how the NULL pointer got there, and if it's a valid thing to do
you just check against it..

Now, if somebody gets the idea that this is time-consuming and requires
some small amount of concentration, you're right. Which is why I will
mostly just ignore any panic reports that don't have the symbol table
info etc looked up: it simply gets too hard to look it up (I have some
programs to search for specific patterns in the kernel code segment, and
sometimes I have been able to look up those kinds of panics too, but
that really requires pretty good knowledge of the kernel just to be able
to pick out the right sequences etc..)

_Sometimes_ it happens that I just see the disassembled code sequence
from the panic, and I know immediately where it's coming from. That's when
I get worried that I've been doing this for too long ;-)

        Linus


---------------------------------------------------------------------------
Notes on Oops tracing with klogd:

In order to help Linus and the other kernel developers, substantial
support for processing protection faults has been incorporated into
klogd. For full support of address resolution, at least version 1.3-pl3
of the sysklogd package should be used.

When a protection fault occurs, the klogd daemon automatically
translates important addresses in the kernel log messages to their
symbolic equivalents. This translated kernel message is then
forwarded through whatever reporting mechanism klogd is using. The
protection fault message can simply be cut out of the message files
and forwarded to the kernel developers.

Two types of address resolution are performed by klogd. The first is
static translation and the second is dynamic translation. Static
translation uses the System.map file in much the same manner that
ksymoops does. In order to do static translation the klogd daemon
must be able to find a system map file at daemon initialization time.
See the klogd man page for information on how klogd searches for map
files.
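
For example, to point klogd at an explicit map file rather than letting
it search the default locations (the path here is an assumption; use the
map file matching your running kernel):

    klogd -k /boot/System.map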

Dynamic address translation is important when kernel loadable modules
are being used. Since memory for kernel modules is allocated from the
kernel's dynamic memory pools there are no fixed locations for either
the start of the module or for functions and symbols in the module.

The kernel supports system calls which allow a program to determine
which modules are loaded and their location in memory. Using these
system calls the klogd daemon builds a symbol table which can be used
to debug a protection fault which occurs in a loadable kernel module.

At the very minimum klogd will provide the name of the module which
generated the protection fault. There may be additional symbolic
information available if the developer of the loadable module chose to
export symbol information from the module.

Since the kernel module environment can be dynamic there must be a
mechanism for notifying the klogd daemon when a change in module
environment occurs. There are command line options available which
allow klogd to signal the currently executing daemon that symbol
information should be refreshed. See the klogd manual page for more
information.
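
As a sketch of that signalling (option semantics as described in the
klogd(8) manual page; double-check your installed version):

    klogd -i    # ask the running daemon to reload module symbols
    klogd -I    # reload static kernel symbols as well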

A patch is included with the sysklogd distribution which modifies the
modules-2.0.0 package to automatically signal klogd whenever a module
is loaded or unloaded. Applying this patch provides essentially
seamless support for debugging protection faults which occur with
kernel loadable modules.

The following is an example of a protection fault in a loadable module
processed by klogd:
---------------------------------------------------------------------------
Aug 29 09:51:01 blizard kernel: Unable to handle kernel paging request at virtual address f15e97cc
Aug 29 09:51:01 blizard kernel: current->tss.cr3 = 0062d000, %cr3 = 0062d000
Aug 29 09:51:01 blizard kernel: *pde = 00000000
Aug 29 09:51:01 blizard kernel: Oops: 0002
Aug 29 09:51:01 blizard kernel: CPU:    0
Aug 29 09:51:01 blizard kernel: EIP:    0010:[oops:_oops+16/3868]
Aug 29 09:51:01 blizard kernel: EFLAGS: 00010212
Aug 29 09:51:01 blizard kernel: eax: 315e97cc   ebx: 003a6f80   ecx: 001be77b   edx: 00237c0c
Aug 29 09:51:01 blizard kernel: esi: 00000000   edi: bffffdb3   ebp: 00589f90   esp: 00589f8c
Aug 29 09:51:01 blizard kernel: ds: 0018   es: 0018   fs: 002b   gs: 002b   ss: 0018
Aug 29 09:51:01 blizard kernel: Process oops_test (pid: 3374, process nr: 21, stackpage=00589000)
Aug 29 09:51:01 blizard kernel: Stack: 315e97cc 00589f98 0100b0b4 bffffed4 0012e38e 00240c64 003a6f80 00000001
Aug 29 09:51:01 blizard kernel:        00000000 00237810 bfffff00 0010a7fa 00000003 00000001 00000000 bfffff00
Aug 29 09:51:01 blizard kernel:        bffffdb3 bffffed4 ffffffda 0000002b 0007002b 0000002b 0000002b 00000036
Aug 29 09:51:01 blizard kernel: Call Trace: [oops:_oops_ioctl+48/80] [_sys_ioctl+254/272] [_system_call+82/128]
Aug 29 09:51:01 blizard kernel: Code: c7 00 05 00 00 00 eb 08 90 90 90 90 90 90 90 90 89 ec 5d c3
---------------------------------------------------------------------------
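
Reading the EIP line above: "[oops:_oops+16/3868]" is klogd's translated
form, which in the usual symbol+offset/length notation means the fault
happened 16 bytes into the 3868-byte symbol _oops in the module "oops".
With the module's object file at hand, you can inspect that spot
directly:

    gdb oops.o
    gdb> disassemble _oops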

Dr. G.W. Wettstein           Oncology Research Div. Computing Facility
Roger Maris Cancer Center    INTERNET: greg@wind.rmcc.com
820 4th St. N.
Fargo, ND  58122
Phone: 701-234-7556


---------------------------------------------------------------------------
Tainted kernels:

Some oops reports contain the string 'Tainted: ' after the program
counter; this indicates that the kernel has been tainted by some
mechanism. The string is followed by a series of position-sensitive
characters, each representing a particular tainted value; an example
reading is given after the list below.

 1: 'G' if all modules loaded have a GPL or compatible license, 'P' if
    any proprietary module has been loaded. Modules without a
    MODULE_LICENSE or with a MODULE_LICENSE that is not recognised by
    insmod as GPL compatible are assumed to be proprietary.

 2: 'F' if any module was force loaded by insmod -f, ' ' if all
    modules were loaded normally.

 3: 'S' if the oops occurred on an SMP kernel running on hardware that
    hasn't been certified as safe to run multiprocessor.
    Currently this occurs only on various Athlons that are not
    SMP capable.
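
For example, an oops containing the string

    Tainted: PF

says that at least one proprietary module has been loaded ('P' in the
first position) and that some module was force loaded ('F' in the second
position), while the blank third position means the SMP-safety check has
not fired.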

The primary reason for the 'Tainted: ' string is to tell kernel
debuggers whether this is a clean kernel or whether anything unusual has
occurred. Tainting is permanent: even if an offending module is
unloaded, the tainted value remains to indicate that the kernel is not
trustworthy.