author		Randy Dunlap <randy.dunlap@oracle.com>	2008-03-10 20:16:32 -0400
committer	Greg Kroah-Hartman <gregkh@suse.de>	2008-04-21 00:46:51 -0400
commit		4b5ff469234b8ab5cd05f4a201cbb229896729d0 (patch)
tree		dc44c4e82be76ffc00cb981eb4606276fffa7e1e /Documentation/pci-error-recovery.txt
parent		3925e6fc1f774048404fdd910b0345b06c699eb4 (diff)
PCI: doc/pci: create Documentation/PCI/ and move files into it
Create Documentation/PCI/ and move PCI-related files to it.
Fix a few instances of trailing whitespace.
Update references to the new file locations.
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Diffstat (limited to 'Documentation/pci-error-recovery.txt')
-rw-r--r--	Documentation/pci-error-recovery.txt	396
1 files changed, 0 insertions, 396 deletions
diff --git a/Documentation/pci-error-recovery.txt b/Documentation/pci-error-recovery.txt
deleted file mode 100644
index 6650af432523..000000000000
--- a/Documentation/pci-error-recovery.txt
+++ /dev/null
@@ -1,396 +0,0 @@
1 | |||
2 | PCI Error Recovery | ||
3 | ------------------ | ||
4 | February 2, 2006 | ||
5 | |||
6 | Current document maintainer: | ||
7 | Linas Vepstas <linas@austin.ibm.com> | ||
8 | |||
9 | |||
10 | Many PCI bus controllers are able to detect a variety of hardware | ||
11 | PCI errors on the bus, such as parity errors on the data and address | ||
12 | busses, as well as SERR and PERR errors. Some of the more advanced | ||
13 | chipsets are able to deal with these errors; these include PCI-E chipsets, | ||
14 | and the PCI-host bridges found on IBM Power4 and Power5-based pSeries | ||
15 | boxes. A typical action taken is to disconnect the affected device, | ||
16 | halting all I/O to it. The goal of a disconnection is to avoid system | ||
17 | corruption; for example, to halt system memory corruption due to DMA's | ||
18 | to "wild" addresses. Typically, a reconnection mechanism is also | ||
19 | offered, so that the affected PCI device(s) are reset and put back | ||
20 | into working condition. The reset phase requires coordination | ||
21 | between the affected device drivers and the PCI controller chip. | ||
22 | This document describes a generic API for notifying device drivers | ||
23 | of a bus disconnection, and then performing error recovery. | ||
24 | This API is currently implemented in the 2.6.16 and later kernels. | ||
25 | |||
26 | Reporting and recovery is performed in several steps. First, when | ||
27 | a PCI hardware error has resulted in a bus disconnect, that event | ||
28 | is reported as soon as possible to all affected device drivers, | ||
29 | including multiple instances of a device driver on multi-function | ||
30 | cards. This allows device drivers to avoid deadlocking in spinloops, | ||
31 | waiting for some i/o-space register to change, when it never will. | ||
32 | It also gives the drivers a chance to defer incoming I/O as | ||
33 | needed. | ||
34 | |||
35 | Next, recovery is performed in several stages. Most of the complexity | ||
36 | is forced by the need to handle multi-function devices, that is, | ||
37 | devices that have multiple device drivers associated with them. | ||
38 | In the first stage, each driver is allowed to indicate what type | ||
39 | of reset it desires, the choices being a simple re-enabling of I/O | ||
40 | or requesting a hard reset (a full electrical #RST of the PCI card). | ||
41 | If any driver requests a full reset, that is what will be done. | ||
42 | |||
43 | After a full reset and/or a re-enabling of I/O, all drivers are | ||
44 | again notified, so that they may then perform any device setup/config | ||
45 | that may be required. After these have all completed, a final | ||
46 | "resume normal operations" event is sent out. | ||
47 | |||
48 | The biggest reason for choosing a kernel-based implementation rather | ||
49 | than a user-space implementation was the need to deal with bus | ||
50 | disconnects of PCI devices attached to storage media, and, in particular, | ||
51 | disconnects from devices holding the root file system. If the root | ||
52 | file system is disconnected, a user-space mechanism would have to go | ||
53 | through a large number of contortions to complete recovery. Almost all | ||
54 | of the current Linux file systems are not tolerant of disconnection | ||
55 | from/reconnection to their underlying block device. By contrast, | ||
56 | bus errors are easy to manage in the device driver. Indeed, most | ||
57 | device drivers already handle very similar recovery procedures; | ||
58 | for example, the SCSI-generic layer already provides significant | ||
59 | mechanisms for dealing with SCSI bus errors and SCSI bus resets. | ||
60 | |||
61 | |||
62 | Detailed Design | ||
63 | --------------- | ||
64 | Design and implementation details below, based on a chain of | ||
65 | public email discussions with Ben Herrenschmidt, circa 5 April 2005. | ||
66 | |||
67 | The error recovery API support is exposed to the driver in the form of | ||
68 | a structure of function pointers pointed to by a new field in struct | ||
69 | pci_driver. A driver that fails to provide the structure is "non-aware", | ||
70 | and the actual recovery steps taken are platform dependent. The | ||
71 | arch/powerpc implementation will simulate a PCI hotplug remove/add. | ||
72 | |||
73 | This structure has the form: | ||
74 | struct pci_error_handlers | ||
75 | { | ||
76 | int (*error_detected)(struct pci_dev *dev, enum pci_channel_state); | ||
77 | int (*mmio_enabled)(struct pci_dev *dev); | ||
78 | int (*link_reset)(struct pci_dev *dev); | ||
79 | int (*slot_reset)(struct pci_dev *dev); | ||
80 | void (*resume)(struct pci_dev *dev); | ||
81 | }; | ||
82 | |||
83 | The possible channel states are: | ||
84 | enum pci_channel_state { | ||
85 | pci_channel_io_normal, /* I/O channel is in normal state */ | ||
86 | pci_channel_io_frozen, /* I/O to channel is blocked */ | ||
87 | pci_channel_io_perm_failure, /* PCI card is dead */ | ||
88 | }; | ||
89 | |||
90 | Possible return values are: | ||
91 | enum pci_ers_result { | ||
92 | PCI_ERS_RESULT_NONE, /* no result/none/not supported in device driver */ | ||
93 | PCI_ERS_RESULT_CAN_RECOVER, /* Device driver can recover without slot reset */ | ||
94 | PCI_ERS_RESULT_NEED_RESET, /* Device driver wants slot to be reset. */ | ||
95 | PCI_ERS_RESULT_DISCONNECT, /* Device has completely failed, is unrecoverable */ | ||
96 | PCI_ERS_RESULT_RECOVERED, /* Device driver is fully recovered and operational */ | ||
97 | }; | ||
98 | |||
99 | A driver does not have to implement all of these callbacks; however, | ||
100 | if it implements any, it must implement error_detected(). If a callback | ||
101 | is not implemented, the corresponding feature is considered unsupported. | ||
102 | For example, if mmio_enabled() and resume() aren't there, then it | ||
103 | is assumed that the driver is not doing any direct recovery and requires | ||
a reset.  If link_reset() is not implemented, the card is assumed not
to care about link resets.  Typically a driver will want to know about
a slot_reset().

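As a purely illustrative sketch (the "foo" names below are hypothetical
and not taken from any real driver), a driver publishes these handlers
by pointing the err_handler field of its struct pci_driver at such a
table:

	static struct pci_error_handlers foo_err_handler = {
		.error_detected = foo_error_detected,	/* required */
		.mmio_enabled   = foo_mmio_enabled,
		.link_reset     = foo_link_reset,
		.slot_reset     = foo_slot_reset,
		.resume         = foo_resume,
	};

	static struct pci_driver foo_driver = {
		.name        = "foo",
		.id_table    = foo_pci_table,
		.probe       = foo_probe,
		.remove      = foo_remove,
		.err_handler = &foo_err_handler,
	};

Sketches of the individual foo_* callbacks appear under the
corresponding steps below.
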
The actual steps taken by a platform to recover from a PCI error
event will be platform-dependent, but will follow the general
sequence described below.

STEP 0: Error Event
-------------------
A PCI bus error is detected by the PCI hardware.  On powerpc, the slot
is isolated, in that all I/O is blocked: all reads return 0xffffffff,
all writes are ignored.


STEP 1: Notification
--------------------
Platform calls the error_detected() callback on every instance of
every driver affected by the error.

At this point, the device might not be accessible anymore, depending on
the platform (the slot will be isolated on powerpc).  The driver may
already have "noticed" the error because of a failing I/O, but this
is the proper "synchronization point", that is, it gives the driver
a chance to clean up, waiting for pending stuff (timers, whatever, etc...)
to complete; it can take semaphores, schedule, etc... everything but
touch the device.  Within this function and after it returns, the driver
shouldn't do any new IOs.  Called in task context.  This is sort of a
"quiesce" point.  See note about interrupts at the end of this doc.

All drivers participating in this system must implement this call.
The driver must return one of the following result codes:
	- PCI_ERS_RESULT_CAN_RECOVER:
	  Driver returns this if it thinks it might be able to recover
	  the HW by just banging IOs or if it wants to be given
	  a chance to extract some diagnostic information (see
	  mmio_enabled(), below).
	- PCI_ERS_RESULT_NEED_RESET:
	  Driver returns this if it can't recover without a hard
	  slot reset.
	- PCI_ERS_RESULT_DISCONNECT:
	  Driver returns this if it doesn't want to recover at all.

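A minimal, illustrative error_detected() for the hypothetical "foo"
driver, following the declarations shown earlier (foo_stop_io() is a
made-up helper standing in for whatever quiescing the real driver
needs):

	static int foo_error_detected(struct pci_dev *pdev,
				      enum pci_channel_state state)
	{
		/* The slot may already be isolated; do not touch the device. */
		if (state == pci_channel_io_perm_failure)
			return PCI_ERS_RESULT_DISCONNECT;	/* see STEP 6 */

		/* Quiesce: stop issuing new I/O and let pending work drain. */
		foo_stop_io(pdev);

		/* Ask for a hard reset.  A driver that only needs its MMIO
		 * space back could return PCI_ERS_RESULT_CAN_RECOVER instead. */
		return PCI_ERS_RESULT_NEED_RESET;
	}
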
The next step taken will depend on the result codes returned by the
drivers.

If all drivers on the segment/slot return PCI_ERS_RESULT_CAN_RECOVER,
then the platform should re-enable IOs on the slot (or do nothing in
particular, if the platform doesn't isolate slots), and recovery
proceeds to STEP 2 (MMIO Enabled).

If any driver requested a slot reset (by returning PCI_ERS_RESULT_NEED_RESET),
then recovery proceeds to STEP 4 (Slot Reset).

If the platform is unable to recover the slot, the next step
is STEP 6 (Permanent Failure).

>>> The current powerpc implementation assumes that a device driver will
>>> *not* schedule or semaphore in this routine; the current powerpc
>>> implementation uses one kernel thread to notify all devices;
>>> thus, if one device sleeps/schedules, all devices are affected.
>>> Doing better requires complex multi-threaded logic in the error
>>> recovery implementation (e.g. waiting for all notification threads
>>> to "join" before proceeding with recovery.)  This seems excessively
>>> complex and not worth implementing.

>>> The current powerpc implementation doesn't much care if the device
>>> attempts I/O at this point, or not.  I/O's will fail, returning
>>> a value of 0xff on read, and writes will be dropped.  If the device
>>> driver attempts more than 10K I/O's to a frozen adapter, it will
>>> assume that the device driver has gone into an infinite loop, and
>>> it will panic the kernel.  There doesn't seem to be any other
>>> way of stopping a device driver that insists on spinning on I/O.

STEP 2: MMIO Enabled
--------------------
The platform re-enables MMIO to the device (but typically not the
DMA), and then calls the mmio_enabled() callback on all affected
device drivers.

This is the "early recovery" call.  IOs are allowed again, but DMA is
not (hrm... to be discussed, I prefer not), with some restrictions.  This
is NOT a callback for the driver to start operations again, only to
peek/poke at the device, extract diagnostic information, if any, and
eventually do things like trigger a device local reset or some such,
but not restart operations.  This callback is made if all drivers on
a segment agree that they can try to recover and if no automatic link reset
was performed by the HW.  If the platform can't just re-enable IOs without
a slot reset or a link reset, it won't call this callback, and instead
will have gone directly to STEP 3 (Link Reset) or STEP 4 (Slot Reset).
>>> The following is proposed; no platform implements this yet:
>>> Proposal: All I/O's should be done _synchronously_ from within
>>> this callback, errors triggered by them will be returned via
>>> the normal pci_check_whatever() API, no new error_detected()
>>> callback will be issued due to an error happening here.  However,
>>> such an error might cause IOs to be re-blocked for the whole
>>> segment, and thus invalidate the recovery that other devices
>>> on the same segment might have done, forcing the whole segment
>>> into one of the next states, that is, link reset or slot reset.

The driver should return one of the following result codes:
	- PCI_ERS_RESULT_RECOVERED
	  Driver returns this if it thinks the device is fully
	  functional and thinks it is ready to start
	  normal driver operations again.  There is no
	  guarantee that the driver will actually be
	  allowed to proceed, as another driver on the
	  same segment might have failed and thus triggered a
	  slot reset on platforms that support it.

	- PCI_ERS_RESULT_NEED_RESET
	  Driver returns this if it thinks the device is not
	  recoverable in its current state and it needs a slot
	  reset to proceed.

	- PCI_ERS_RESULT_DISCONNECT
	  Same as above.  Total failure, no recovery even after
	  reset; driver dead.  (To be defined more precisely.)

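Continuing the hypothetical "foo" driver, a sketch of an mmio_enabled()
that only peeks at the hardware (the FOO_STATUS register and the
foo_dump_diagnostics() helper are invented for the example):

	static int foo_mmio_enabled(struct pci_dev *pdev)
	{
		struct foo_dev *foo = pci_get_drvdata(pdev);
		u32 status;

		/* MMIO works again: gather diagnostics, but do not restart
		 * normal operations or DMA yet. */
		status = ioread32(foo->regs + FOO_STATUS);
		if (status == 0xffffffff)	/* device still unresponsive */
			return PCI_ERS_RESULT_NEED_RESET;

		foo_dump_diagnostics(foo, status);
		return PCI_ERS_RESULT_RECOVERED;
	}
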
The next step taken depends on the results returned by the drivers.
If all drivers returned PCI_ERS_RESULT_RECOVERED, then the platform
proceeds to either STEP 3 (Link Reset) or to STEP 5 (Resume Operations).

If any driver returned PCI_ERS_RESULT_NEED_RESET, then the platform
proceeds to STEP 4 (Slot Reset).

>>> The current powerpc implementation does not implement this callback.


STEP 3: Link Reset
------------------
The platform resets the link, and then calls the link_reset() callback
on all affected device drivers.  This is a PCI-Express specific state
and is done whenever a non-fatal error has been detected that can be
"solved" by resetting the link.  This call informs the driver of the
reset and the driver should check to see if the device appears to be
in working condition.

The driver is not supposed to restart normal driver I/O operations
at this point.  It should limit itself to "probing" the device to
check its recoverability status.  If all is right, then the platform
will call resume() once all drivers have ack'd link_reset().

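For the hypothetical "foo" driver, link_reset() would be little more
than such a sanity probe (the FOO_ID register and FOO_EXPECTED_ID value
are invented):

	static int foo_link_reset(struct pci_dev *pdev)
	{
		struct foo_dev *foo = pci_get_drvdata(pdev);

		/* Probe only; normal I/O is not restarted here. */
		if (ioread32(foo->regs + FOO_ID) != FOO_EXPECTED_ID)
			return PCI_ERS_RESULT_NEED_RESET;

		return PCI_ERS_RESULT_RECOVERED;
	}
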
Result codes:
	(identical to STEP 2 (MMIO Enabled))

The platform then proceeds to either STEP 4 (Slot Reset) or STEP 5
(Resume Operations).

>>> The current powerpc implementation does not implement this callback.


STEP 4: Slot Reset
------------------
The platform performs a soft or hard reset of the device, and then
calls the slot_reset() callback.

A soft reset consists of asserting the adapter #RST line and then
restoring the PCI BAR's and PCI configuration header to a state
that is equivalent to what it would be after a fresh system
power-on followed by power-on BIOS/system firmware initialization.
If the platform supports PCI hotplug, then the reset might be
performed by toggling the slot electrical power off/on.

It is important for the platform to restore the PCI config space
to the "fresh poweron" state, rather than the "last state".  After
a slot reset, the device driver will almost always use its standard
device initialization routines, and an unusual config space setup
may result in hung devices, kernel panics, or silent data corruption.

This call gives drivers the chance to re-initialize the hardware
(re-download firmware, etc.).  At this point, the driver may assume
that the card is in a fresh state and is fully functional.  In
particular, interrupt generation should work normally.

Drivers should not yet restart normal I/O processing operations
at this point.  If all device drivers report success on this
callback, the platform will call resume() to complete the sequence,
and let the driver restart normal I/O processing.

A driver can still return a critical failure for this function if
it can't get the device operational after reset.  If the platform
previously tried a soft reset, it might now try a hard reset (power
cycle) and then call slot_reset() again.  If the device still can't
be recovered, there is nothing more that can be done; the platform
will typically report a "permanent failure" in such a case.  The
device will be considered "dead" in this case.

Drivers for multi-function cards will need to coordinate among
themselves as to which driver instance will perform any "one-shot"
or global device initialization.  For example, the Symbios sym53cxx2
driver performs device init only from PCI function 0:

+	if (PCI_FUNC(pdev->devfn) == 0)
+		sym_reset_scsi_bus(np, 0);

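A fuller, still hypothetical sketch of slot_reset() for the "foo"
driver (foo_init_hw() stands in for the driver's normal hardware
bring-up path):

	static int foo_slot_reset(struct pci_dev *pdev)
	{
		struct foo_dev *foo = pci_get_drvdata(pdev);

		/* The card is back in a fresh, power-on-like state. */
		if (pci_enable_device(pdev)) {
			dev_err(&pdev->dev, "cannot re-enable device after slot reset\n");
			return PCI_ERS_RESULT_DISCONNECT;
		}
		pci_set_master(pdev);

		/* Re-program the device: re-download firmware, restore
		 * driver-private settings, re-arm interrupts, etc. */
		if (foo_init_hw(foo))
			return PCI_ERS_RESULT_DISCONNECT;

		/* Do not restart I/O yet; wait for resume(). */
		return PCI_ERS_RESULT_RECOVERED;
	}
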
Result codes:
	- PCI_ERS_RESULT_DISCONNECT
	  Same as above.

Platform proceeds either to STEP 5 (Resume Operations) or STEP 6 (Permanent
Failure).

>>> The current powerpc implementation does not currently try a
>>> power-cycle reset if the driver returned PCI_ERS_RESULT_DISCONNECT.
>>> However, it probably should.


STEP 5: Resume Operations
-------------------------
The platform will call the resume() callback on all affected device
drivers if all drivers on the segment have returned
PCI_ERS_RESULT_RECOVERED from one of the 3 previous callbacks.
The goal of this callback is to tell the driver to restart activity,
that everything is back and running.  This callback does not return
a result code.

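The matching resume() for the hypothetical "foo" driver is typically
just the point where queues are re-opened (foo_start_io() is a made-up
counterpart to the quiesce helper used earlier):

	static void foo_resume(struct pci_dev *pdev)
	{
		struct foo_dev *foo = pci_get_drvdata(pdev);

		/* Recovery is complete: accept and issue new I/O again. */
		foo_start_io(foo);
	}
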
At this point, if a new error happens, the platform will start
a new error recovery sequence.

STEP 6: Permanent Failure
-------------------------
A "permanent failure" has occurred, and the platform cannot recover
the device.  The platform will call error_detected() with a
pci_channel_state value of pci_channel_io_perm_failure.

The device driver should, at this point, assume the worst.  It should
cancel all pending I/O, refuse all new I/O, returning -EIO to
higher layers.  The device driver should then clean up all of its
memory and remove itself from kernel operations, much as it would
during system shutdown.

The platform will typically notify the system operator of the
permanent failure in some way.  If the device is hotplug-capable,
the operator will probably want to remove and replace the device.
Note, however, not all failures are truly "permanent".  Some are
caused by over-heating, some by a poorly seated card.  Many
PCI error events are caused by software bugs, e.g. DMA's to
wild addresses or bogus split transactions due to programming
errors.  See the discussion in powerpc/eeh-pci-error-recovery.txt
for additional detail on real-life experience of the causes of
software errors.


Conclusion; General Remarks
---------------------------
The way those callbacks are called is platform policy.  A platform with
no slot reset capability may want to just "ignore" drivers that can't
recover (disconnect them) and try to let other cards on the same segment
recover.  Keep in mind that in most real life cases, though, there will
be only one driver per segment.

Now, a note about interrupts.  If you get an interrupt and your
device is dead or has been isolated, there is a problem :)
The current policy is to turn this into a platform policy.
That is, the recovery API only requires that:

 - There is no guarantee that interrupt delivery can proceed from any
   device on the segment starting from the error detection and until the
   resume callback is sent, at which point interrupts are expected to be
   fully operational.

 - There is no guarantee that interrupt delivery is stopped, that is,
   a driver that gets an interrupt after detecting an error, or that detects
   an error within the interrupt handler such that it prevents proper
   ack'ing of the interrupt (and thus removal of the source) should just
   return IRQ_NONE.  It's up to the platform to deal with that
   condition, typically by masking the IRQ source for the duration of
   the error handling.  It is expected that the platform "knows" which
   interrupts are routed to error-management capable slots and can deal
   with temporarily disabling that IRQ number during error processing (this
   isn't terribly complex).  That means some IRQ latency for other devices
   sharing the interrupt, but there is simply no other way.  High end
   platforms aren't supposed to share interrupts between many devices
   anyway :)

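To illustrate that policy with the hypothetical "foo" driver (the
FOO_ISR register is invented, the handler assumes the current
two-argument request_irq() prototype, and pci_channel_offline() is a
helper provided by later kernels to test a device's error_state):

	static irqreturn_t foo_interrupt(int irq, void *data)
	{
		struct foo_dev *foo = data;
		u32 isr = ioread32(foo->regs + FOO_ISR);

		/* An isolated or dead device reads as all ones and cannot
		 * be acked; let the platform mask the line. */
		if (isr == 0xffffffff || pci_channel_offline(foo->pdev))
			return IRQ_NONE;

		/* ... normal interrupt handling and ack'ing ... */
		return IRQ_HANDLED;
	}
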
>>> Implementation details for the powerpc platform are discussed in
>>> the file Documentation/powerpc/eeh-pci-error-recovery.txt

>>> As of this writing, there are six device drivers with patches
>>> implementing error recovery.  Not all of these patches are in
>>> mainline yet.  These may be used as "examples":
>>>
>>> drivers/scsi/ipr.c
>>> drivers/scsi/sym53c8xx_2
>>> drivers/net/e100.c
>>> drivers/net/e1000
>>> drivers/net/ixgb
>>> drivers/net/s2io.c

The End
-------