Diffstat (limited to 'Documentation/PCI')
-rw-r--r--  Documentation/PCI/00-INDEX                 |  12
-rw-r--r--  Documentation/PCI/PCIEBUS-HOWTO.txt        | 217
-rw-r--r--  Documentation/PCI/pci-error-recovery.txt   | 396
-rw-r--r--  Documentation/PCI/pci.txt                  | 646
-rw-r--r--  Documentation/PCI/pcieaer-howto.txt        | 253
5 files changed, 1524 insertions, 0 deletions
diff --git a/Documentation/PCI/00-INDEX b/Documentation/PCI/00-INDEX
new file mode 100644
index 000000000000..49f43946c6b6
--- /dev/null
+++ b/Documentation/PCI/00-INDEX
@@ -0,0 +1,12 @@
100-INDEX
2 - this file
3PCI-DMA-mapping.txt
4 - info for PCI drivers using DMA portably across all platforms
5PCIEBUS-HOWTO.txt
6 - a guide describing the PCI Express Port Bus driver
7pci-error-recovery.txt
8 - info on PCI error recovery
9pci.txt
10 - info on the PCI subsystem for device driver authors
11pcieaer-howto.txt
12 - the PCI Express Advanced Error Reporting Driver Guide HOWTO
diff --git a/Documentation/PCI/PCIEBUS-HOWTO.txt b/Documentation/PCI/PCIEBUS-HOWTO.txt
new file mode 100644
index 000000000000..9a07e38631b0
--- /dev/null
+++ b/Documentation/PCI/PCIEBUS-HOWTO.txt
@@ -0,0 +1,217 @@
1 The PCI Express Port Bus Driver Guide HOWTO
2 Tom L Nguyen tom.l.nguyen@intel.com
3 11/03/2004
4
51. About this guide
6
7This guide describes the basics of the PCI Express Port Bus driver
8and provides information on how to enable the service drivers to
9register/unregister with the PCI Express Port Bus Driver.
10
112. Copyright 2004 Intel Corporation
12
133. What is the PCI Express Port Bus Driver
14
15A PCI Express Port is a logical PCI-PCI Bridge structure. There
16are two types of PCI Express Port: the Root Port and the Switch
17Port. The Root Port originates a PCI Express link from a PCI Express
18Root Complex and the Switch Port connects PCI Express links to
19internal logical PCI buses. The Switch Port, which has its secondary
20bus representing the switch's internal routing logic, is called the
21switch's Upstream Port. The switch's Downstream Port bridges from the
22switch's internal routing bus to a bus representing the downstream
23PCI Express link from the PCI Express Switch.
24
25A PCI Express Port can provide up to four distinct functions,
26referred to in this document as services, depending on its port type.
27A PCI Express Port's services include native hotplug support (HP),
28power management event support (PME), advanced error reporting
29support (AER), and virtual channel support (VC). These services may
30be handled by a single complex driver or be individually distributed
31and handled by corresponding service drivers.
32
334. Why use the PCI Express Port Bus Driver?
34
35In existing Linux kernels, the Linux Device Driver Model allows a
36physical device to be handled by only a single driver. The PCI
37Express Port is a PCI-PCI Bridge device with multiple distinct
38services. To maintain a clean and simple solution each service
39may have its own software service driver. In this case several
40service drivers will compete for a single PCI-PCI Bridge device.
41For example, if the PCI Express Root Port native hotplug service
42driver is loaded first, it claims a PCI-PCI Bridge Root Port. The
43kernel therefore does not load other service drivers for that Root
44Port. In other words, it is impossible to have multiple service
45drivers load and run on a PCI-PCI Bridge device simultaneously
46using the current driver model.
47
48Enabling multiple service drivers to run simultaneously requires
49a PCI Express Port Bus driver, which manages all populated
50PCI Express Ports and distributes all provided service requests
51to the corresponding service drivers as required. Some key
52advantages of using the PCI Express Port Bus driver are listed below:
53
54 - Allow multiple service drivers to run simultaneously on
55 a PCI-PCI Bridge Port device.
56
57	- Allow service drivers to be implemented in an independent,
58	  staged approach.
59
60 - Allow one service driver to run on multiple PCI-PCI Bridge
61 Port devices.
62
63 - Manage and distribute resources of a PCI-PCI Bridge Port
64 device to requested service drivers.
65
665. Configuring the PCI Express Port Bus Driver vs. Service Drivers
67
685.1 Including the PCI Express Port Bus Driver Support into the Kernel
69
70Whether the PCI Express Port Bus driver is included depends on the
71kernel configuration. The kernel will automatically include the PCI
72Express Port Bus driver as a kernel driver whenever PCI Express
73support is enabled in the kernel config.
74
755.2 Enabling Service Driver Support
76
77PCI device drivers are implemented based on the Linux Device Driver
78Model. All service drivers are PCI device drivers. As discussed above,
79it is impossible to load any service driver once the kernel has loaded
80the PCI Express Port Bus Driver. Meeting the PCI Express Port Bus
81Driver Model requires some minimal changes to existing service drivers;
82these changes have no impact on the functionality of those drivers.
83
84A service driver is required to use the two APIs shown below to
85register its service with the PCI Express Port Bus driver (see
86sections 5.2.1 and 5.2.2). It is important that a service driver
87initializes the pcie_port_service_driver data structure, included in
88header file /include/linux/pcieport_if.h, before calling these APIs.
89Failure to do so will result in an identity mismatch, which prevents
90the PCI Express Port Bus driver from loading a service driver.
91
925.2.1 pcie_port_service_register
93
94int pcie_port_service_register(struct pcie_port_service_driver *new)
95
96This API replaces the Linux Driver Model's pci_module_init API. A
97service driver should always call pcie_port_service_register at
98module init. Note that after a service driver is loaded, calls
99such as pci_enable_device(dev) and pci_set_master(dev) are no longer
100necessary since these calls are executed by the PCI Port Bus driver.
101
1025.2.2 pcie_port_service_unregister
103
104void pcie_port_service_unregister(struct pcie_port_service_driver *new)
105
106pcie_port_service_unregister replaces the Linux Driver Model's
107pci_unregister_driver. It is always called by a service driver when
108its module exits.
109
1105.2.3 Sample Code
111
112Below is sample service driver code to initialize the port service
113driver data structure.
114
115static struct pcie_port_service_id service_id[] = { {
116 .vendor = PCI_ANY_ID,
117 .device = PCI_ANY_ID,
118 .port_type = PCIE_RC_PORT,
119 .service_type = PCIE_PORT_SERVICE_AER,
120 }, { /* end: all zeroes */ }
121};
122
123static struct pcie_port_service_driver root_aerdrv = {
124 .name = (char *)device_name,
125 .id_table = &service_id[0],
126
127 .probe = aerdrv_load,
128 .remove = aerdrv_unload,
129
130 .suspend = aerdrv_suspend,
131 .resume = aerdrv_resume,
132};
133
134Below is sample code for registering/unregistering a service
135driver.
136
137static int __init aerdrv_service_init(void)
138{
139 int retval = 0;
140
141 retval = pcie_port_service_register(&root_aerdrv);
142 if (!retval) {
143 /*
144 * FIX ME
145 */
146 }
147 return retval;
148}
149
150static void __exit aerdrv_service_exit(void)
151{
152 pcie_port_service_unregister(&root_aerdrv);
153}
154
155module_init(aerdrv_service_init);
156module_exit(aerdrv_service_exit);
157
1586. Possible Resource Conflicts
159
160Since all service drivers of a PCI-PCI Bridge Port device are
161allowed to run simultaneously, a few possible resource conflicts and
162their proposed solutions are listed below.
163
1646.1 MSI Vector Resource
165
166The MSI capability structure enables a device software driver to call
167pci_enable_msi to request MSI based interrupts. Once MSI interrupts
168are enabled on a device, they stay enabled until a device driver
169calls pci_disable_msi to disable MSI interrupts and revert to
170INTx emulation mode. Since service drivers of the same PCI-PCI Bridge
171port share the same physical device, an individual service driver
172calling pci_enable_msi/pci_disable_msi may result in unpredictable
173behavior. For example, suppose two service drivers run simultaneously
174on the same physical Root Port and both call pci_enable_msi to
175request MSI based interrupts. A service driver may not know whether
176any other service drivers have run on this Root Port. If either one
177of them calls pci_disable_msi, it puts the other service driver
178in the wrong interrupt mode.
179
180To avoid this situation, service drivers are not permitted to
181switch the interrupt mode of their device. The PCI Express Port Bus
182driver is responsible for determining the interrupt mode, and this
183should be transparent to service drivers. Service drivers need to know
184only the vector IRQ assigned to the field irq of struct pcie_device,
185which is passed in when the PCI Express Port Bus driver probes each
186service driver. Service drivers should use (struct pcie_device*)dev->irq
187when calling request_irq/free_irq. In addition, the interrupt mode is
188stored in the field interrupt_mode of struct pcie_device.
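
As an illustration, here is a minimal sketch of a service driver
probe/remove pair that uses the IRQ supplied in struct pcie_device.
The function and service names are hypothetical and error handling is
omitted for brevity (requires <linux/interrupt.h> and
/include/linux/pcieport_if.h):

static irqreturn_t my_service_irq(int irq, void *context)
{
	/* context is the struct pcie_device passed to request_irq() */
	return IRQ_HANDLED;
}

static int my_service_probe(struct pcie_device *dev,
			    const struct pcie_port_service_id *id)
{
	/*
	 * Use the IRQ assigned by the PCI Express Port Bus driver;
	 * do not call pci_enable_msi/pci_disable_msi here.
	 */
	return request_irq(dev->irq, my_service_irq, IRQF_SHARED,
			   "my_service", dev);
}

static void my_service_remove(struct pcie_device *dev)
{
	free_irq(dev->irq, dev);
}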
189
1906.2 MSI-X Vector Resources
191
192Similar to MSI, a device driver for an MSI-X capable device can
193call pci_enable_msix to request MSI-X interrupts. Service drivers
194are not permitted to switch the interrupt mode of their device. The
195PCI Express Port Bus driver is responsible for determining the
196interrupt mode, and this should be transparent to service drivers.
197Any attempt by a service driver to call pci_enable_msix/pci_disable_msix
198may result in unpredictable behavior. Service drivers should use
199(struct pcie_device*)dev->irq and call request_irq/free_irq.
200
2016.3 PCI Memory/IO Mapped Regions
202
203Service drivers for PCI Express Power Management (PME), Advanced
204Error Reporting (AER), Hot-Plug (HP) and Virtual Channel (VC) access
205PCI configuration space on the PCI Express port. In all cases the
206registers accessed are independent of each other. This patch assumes
207that all service drivers will be well behaved and not overwrite
208another service driver's configuration settings.
209
2106.4 PCI Config Registers
211
212Each service driver runs its PCI config operations on its own
213capability structure except the PCI Express capability structure,
214whose Root Control register and Device Control register are shared
215between PME and AER. This patch assumes that all service drivers
216will be well behaved and not overwrite another service driver's
217configuration settings.
diff --git a/Documentation/PCI/pci-error-recovery.txt b/Documentation/PCI/pci-error-recovery.txt
new file mode 100644
index 000000000000..6650af432523
--- /dev/null
+++ b/Documentation/PCI/pci-error-recovery.txt
@@ -0,0 +1,396 @@
1
2 PCI Error Recovery
3 ------------------
4 February 2, 2006
5
6 Current document maintainer:
7 Linas Vepstas <linas@austin.ibm.com>
8
9
10Many PCI bus controllers are able to detect a variety of hardware
11PCI errors on the bus, such as parity errors on the data and address
12busses, as well as SERR and PERR errors. Some of the more advanced
13chipsets are able to deal with these errors; these include PCI-E chipsets,
14and the PCI-host bridges found on IBM Power4 and Power5-based pSeries
15boxes. A typical action taken is to disconnect the affected device,
16halting all I/O to it. The goal of a disconnection is to avoid system
17corruption; for example, to halt system memory corruption due to DMA's
18to "wild" addresses. Typically, a reconnection mechanism is also
19offered, so that the affected PCI device(s) are reset and put back
20into working condition. The reset phase requires coordination
21between the affected device drivers and the PCI controller chip.
22This document describes a generic API for notifying device drivers
23of a bus disconnection, and then performing error recovery.
24This API is currently implemented in the 2.6.16 and later kernels.
25
26Reporting and recovery are performed in several steps. First, when
27a PCI hardware error has resulted in a bus disconnect, that event
28is reported as soon as possible to all affected device drivers,
29including multiple instances of a device driver on multi-function
30cards. This allows device drivers to avoid deadlocking in spinloops,
31waiting for some i/o-space register to change, when it never will.
32It also gives the drivers a chance to defer incoming I/O as
33needed.
34
35Next, recovery is performed in several stages. Most of the complexity
36is forced by the need to handle multi-function devices, that is,
37devices that have multiple device drivers associated with them.
38In the first stage, each driver is allowed to indicate what type
39of reset it desires, the choices being a simple re-enabling of I/O
40or requesting a hard reset (a full electrical #RST of the PCI card).
41If any driver requests a full reset, that is what will be done.
42
43After a full reset and/or a re-enabling of I/O, all drivers are
44again notified, so that they may then perform any device setup/config
45that may be required. After these have all completed, a final
46"resume normal operations" event is sent out.
47
48The biggest reason for choosing a kernel-based implementation rather
49than a user-space implementation was the need to deal with bus
50disconnects of PCI devices attached to storage media, and, in particular,
51disconnects from devices holding the root file system. If the root
52file system is disconnected, a user-space mechanism would have to go
53through a large number of contortions to complete recovery. Almost all
54of the current Linux file systems are not tolerant of disconnection
55from/reconnection to their underlying block device. By contrast,
56bus errors are easy to manage in the device driver. Indeed, most
57device drivers already handle very similar recovery procedures;
58for example, the SCSI-generic layer already provides significant
59mechanisms for dealing with SCSI bus errors and SCSI bus resets.
60
61
62Detailed Design
63---------------
64Design and implementation details below, based on a chain of
65public email discussions with Ben Herrenschmidt, circa 5 April 2005.
66
67The error recovery API support is exposed to the driver in the form of
68a structure of function pointers pointed to by a new field in struct
69pci_driver. A driver that fails to provide the structure is "non-aware",
70and the actual recovery steps taken are platform dependent. The
71arch/powerpc implementation will simulate a PCI hotplug remove/add.
72
73This structure has the form:
74struct pci_error_handlers
75{
76 int (*error_detected)(struct pci_dev *dev, enum pci_channel_state);
77 int (*mmio_enabled)(struct pci_dev *dev);
78 int (*link_reset)(struct pci_dev *dev);
79 int (*slot_reset)(struct pci_dev *dev);
80 void (*resume)(struct pci_dev *dev);
81};
82
83The possible channel states are:
84enum pci_channel_state {
85 pci_channel_io_normal, /* I/O channel is in normal state */
86 pci_channel_io_frozen, /* I/O to channel is blocked */
87 pci_channel_io_perm_failure, /* PCI card is dead */
88};
89
90Possible return values are:
91enum pci_ers_result {
92 PCI_ERS_RESULT_NONE, /* no result/none/not supported in device driver */
93 PCI_ERS_RESULT_CAN_RECOVER, /* Device driver can recover without slot reset */
94 PCI_ERS_RESULT_NEED_RESET, /* Device driver wants slot to be reset. */
95 PCI_ERS_RESULT_DISCONNECT, /* Device has completely failed, is unrecoverable */
96 PCI_ERS_RESULT_RECOVERED, /* Device driver is fully recovered and operational */
97};
98
99A driver does not have to implement all of these callbacks; however,
100if it implements any, it must implement error_detected(). If a callback
101is not implemented, the corresponding feature is considered unsupported.
102For example, if mmio_enabled() and resume() aren't there, then it
103is assumed that the driver is not doing any direct recovery and requires
104a reset. If link_reset() is not implemented, the card is assumed
105not to care about link resets. Typically a driver will want to know
106about a slot_reset().
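
As a concrete illustration, a driver might wire these callbacks up as
in the following sketch. The callback names and bodies are
hypothetical; pci_ers_result_t is the kernel typedef used for the
result codes listed above:

static pci_ers_result_t my_error_detected(struct pci_dev *pdev,
					  enum pci_channel_state state)
{
	if (state == pci_channel_io_perm_failure)
		return PCI_ERS_RESULT_DISCONNECT;

	/* stop issuing new I/O; the device may already be isolated */
	return PCI_ERS_RESULT_NEED_RESET;
}

static pci_ers_result_t my_slot_reset(struct pci_dev *pdev)
{
	/* re-initialize the hardware after #RST, e.g. reload firmware */
	return PCI_ERS_RESULT_RECOVERED;
}

static void my_resume(struct pci_dev *pdev)
{
	/* restart normal I/O processing */
}

static struct pci_error_handlers my_err_handler = {
	.error_detected = my_error_detected,
	.slot_reset	= my_slot_reset,
	.resume		= my_resume,
};

static struct pci_driver my_pci_driver = {
	/* ... .name, .id_table, .probe, .remove ... */
	.err_handler	= &my_err_handler,
};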
107
108The actual steps taken by a platform to recover from a PCI error
109event will be platform-dependent, but will follow the general
110sequence described below.
111
112STEP 0: Error Event
113-------------------
114A PCI bus error is detected by the PCI hardware. On powerpc, the slot
115is isolated, in that all I/O is blocked: all reads return 0xffffffff,
116all writes are ignored.
117
118
119STEP 1: Notification
120--------------------
121Platform calls the error_detected() callback on every instance of
122every driver affected by the error.
123
124At this point, the device might not be accessible anymore, depending on
125the platform (the slot will be isolated on powerpc). The driver may
126already have "noticed" the error because of a failing I/O, but this
127is the proper "synchronization point", that is, it gives the driver
128a chance to clean up, waiting for pending work (timers, etc.)
129to complete; it can take semaphores, schedule, etc... everything but
130touch the device. Within this function and after it returns, the driver
131shouldn't do any new IOs. Called in task context. This is sort of a
132"quiesce" point. See note about interrupts at the end of this doc.
133
134All drivers participating in this system must implement this call.
135The driver must return one of the following result codes:
136 - PCI_ERS_RESULT_CAN_RECOVER:
137 Driver returns this if it thinks it might be able to recover
138 the HW by just banging IOs or if it wants to be given
139 a chance to extract some diagnostic information (see
140	mmio_enabled(), below).
141 - PCI_ERS_RESULT_NEED_RESET:
142 Driver returns this if it can't recover without a hard
143 slot reset.
144 - PCI_ERS_RESULT_DISCONNECT:
145 Driver returns this if it doesn't want to recover at all.
146
147The next step taken will depend on the result codes returned by the
148drivers.
149
150If all drivers on the segment/slot return PCI_ERS_RESULT_CAN_RECOVER,
151then the platform should re-enable IOs on the slot (or do nothing in
152particular, if the platform doesn't isolate slots), and recovery
153proceeds to STEP 2 (MMIO Enable).
154
155If any driver requested a slot reset (by returning PCI_ERS_RESULT_NEED_RESET),
156then recovery proceeds to STEP 4 (Slot Reset).
157
158If the platform is unable to recover the slot, the next step
159is STEP 6 (Permanent Failure).
160
161>>> The current powerpc implementation assumes that a device driver will
162>>> *not* schedule or semaphore in this routine; the current powerpc
163>>> implementation uses one kernel thread to notify all devices;
164>>> thus, if one device sleeps/schedules, all devices are affected.
165>>> Doing better requires complex multi-threaded logic in the error
166>>> recovery implementation (e.g. waiting for all notification threads
167>>> to "join" before proceeding with recovery.) This seems excessively
168>>> complex and not worth implementing.
169
170>>> The current powerpc implementation doesn't much care if the device
171>>> attempts I/O at this point, or not. I/O's will fail, returning
172>>> a value of 0xff on read, and writes will be dropped. If the device
173>>> driver attempts more than 10K I/O's to a frozen adapter, it will
174>>> assume that the device driver has gone into an infinite loop, and
175>>> it will panic the kernel. There doesn't seem to be any other
176>>> way of stopping a device driver that insists on spinning on I/O.
177
178STEP 2: MMIO Enabled
179-------------------
180The platform re-enables MMIO to the device (but typically not the
181DMA), and then calls the mmio_enabled() callback on all affected
182device drivers.
183
184This is the "early recovery" call. IOs are allowed again, but DMA is
185not (hrm... to be discussed, I prefer not), with some restrictions. This
186is NOT a callback for the driver to start operations again, only to
187peek/poke at the device, extract diagnostic information, if any, and
188eventually do things like trigger a device local reset or some such,
189but not restart operations. This callback is made if all drivers on
190a segment agree that they can try to recover and if no automatic link reset
191was performed by the HW. If the platform can't just re-enable IOs without
192a slot reset or a link reset, it won't call this callback, and instead
193will have gone directly to STEP 3 (Link Reset) or STEP 4 (Slot Reset).
194
195>>> The following is proposed; no platform implements this yet:
196>>> Proposal: All I/O's should be done _synchronously_ from within
197>>> this callback, errors triggered by them will be returned via
198>>> the normal pci_check_whatever() API, no new error_detected()
199>>> callback will be issued due to an error happening here. However,
200>>> such an error might cause IOs to be re-blocked for the whole
201>>> segment, and thus invalidate the recovery that other devices
202>>> on the same segment might have done, forcing the whole segment
203>>> into one of the next states, that is, link reset or slot reset.
204
205The driver should return one of the following result codes:
206 - PCI_ERS_RESULT_RECOVERED
207 Driver returns this if it thinks the device is fully
208 functional and thinks it is ready to start
209 normal driver operations again. There is no
210 guarantee that the driver will actually be
211 allowed to proceed, as another driver on the
212 same segment might have failed and thus triggered a
213 slot reset on platforms that support it.
214
215 - PCI_ERS_RESULT_NEED_RESET
216 Driver returns this if it thinks the device is not
217	recoverable in its current state and it needs a slot
218 reset to proceed.
219
220 - PCI_ERS_RESULT_DISCONNECT
221	Same as above. Total failure, no recovery even after
222	reset; the driver is dead. (To be defined more precisely.)
223
224The next step taken depends on the results returned by the drivers.
225If all drivers returned PCI_ERS_RESULT_RECOVERED, then the platform
226proceeds to either STEP 3 (Link Reset) or to STEP 5 (Resume Operations).
227
228If any driver returned PCI_ERS_RESULT_NEED_RESET, then the platform
229proceeds to STEP 4 (Slot Reset).
230
231>>> The current powerpc implementation does not implement this callback.
232
233
234STEP 3: Link Reset
235------------------
236The platform resets the link, and then calls the link_reset() callback
237on all affected device drivers. This is a PCI-Express specific state
238and is done whenever a non-fatal error has been detected that can be
239"solved" by resetting the link. This call informs the driver of the
240reset and the driver should check to see if the device appears to be
241in working condition.
242
243The driver is not supposed to restart normal driver I/O operations
244at this point. It should limit itself to "probing" the device to
245check its recoverability status. If all is right, then the platform
246will call resume() once all drivers have ack'd link_reset().
247
248 Result codes:
249	(identical to STEP 2 (MMIO Enabled))
250
251The platform then proceeds to either STEP 4 (Slot Reset) or STEP 5
252(Resume Operations).
253
254>>> The current powerpc implementation does not implement this callback.
255
256
257STEP 4: Slot Reset
258------------------
259The platform performs a soft or hard reset of the device, and then
260calls the slot_reset() callback.
261
262A soft reset consists of asserting the adapter #RST line and then
263restoring the PCI BARs and PCI configuration header to a state
264that is equivalent to what it would be after a fresh system
265power-on followed by power-on BIOS/system firmware initialization.
266If the platform supports PCI hotplug, then the reset might be
267performed by toggling the slot electrical power off/on.
268
269It is important for the platform to restore the PCI config space
270to the "fresh poweron" state, rather than the "last state". After
271a slot reset, the device driver will almost always use its standard
272device initialization routines, and an unusual config space setup
273may result in hung devices, kernel panics, or silent data corruption.
274
275This call gives drivers the chance to re-initialize the hardware
276(re-download firmware, etc.). At this point, the driver may assume
277that the card is in a fresh state and is fully functional. In
278particular, interrupt generation should work normally.
279
280Drivers should not yet restart normal I/O processing operations
281at this point. If all device drivers report success on this
282callback, the platform will call resume() to complete the sequence,
283and let the driver restart normal I/O processing.
284
285A driver can still return a critical failure for this function if
286it can't get the device operational after reset. If the platform
287previously tried a soft reset, it might now try a hard reset (power
288cycle) and then call slot_reset() again. If the device still can't
289be recovered, there is nothing more that can be done; the platform
290will typically report a "permanent failure" in such a case. The
291device will be considered "dead" in this case.
292
293Drivers for multi-function cards will need to coordinate among
294themselves as to which driver instance will perform any "one-shot"
295or global device initialization. For example, the Symbios sym53c8xx_2
296driver performs device init only from PCI function 0:
297
298+ if (PCI_FUNC(pdev->devfn) == 0)
299+ sym_reset_scsi_bus(np, 0);
300
301 Result codes:
302 - PCI_ERS_RESULT_DISCONNECT
303 Same as above.
304
305Platform proceeds either to STEP 5 (Resume Operations) or STEP 6 (Permanent
306Failure).
307
308>>> The current powerpc implementation does not currently try a
309>>> power-cycle reset if the driver returned PCI_ERS_RESULT_DISCONNECT.
310>>> However, it probably should.
311
312
313STEP 5: Resume Operations
314-------------------------
315The platform will call the resume() callback on all affected device
316drivers if all drivers on the segment have returned
317PCI_ERS_RESULT_RECOVERED from one of the 3 previous callbacks.
318The goal of this callback is to tell the driver to restart activity,
319that everything is back and running. This callback does not return
320a result code.
321
322At this point, if a new error happens, the platform will restart
323a new error recovery sequence.
324
325STEP 6: Permanent Failure
326-------------------------
327A "permanent failure" has occurred, and the platform cannot recover
328the device. The platform will call error_detected() with a
329pci_channel_state value of pci_channel_io_perm_failure.
330
331The device driver should, at this point, assume the worst. It should
332cancel all pending I/O, refuse all new I/O, returning -EIO to
333higher layers. The device driver should then clean up all of its
334memory and remove itself from kernel operations, much as it would
335during system shutdown.
336
337The platform will typically notify the system operator of the
338permanent failure in some way. If the device is hotplug-capable,
339the operator will probably want to remove and replace the device.
340Note, however, not all failures are truly "permanent". Some are
341caused by over-heating, some by a poorly seated card. Many
342PCI error events are caused by software bugs, e.g. DMA's to
343wild addresses or bogus split transactions due to programming
344errors. See the discussion in powerpc/eeh-pci-error-recovery.txt
345for additional detail on real-life experience of the causes of
346software errors.
347
348
349Conclusion; General Remarks
350---------------------------
351The way those callbacks are called is platform policy. A platform with
352no slot reset capability may want to just "ignore" drivers that can't
353recover (disconnect them) and try to let other cards on the same segment
354recover. Keep in mind that in most real life cases, though, there will
355be only one driver per segment.
356
357Now, a note about interrupts. If you get an interrupt and your
358device is dead or has been isolated, there is a problem :)
359The current policy is to turn this into a platform policy.
360That is, the recovery API only requires that:
361
362 - There is no guarantee that interrupt delivery can proceed from any
363device on the segment starting from the error detection and until the
364resume callback is sent, at which point interrupts are expected to be
365fully operational.
366
367 - There is no guarantee that interrupt delivery is stopped, that is,
368a driver that gets an interrupt after detecting an error, or that detects
369an error within the interrupt handler such that it prevents proper
370ack'ing of the interrupt (and thus removal of the source) should just
371return IRQ_NONE. It's up to the platform to deal with that
372condition, typically by masking the IRQ source during the duration of
373the error handling. It is expected that the platform "knows" which
374interrupts are routed to error-management capable slots and can deal
375with temporarily disabling that IRQ number during error processing (this
376isn't terribly complex). That means some IRQ latency for other devices
377sharing the interrupt, but there is simply no other way. High end
378platforms aren't supposed to share interrupts between many devices
379anyway :)
380
381>>> Implementation details for the powerpc platform are discussed in
382>>> the file Documentation/powerpc/eeh-pci-error-recovery.txt
383
384>>> As of this writing, there are six device drivers with patches
385>>> implementing error recovery. Not all of these patches are in
386>>> mainline yet. These may be used as "examples":
387>>>
388>>> drivers/scsi/ipr.c
389>>> drivers/scsi/sym53c8xx_2
390>>> drivers/net/e100.c
391>>> drivers/net/e1000
392>>> drivers/net/ixgb
393>>> drivers/net/s2io.c
394
395The End
396-------
diff --git a/Documentation/PCI/pci.txt b/Documentation/PCI/pci.txt
new file mode 100644
index 000000000000..8d4dc6250c58
--- /dev/null
+++ b/Documentation/PCI/pci.txt
@@ -0,0 +1,646 @@
1
2 How To Write Linux PCI Drivers
3
4 by Martin Mares <mj@ucw.cz> on 07-Feb-2000
5 updated by Grant Grundler <grundler@parisc-linux.org> on 23-Dec-2006
6
7~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
8The world of PCI is vast and full of (mostly unpleasant) surprises.
9Since each CPU architecture implements different chip-sets and PCI devices
10have different requirements (erm, "features"), the result is that PCI support
11in the Linux kernel is not as trivial as one would wish. This short paper
12tries to introduce all potential driver authors to Linux APIs for
13PCI device drivers.
14
15A more complete resource is the third edition of "Linux Device Drivers"
16by Jonathan Corbet, Alessandro Rubini, and Greg Kroah-Hartman.
17LDD3 is available for free (under Creative Commons License) from:
18
19 http://lwn.net/Kernel/LDD3/
20
21However, keep in mind that all documents are subject to "bit rot".
22Refer to the source code if things are not working as described here.
23
24Please send questions/comments/patches about Linux PCI API to the
25"Linux PCI" <linux-pci@atrey.karlin.mff.cuni.cz> mailing list.
26
27
28
290. Structure of PCI drivers
30~~~~~~~~~~~~~~~~~~~~~~~~~~~
31PCI drivers "discover" PCI devices in a system via pci_register_driver().
32Actually, it's the other way around. When the PCI generic code discovers
33a new device, the driver with a matching "description" will be notified.
34Details on this below.
35
36pci_register_driver() leaves most of the probing for devices to
37the PCI layer and supports online insertion/removal of devices [thus
38supporting hot-pluggable PCI, CardBus, and Express-Card in a single driver].
39The pci_register_driver() call requires passing in a table of function
40pointers and thus dictates the high level structure of a driver.
41
42Once the driver knows about a PCI device and takes ownership, the
43driver generally needs to perform the following initialization:
44
45 Enable the device
46 Request MMIO/IOP resources
47 Set the DMA mask size (for both coherent and streaming DMA)
48 Allocate and initialize shared control data (pci_allocate_coherent())
49 Access device configuration space (if needed)
50 Register IRQ handler (request_irq())
51 Initialize non-PCI (i.e. LAN/SCSI/etc parts of the chip)
52 Enable DMA/processing engines
53
54When done using the device, and perhaps when the module needs to be unloaded,
55the driver needs to take the following steps:
56 Disable the device from generating IRQs
57 Release the IRQ (free_irq())
58 Stop all DMA activity
59 Release DMA buffers (both streaming and coherent)
60 Unregister from other subsystems (e.g. scsi or netdev)
61 Release MMIO/IOP resources
62 Disable the device
63
64Most of these topics are covered in the following sections.
65For the rest look at LDD3 or <linux/pci.h> .
66
67If the PCI subsystem is not configured (CONFIG_PCI is not set), most of
68the PCI functions described below are defined as inline functions either
69completely empty or just returning an appropriate error code to avoid
70lots of ifdefs in the drivers.
71
72
73
741. pci_register_driver() call
75~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
76
77PCI device drivers call pci_register_driver() during their
78initialization with a pointer to a structure describing the driver
79(struct pci_driver):
80
81 field name Description
82 ---------- ------------------------------------------------------
83	id_table	Pointer to table of device IDs the driver is
84 interested in. Most drivers should export this
85 table using MODULE_DEVICE_TABLE(pci,...).
86
87 probe This probing function gets called (during execution
88 of pci_register_driver() for already existing
89 devices or later if a new device gets inserted) for
90 all PCI devices which match the ID table and are not
91			"owned" by other drivers yet. This function gets
92 passed a "struct pci_dev *" for each device whose
93 entry in the ID table matches the device. The probe
94 function returns zero when the driver chooses to
95 take "ownership" of the device or an error code
96 (negative number) otherwise.
97 The probe function always gets called from process
98 context, so it can sleep.
99
100 remove The remove() function gets called whenever a device
101 being handled by this driver is removed (either during
102 deregistration of the driver or when it's manually
103 pulled out of a hot-pluggable slot).
104 The remove function always gets called from process
105 context, so it can sleep.
106
107 suspend Put device into low power state.
108 suspend_late Put device into low power state.
109
110 resume_early Wake device from low power state.
111 resume Wake device from low power state.
112
113 (Please see Documentation/power/pci.txt for descriptions
114 of PCI Power Management and the related functions.)
115
116 shutdown Hook into reboot_notifier_list (kernel/sys.c).
117 Intended to stop any idling DMA operations.
118 Useful for enabling wake-on-lan (NIC) or changing
119 the power state of a device before reboot.
120 e.g. drivers/net/e100.c.
121
122 err_handler See Documentation/PCI/pci-error-recovery.txt
123
124
125The ID table is an array of struct pci_device_id entries ending with an
126all-zero entry; use of the macro DEFINE_PCI_DEVICE_TABLE is the preferred
127method of declaring the table. Each entry consists of:
128
129 vendor,device Vendor and device ID to match (or PCI_ANY_ID)
130
131 subvendor, Subsystem vendor and device ID to match (or PCI_ANY_ID)
132 subdevice,
133
134 class Device class, subclass, and "interface" to match.
135 See Appendix D of the PCI Local Bus Spec or
136 include/linux/pci_ids.h for a full list of classes.
137 Most drivers do not need to specify class/class_mask
138 as vendor/device is normally sufficient.
139
140 class_mask limit which sub-fields of the class field are compared.
141 See drivers/scsi/sym53c8xx_2/ for example of usage.
142
143 driver_data Data private to the driver.
144 Most drivers don't need to use driver_data field.
145 Best practice is to use driver_data as an index
146 into a static list of equivalent device types,
147 instead of using it as a pointer.
148
149
150Most drivers only need PCI_DEVICE() or PCI_DEVICE_CLASS() to set up
151a pci_device_id table.
152
153New PCI IDs may be added to a device driver pci_ids table at runtime
154as shown below:
155
156echo "vendor device subvendor subdevice class class_mask driver_data" > \
157/sys/bus/pci/drivers/{driver}/new_id
158
159All fields are passed in as hexadecimal values (no leading 0x).
160The vendor and device fields are mandatory, the others are optional. Users
161need only pass as many optional fields as necessary:
162 o subvendor and subdevice fields default to PCI_ANY_ID (FFFFFFFF)
163 o class and classmask fields default to 0
164 o driver_data defaults to 0UL.
165
166Once added, the driver probe routine will be invoked for any unclaimed
167PCI devices listed in its (newly updated) pci_ids list.
168
169When the driver exits, it just calls pci_unregister_driver() and the PCI layer
170automatically calls the remove hook for all devices handled by the driver.
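
To make the structure concrete, below is a minimal registration
skeleton that follows the conventions above. The driver name, the
vendor/device IDs, and the probe/remove bodies are placeholders, not
a real device:

static DEFINE_PCI_DEVICE_TABLE(my_pci_tbl) = {
	{ PCI_DEVICE(0x1234, 0x5678) },		/* hypothetical IDs */
	{ 0, }
};
MODULE_DEVICE_TABLE(pci, my_pci_tbl);

static int __devinit my_probe(struct pci_dev *pdev,
			      const struct pci_device_id *id)
{
	/* enable the device, request resources, etc. (see section 3) */
	return 0;
}

static void __devexit my_remove(struct pci_dev *pdev)
{
	/* undo everything done in my_probe() (see section 4) */
}

static struct pci_driver my_driver = {
	.name		= "my_pci_driver",
	.id_table	= my_pci_tbl,
	.probe		= my_probe,
	.remove		= __devexit_p(my_remove),
};

static int __init my_init(void)
{
	return pci_register_driver(&my_driver);
}

static void __exit my_exit(void)
{
	pci_unregister_driver(&my_driver);
}

module_init(my_init);
module_exit(my_exit);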
171
172
1731.1 "Attributes" for driver functions/data
174
175Please mark the initialization and cleanup functions where appropriate
176(the corresponding macros are defined in <linux/init.h>):
177
178 __init Initialization code. Thrown away after the driver
179 initializes.
180 __exit Exit code. Ignored for non-modular drivers.
181
182
183 __devinit Device initialization code.
184 Identical to __init if the kernel is not compiled
185 with CONFIG_HOTPLUG, normal function otherwise.
186 __devexit The same for __exit.
187
188Tips on when/where to use the above attributes:
189 o The module_init()/module_exit() functions (and all
190 initialization functions called _only_ from these)
191 should be marked __init/__exit.
192
193 o Do not mark the struct pci_driver.
194
195 o The ID table array should be marked __devinitconst; this is done
196 automatically if the table is declared with DEFINE_PCI_DEVICE_TABLE().
197
198 o The probe() and remove() functions should be marked __devinit
199 and __devexit respectively. All initialization functions
200	  exclusively called by the probe() routine can be marked __devinit.
201 Ditto for remove() and __devexit.
202
203	o If mydriver_remove() is marked with __devexit, then all address
204 references to mydriver_remove must use __devexit_p(mydriver_remove)
205 (in the struct pci_driver declaration for example).
206 __devexit_p() will generate the function name _or_ NULL if the
207 function will be discarded. For an example, see drivers/net/tg3.c.
208
209 o Do NOT mark a function if you are not sure which mark to use.
210 Better to not mark the function than mark the function wrong.
211
212
213
2142. How to find PCI devices manually
215~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
216
217PCI drivers should have a really good reason for not using the
218pci_register_driver() interface to search for PCI devices.
219The main reason a PCI device might be controlled by multiple drivers
220is that one PCI device implements several different HW services,
221e.g. a combined serial/parallel port/floppy controller.
222
223A manual search may be performed using the following constructs:
224
225Searching by vendor and device ID:
226
227 struct pci_dev *dev = NULL;
228	while ((dev = pci_get_device(VENDOR_ID, DEVICE_ID, dev)) != NULL)
229 configure_device(dev);
230
231Searching by class ID (iterate in a similar way):
232
233 pci_get_class(CLASS_ID, dev)
234
235Searching by both vendor/device and subsystem vendor/device ID:
236
237 pci_get_subsys(VENDOR_ID,DEVICE_ID, SUBSYS_VENDOR_ID, SUBSYS_DEVICE_ID, dev).
238
239You can use the constant PCI_ANY_ID as a wildcard replacement for
240VENDOR_ID or DEVICE_ID. This allows searching for any device from a
241specific vendor, for example.
242
243These functions are hotplug-safe. They increment the reference count on
244the pci_dev that they return. You must eventually (possibly at module unload)
245decrement the reference count on these devices by calling pci_dev_put().
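
For example, a driver that looks up a single device (using the same
placeholder IDs as above) might hold and release the reference like
this:

	struct pci_dev *dev;

	dev = pci_get_device(VENDOR_ID, DEVICE_ID, NULL);
	if (dev) {
		/* ... use the device ... */
		pci_dev_put(dev);	/* drop the reference when done */
	}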
246
247
248
2493. Device Initialization Steps
250~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
251
252As noted in the introduction, most PCI drivers need the following steps
253for device initialization:
254
255 Enable the device
256 Request MMIO/IOP resources
257 Set the DMA mask size (for both coherent and streaming DMA)
258 Allocate and initialize shared control data (pci_allocate_coherent())
259 Access device configuration space (if needed)
260 Register IRQ handler (request_irq())
261 Initialize non-PCI (i.e. LAN/SCSI/etc parts of the chip)
262 Enable DMA/processing engines.
263
264The driver can access PCI config space registers at any time.
265(Well, almost. When running BIST, config space can go away...but
266that will just result in a PCI Bus Master Abort and config reads
267will return garbage).
268
269
2703.1 Enable the PCI device
271~~~~~~~~~~~~~~~~~~~~~~~~~
272Before touching any device registers, the driver needs to enable
273the PCI device by calling pci_enable_device(). This will:
274	o wake up the device if it was in a suspended state,
275 o allocate I/O and memory regions of the device (if BIOS did not),
276 o allocate an IRQ (if BIOS did not).
277
278NOTE: pci_enable_device() can fail! Check the return value.
279
280[ OS BUG: we don't check resource allocations before enabling those
281 resources. The sequence would make more sense if we called
282 pci_request_resources() before calling pci_enable_device().
283  Currently, the device drivers can't detect the bug when two
284 devices have been allocated the same range. This is not a common
285 problem and unlikely to get fixed soon.
286
287 This has been discussed before but not changed as of 2.6.19:
288 http://lkml.org/lkml/2006/3/2/194
289]
290
291pci_set_master() will enable DMA by setting the bus master bit
292in the PCI_COMMAND register. It also fixes the latency timer value if
293it's set to something bogus by the BIOS.
294
295If the PCI device can use the PCI Memory-Write-Invalidate transaction,
296call pci_set_mwi(). This enables the PCI_COMMAND bit for Mem-Wr-Inval
297and also ensures that the cache line size register is set correctly.
298Check the return value of pci_set_mwi() as not all architectures
299or chip-sets may support Memory-Write-Invalidate. Alternatively,
300if Mem-Wr-Inval would be nice to have but is not required, call
301pci_try_set_mwi() to have the system do its best effort at enabling
302Mem-Wr-Inval.
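
Putting this together, the beginning of a probe() routine often looks
roughly like the following sketch (a continuation of the my_probe()
skeleton from section 1; error handling is abbreviated):

	int err;

	err = pci_enable_device(pdev);
	if (err)
		return err;		/* enabling can fail -- check it */

	pci_set_master(pdev);		/* enable bus mastering for DMA */
	pci_try_set_mwi(pdev);		/* best effort; OK if unsupported */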
303
304
3053.2 Request MMIO/IOP resources
306~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
307Memory (MMIO) and I/O port addresses should NOT be read directly
308from the PCI device config space. Use the values in the pci_dev structure
309as the PCI "bus address" might have been remapped to a "host physical"
310address by the arch/chip-set specific kernel support.
311
312See Documentation/IO-mapping.txt for how to access device registers
313or device memory.
314
315The device driver needs to call pci_request_region() to verify
316no other device is already using the same address resource.
317Conversely, drivers should call pci_release_region() AFTER
318calling pci_disable_device().
319The idea is to prevent two devices colliding on the same address range.
320
321[ See OS BUG comment above. Currently (2.6.19), the driver can only
322 determine MMIO and IO Port resource availability _after_ calling
323 pci_enable_device(). ]
324
325Generic flavors of pci_request_region() are request_mem_region()
326(for MMIO ranges) and request_region() (for IO Port ranges).
327Use these for address resources that are not described by "normal" PCI
328BARs.
329
330Also see pci_request_selected_regions() below.
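
Continuing the probe() sketch, requesting and mapping BAR 0 might look
like the following. The BAR number, name string, and error labels are
illustrative, and pci_request_regions() is used here to claim all BARs
at once:

	void __iomem *regs;

	err = pci_request_regions(pdev, "my_pci_driver");
	if (err)
		goto err_disable;	/* someone else owns the range */

	regs = pci_iomap(pdev, 0, 0);	/* map BAR 0, full length */
	if (!regs) {
		err = -ENOMEM;
		goto err_release;
	}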
331
332
3333.3 Set the DMA mask size
334~~~~~~~~~~~~~~~~~~~~~~~~~
335[ If anything below doesn't make sense, please refer to
336 Documentation/DMA-API.txt. This section is just a reminder that
337 drivers need to indicate DMA capabilities of the device and is not
338 an authoritative source for DMA interfaces. ]
339
340While all drivers should explicitly indicate the DMA capability
341(e.g. 32 or 64 bit) of the PCI bus master, devices with more than
34232-bit bus master capability for streaming data need the driver
343to "register" this capability by calling pci_set_dma_mask() with
344appropriate parameters. In general this allows more efficient DMA
345on systems where System RAM exists above 4G _physical_ address.
346
347Drivers for all PCI-X and PCIe compliant devices must call
348pci_set_dma_mask() as they are 64-bit DMA devices.
349
350Similarly, drivers must also "register" this capability if the device
351can directly address "consistent memory" in System RAM above 4G physical
352address by calling pci_set_consistent_dma_mask().
353Again, this includes drivers for all PCI-X and PCIe compliant devices.
354Many 64-bit "PCI" devices (before PCI-X) and some PCI-X devices are
35564-bit DMA capable for payload ("streaming") data but not control
356("consistent") data.
357
358
3593.4 Setup shared control data
360~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
361Once the DMA masks are set, the driver can allocate "consistent" (a.k.a. shared)
362memory. See Documentation/DMA-API.txt for a full description of
363the DMA APIs. This section is just a reminder that it needs to be done
364before enabling DMA on the device.
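
A hedged sketch of sections 3.3 and 3.4 together, assuming a device
that can do 64-bit streaming DMA but only 32-bit consistent DMA (the
mask choices depend entirely on the hardware; RING_BYTES and the error
label are illustrative):

	dma_addr_t ring_dma;
	void *ring;

	/* prefer 64-bit streaming DMA, fall back to 32-bit */
	if (pci_set_dma_mask(pdev, DMA_64BIT_MASK) &&
	    pci_set_dma_mask(pdev, DMA_32BIT_MASK))
		goto err_unmap;		/* no usable DMA addressing */

	if (pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK))
		goto err_unmap;

	/* allocate the shared ("consistent") control area */
	ring = pci_alloc_consistent(pdev, RING_BYTES, &ring_dma);
	if (!ring)
		goto err_unmap;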
365
366
3673.5 Initialize device registers
368~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
369Some drivers will need specific "capability" fields programmed
370or other "vendor specific" registers initialized or reset,
371e.g. clearing pending interrupts.
372
373
3743.6 Register IRQ handler
375~~~~~~~~~~~~~~~~~~~~~~~~
376While calling request_irq() is the last step described here,
377this is often just another intermediate step to initialize a device.
378This step can often be deferred until the device is opened for use.
379
380All interrupt handlers for IRQ lines should be registered with IRQF_SHARED
381and use the dev_id argument to map IRQs to devices (remember that all
382PCI IRQ lines can be shared).
383
384request_irq() will associate an interrupt handler and device handle
385with an interrupt number. Historically interrupt numbers represent
386IRQ lines which run from the PCI device to the Interrupt controller.
387With MSI and MSI-X (more below) the interrupt number is a CPU "vector".
388
389request_irq() also enables the interrupt. Make sure the device is
390quiesced and does not have any interrupts pending before registering
391the interrupt handler.
392
393MSI and MSI-X are PCI capabilities. Both are "Message Signaled Interrupts"
394which deliver interrupts to the CPU via a DMA write to a Local APIC.
395The fundamental difference between MSI and MSI-X is how multiple
396"vectors" get allocated. MSI requires contiguous blocks of vectors
397while MSI-X can allocate several individual ones.
398
399MSI capability can be enabled by calling pci_enable_msi() or
400pci_enable_msix() before calling request_irq(). This causes
401the PCI support to program CPU vector data into the PCI device
402capability registers.
403
404If your PCI device supports both, try to enable MSI-X first.
405Only one can be enabled at a time. Many architectures, chip-sets,
406or BIOSes do NOT support MSI or MSI-X and the call to pci_enable_msi/msix
407will fail. This is important to note since many drivers have
408two (or more) interrupt handlers: one for MSI/MSI-X and another for IRQs.
409They choose which handler to register with request_irq() based on the
410return value from pci_enable_msi/msix().
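
For the MSI case, that decision often looks like the following sketch
(handler names and the private data pointer are hypothetical):

	if (pci_enable_msi(pdev) == 0) {
		err = request_irq(pdev->irq, my_msi_handler, 0,
				  "my_pci_driver", priv);
	} else {
		/* fall back to (possibly shared) INTx interrupts */
		err = request_irq(pdev->irq, my_intx_handler, IRQF_SHARED,
				  "my_pci_driver", priv);
	}
	if (err)
		goto err_free_ring;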
411
412There are (at least) two really good reasons for using MSI:
4131) MSI is an exclusive interrupt vector by definition.
414 This means the interrupt handler doesn't have to verify
415 its device caused the interrupt.
416
4172) MSI avoids DMA/IRQ race conditions. DMA to host memory is guaranteed
418 to be visible to the host CPU(s) when the MSI is delivered. This
419 is important for both data coherency and avoiding stale control data.
420 This guarantee allows the driver to omit MMIO reads to flush
421 the DMA stream.
422
423See drivers/infiniband/hw/mthca/ or drivers/net/tg3.c for examples
424of MSI/MSI-X usage.
425
426
427
4284. PCI device shutdown
429~~~~~~~~~~~~~~~~~~~~~~~
430
431When a PCI device driver is being unloaded, most of the following
432steps need to be performed:
433
434 Disable the device from generating IRQs
435 Release the IRQ (free_irq())
436 Stop all DMA activity
437 Release DMA buffers (both streaming and consistent)
438 Unregister from other subsystems (e.g. scsi or netdev)
439 Disable device from responding to MMIO/IO Port addresses
440 Release MMIO/IO Port resource(s)
441
442
4434.1 Stop IRQs on the device
444~~~~~~~~~~~~~~~~~~~~~~~~~~~
445How to do this is chip/device specific. If it's not done, it opens
446the possibility of a "screaming interrupt" if (and only if)
447the IRQ is shared with another device.
448
449When the shared IRQ handler is "unhooked", the remaining devices
450using the same IRQ line will still need the IRQ enabled. Thus if the
451"unhooked" device asserts the IRQ line, the system will respond assuming
452it was one of the remaining devices that asserted the IRQ line. Since none
453of the other devices will handle the IRQ, the system will "hang" until
454it decides the IRQ isn't going to get handled and masks the IRQ (100,000
455iterations later). Once the shared IRQ is masked, the remaining devices
456will stop functioning properly. Not a nice situation.
457
458This is another reason to use MSI or MSI-X if it's available.
459MSI and MSI-X are defined to be exclusive interrupts and thus
460are not susceptible to the "screaming interrupt" problem.
461
462
4634.2 Release the IRQ
464~~~~~~~~~~~~~~~~~~~
465Once the device is quiesced (no more IRQs), one can call free_irq().
466This function will return control once any pending IRQs are handled,
467"unhook" the driver's IRQ handler from that IRQ, and finally release
468the IRQ if no one else is using it.
469
470
4714.3 Stop all DMA activity
472~~~~~~~~~~~~~~~~~~~~~~~~~
473It's extremely important to stop all DMA operations BEFORE attempting
474to deallocate DMA control data. Failure to do so can result in memory
475corruption, hangs, and on some chip-sets a hard crash.
476
477Stopping DMA after stopping the IRQs can avoid races where the
478IRQ handler might restart DMA engines.
479
480While this step sounds obvious and trivial, several "mature" drivers
481didn't get this step right in the past.
482
483
4844.4 Release DMA buffers
485~~~~~~~~~~~~~~~~~~~~~~~
486Once DMA is stopped, clean up streaming DMA first.
487I.e. unmap data buffers and return buffers to their "upstream"
488owners, if any.
489
490Then clean up "consistent" buffers which contain the control data.
491
492See Documentation/DMA-API.txt for details on unmapping interfaces.
493
494
4954.5 Unregister from other subsystems
496~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
497Most low level PCI device drivers support some other subsystem
498like USB, ALSA, SCSI, NetDev, Infiniband, etc. Make sure your
499driver isn't losing resources from that other subsystem.
500If this happens, typically the symptom is an Oops (panic) when
501the subsystem attempts to call into a driver that has been unloaded.
502
503
5044.6 Disable Device from responding to MMIO/IO Port addresses
505~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
506iounmap() MMIO or IO Port resources and then call pci_disable_device().
507This is the symmetric opposite of pci_enable_device().
508Do not access device registers after calling pci_disable_device().
509
510
5114.7 Release MMIO/IO Port Resource(s)
512~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
513Call pci_release_region() to mark the MMIO or IO Port range as available.
514Failure to do so usually results in the inability to reload the driver.
515
516
517
5185. How to access PCI config space
519~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
520
521You can use pci_(read|write)_config_(byte|word|dword) to access the config
522space of a device represented by struct pci_dev *. All these functions return 0
523when successful or an error code (PCIBIOS_...) which can be translated to a text
524string by pcibios_strerror. Most drivers expect that accesses to valid PCI
525devices don't fail.
526
527If you don't have a struct pci_dev available, you can call
528pci_bus_(read|write)_config_(byte|word|dword) to access a given device
529and function on that bus.
530
531If you access fields in the standard portion of the config header, please
532use symbolic names of locations and bits declared in <linux/pci.h>.
533
534If you need to access Extended PCI Capability registers, just call
535pci_find_capability() for the particular capability and it will find the
536corresponding register block for you.
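
As a small illustration (the register choice is arbitrary), reading
the Power Management control/status register through its capability
offset might look like:

	int pm;
	u16 pmcsr;

	pm = pci_find_capability(pdev, PCI_CAP_ID_PM);
	if (pm)
		pci_read_config_word(pdev, pm + PCI_PM_CTRL, &pmcsr);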
537
538
539
5406. Other interesting functions
541~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
542
543pci_find_slot() Find pci_dev corresponding to given bus and
544 slot numbers.
545pci_set_power_state() Set PCI Power Management state (0=D0 ... 3=D3)
546pci_find_capability() Find specified capability in device's capability
547 list.
548pci_resource_start() Returns bus start address for a given PCI region
549pci_resource_end() Returns bus end address for a given PCI region
550pci_resource_len() Returns the byte length of a PCI region
551pci_set_drvdata() Set private driver data pointer for a pci_dev
552pci_get_drvdata() Return private driver data pointer for a pci_dev
553pci_set_mwi() Enable Memory-Write-Invalidate transactions.
554pci_clear_mwi() Disable Memory-Write-Invalidate transactions.
555
556
557
5587. Miscellaneous hints
559~~~~~~~~~~~~~~~~~~~~~~
560
561When displaying PCI device names to the user (for example when a driver wants
562to tell the user what card it has found), please use pci_name(pci_dev).
563
564Always refer to the PCI devices by a pointer to the pci_dev structure.
565All PCI layer functions use this identification and it's the only
566reasonable one. Don't use bus/slot/function numbers except for very
567special purposes -- on systems with multiple primary buses their semantics
568can be pretty complex.
569
570Don't try to turn on Fast Back to Back writes in your driver. All devices
571on the bus need to be capable of doing it, so this is something which needs
572to be handled by platform and generic code, not individual drivers.
573
574
575
5768. Vendor and device identifications
577~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
578
579One is not required to add new device ids to include/linux/pci_ids.h.
580Please add PCI_VENDOR_ID_xxx for vendors and a hex constant for device ids.
581
582PCI_VENDOR_ID_xxx constants are re-used. The device ids are arbitrary
583hex numbers (vendor controlled) and normally used only in a single
584location, the pci_device_id table.
585
586Please DO submit new vendor/device ids to the pciids.sourceforge.net project.
587
588
589
5909. Obsolete functions
591~~~~~~~~~~~~~~~~~~~~~
592
593There are several functions which you might come across when trying to
594port an old driver to the new PCI interface. They are no longer present
595in the kernel as they aren't compatible with hotplug or PCI domains or
596having sane locking.
597
598pci_find_device() Superseded by pci_get_device()
599pci_find_subsys() Superseded by pci_get_subsys()
600pci_find_slot() Superseded by pci_get_slot()
601
602
603The alternative is the traditional PCI device driver that walks PCI
604device lists. This is still possible but discouraged.
605
606
607
60810. MMIO Space and "Write Posting"
609~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
610
611Converting a driver from using I/O Port space to using MMIO space
612often requires some additional changes. Specifically, "write posting"
613needs to be handled. Many drivers (e.g. tg3, acenic, sym53c8xx_2)
614already do this. I/O Port space guarantees write transactions reach the PCI
615device before the CPU can continue. Writes to MMIO space allow the CPU
616to continue before the transaction reaches the PCI device. HW weenies
617call this "Write Posting" because the write completion is "posted" to
618the CPU before the transaction has reached its destination.
619
620Thus, timing sensitive code should add readl() where the CPU is
621expected to wait before doing other work. The classic "bit banging"
622sequence works fine for I/O Port space:
623
624 for (i = 8; --i; val >>= 1) {
625 outb(val & 1, ioport_reg); /* write bit */
626 udelay(10);
627 }
628
629The same sequence for MMIO space should be:
630
631 for (i = 8; --i; val >>= 1) {
632 writeb(val & 1, mmio_reg); /* write bit */
633 readb(safe_mmio_reg); /* flush posted write */
634 udelay(10);
635 }
636
637It is important that "safe_mmio_reg" not have any side effects that
638interfere with the correct operation of the device.
639
640Another case to watch out for is when resetting a PCI device. Use PCI
641Configuration space reads to flush the writel(). This will gracefully
642handle the PCI master abort on all platforms if the PCI device is
643expected to not respond to a readl(). Most x86 platforms will allow
644MMIO reads to master abort (a.k.a. "Soft Fail") and return garbage
645(e.g. ~0). But many RISC platforms will crash (a.k.a. "Hard Fail").
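
A sketch of the idea, with a hypothetical reset register at offset 0; the
configuration space read flushes the posted write and is safe even if the
device cannot answer an MMIO read yet:

#include <linux/delay.h>
#include <linux/io.h>
#include <linux/pci.h>

static void example_reset(struct pci_dev *pdev, void __iomem *mmio)
{
	u16 vendor;

	writel(1, mmio + 0x00);		/* hypothetical reset register */
	/* flush the posted write without risking a crash on platforms
	 * where an MMIO read would master abort */
	pci_read_config_word(pdev, PCI_VENDOR_ID, &vendor);
	msleep(10);			/* give the device time to reset */
}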
646
diff --git a/Documentation/PCI/pcieaer-howto.txt b/Documentation/PCI/pcieaer-howto.txt
new file mode 100644
index 000000000000..16c251230c82
--- /dev/null
+++ b/Documentation/PCI/pcieaer-howto.txt
@@ -0,0 +1,253 @@
1 The PCI Express Advanced Error Reporting Driver Guide HOWTO
2 T. Long Nguyen <tom.l.nguyen@intel.com>
3 Yanmin Zhang <yanmin.zhang@intel.com>
4 07/29/2006
5
6
71. Overview
8
91.1 About this guide
10
11This guide describes the basics of the PCI Express Advanced Error
12Reporting (AER) driver and provides information on how to use it, as
13well as how to enable the drivers of endpoint devices to conform with
14the PCI Express AER driver.
15
161.2 Copyright © Intel Corporation 2006.
17
181.3 What is the PCI Express AER Driver?
19
20PCI Express error signaling can occur on the PCI Express link itself
21or on behalf of transactions initiated on the link. PCI Express
22defines two error reporting paradigms: the baseline capability and
23the Advanced Error Reporting capability. The baseline capability is
24required of all PCI Express components providing a minimum defined
25set of error reporting requirements. Advanced Error Reporting
26capability is implemented with a PCI Express advanced error reporting
27extended capability structure providing more robust error reporting.
28
29The PCI Express AER driver provides the infrastructure to support PCI
30Express Advanced Error Reporting capability. The PCI Express AER
31driver provides three basic functions:
32
33- Gathers comprehensive error information when errors occur.
34- Reports errors to the users.
35- Performs error recovery actions.
36
37The AER driver attaches only to Root Ports that support the PCI
38Express AER capability.
39
40
412. User Guide
42
432.1 Include the PCI Express AER Root Driver into the Linux Kernel
44
45The PCI Express AER Root driver is a Root Port service driver attached
46to the PCI Express Port Bus driver. If a user wants to use it, the driver
47has to be compiled in. The option CONFIG_PCIEAER enables this capability; it
48depends on CONFIG_PCIEPORTBUS, so please set CONFIG_PCIEPORTBUS=y and
49CONFIG_PCIEAER=y.
50
512.2 Load PCI Express AER Root Driver
52Some systems also have AER support in the BIOS. Enabling the AER
53Root driver while the BIOS handles AER may result in unpredictable
54behavior. To avoid this conflict, a successful load of the AER Root driver
55requires ACPI _OSC support in the BIOS to allow the AER Root driver to
56request native control of AER. See the PCI FW 3.0 Specification for
57details regarding _OSC usage. Currently, many firmware implementations
58don't provide _OSC support even though they use PCI Express. To support
59such firmware, the boolean module parameter "forceload" lets AER initialize
60even though the firmware has no _OSC support. To enable this
61workaround, please add aerdriver.forceload=y to the kernel boot parameter
62line when booting. Note that forceload=n by default.
63
642.3 AER error output
65When a PCI-E AER error is captured, an error message will be output to
66the console. If it's a correctable error, it is printed as a warning.
67Otherwise, it is printed as an error. So users can choose a log
68level to filter out correctable error messages.
69
70An example is shown below.
71+------ PCI-Express Device Error -----+
72Error Severity : Uncorrected (Fatal)
73PCIE Bus Error type : Transaction Layer
74Unsupported Request : First
75Requester ID : 0500
76VendorID=8086h, DeviceID=0329h, Bus=05h, Device=00h, Function=00h
77TLB Header:
7804000001 00200a03 05010000 00050100
79
80In the example, 'Requester ID' means the ID of the device that sent
81the error message to the Root Port. Please refer to the PCI Express
82specification for the other fields.
83
84
853. Developer Guide
86
87Enabling AER aware support requires a software driver to configure
88the AER capability structure within its device and to provide callbacks.
89
90To support AER better, developers first need to understand how AER
91works.
92
93PCI Express errors are classified into two types: correctable errors
94and uncorrectable errors. This classification is based on the impacts
95of those errors, which may result in degraded performance or function
96failure.
97
98Correctable errors have no impact on the functionality of the
99interface. The PCI Express protocol can recover without any software
100intervention or any loss of data. These errors are detected and
101corrected by hardware. Unlike correctable errors, uncorrectable
102errors impact functionality of the interface. Uncorrectable errors
103can cause a particular transaction or a particular PCI Express link
104to be unreliable. Depending on those error conditions, uncorrectable
105errors are further classified into non-fatal errors and fatal errors.
106Non-fatal errors cause the particular transaction to be unreliable,
107but the PCI Express link itself is fully functional. Fatal errors, on
108the other hand, cause the link to be unreliable.
109
110When AER is enabled, a PCI Express device will automatically send an
111error message to the PCIE root port above it when the device captures
112an error. The Root Port, upon receiving an error reporting message,
113internally processes and logs the error message in its PCI Express
114capability structure. Error information being logged includes storing
115the error reporting agent's requestor ID into the Error Source
116Identification Registers and setting the error bits of the Root Error
117Status Register accordingly. If AER error reporting is enabled in the
118Root Error Command Register, the Root Port generates an interrupt when
119an error is detected.
120
121Note that the errors as described above are related to the PCI Express
122hierarchy and links. These errors do not include any device specific
123errors because device specific errors will still get sent directly to
124the device driver.
125
1263.1 Configure the AER capability structure
127
128AER aware drivers of PCI Express components need to change the device
129control registers to enable AER. They may also change AER registers,
130including the mask and severity registers. The helper function
131pci_enable_pcie_error_reporting could be used to enable AER. See
132section 3.3.
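
As a hedged sketch (assuming the helper declarations in <linux/aer.h>; the
example_ names are hypothetical), a driver's probe routine might enable AER
like this:

#include <linux/aer.h>
#include <linux/pci.h>

static int example_aer_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int err = pci_enable_device(pdev);

	if (err)
		return err;

	/* ask the device to send AER error messages to the Root Port */
	if (pci_enable_pcie_error_reporting(pdev))
		dev_warn(&pdev->dev, "could not enable AER reporting\n");

	/* ... remaining device setup ... */
	return 0;
}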
133
1343.2 Provide callbacks
135
1363.2.1 callback reset_link to reset the PCI Express link
137
138This callback is used to reset the PCI Express physical link when a
139fatal error happens. The Root Port AER service driver provides a
140default reset_link function, but different upstream ports might
141require different ways to reset the PCI Express link, so all
142upstream ports should provide their own reset_link functions.
143
144In struct pcie_port_service_driver, a new pointer, reset_link, is
145added.
146
147pci_ers_result_t (*reset_link) (struct pci_dev *dev);
148
149Section 3.2.2.2 provides more detailed info on when to call
150reset_link.
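
A sketch of what such a service driver might provide is shown below; the
example_ names and the actual reset mechanism are hypothetical, and only the
callback prototype comes from the declaration above (assuming struct
pcie_port_service_driver from <linux/pcieport_if.h>):

#include <linux/pci.h>
#include <linux/pcieport_if.h>

/* hypothetical reset_link callback for an upstream port service driver */
static pci_ers_result_t example_reset_link(struct pci_dev *dev)
{
	/* perform whatever device specific sequence resets this link */
	return PCI_ERS_RESULT_RECOVERED;
}

static struct pcie_port_service_driver example_port_driver = {
	.name		= "example_port_service",
	/* .port_type, .service, .probe, .remove, etc. omitted in this sketch */
	.reset_link	= example_reset_link,
};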
151
1523.2.2 PCI error-recovery callbacks
153
154The PCI Express AER Root driver uses error callbacks to coordinate
155with downstream device drivers associated with a hierarchy in question
156when performing error recovery actions.
157
158The data structure pci_driver has a pointer, err_handler, that points
159to a struct pci_error_handlers, which consists of several callback
160function pointers. The AER driver follows the rules defined in
161pci-error-recovery.txt except for the PCI Express specific parts (e.g.
162reset_link). Please refer to pci-error-recovery.txt for detailed
163definitions of the callbacks.
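
For illustration, a minimal sketch of how an endpoint driver might wire this
up; the example_ callbacks are hypothetical, and a real driver would also
quiesce and restart its own hardware in them:

#include <linux/pci.h>

static pci_ers_result_t example_error_detected(struct pci_dev *pdev,
					       enum pci_channel_state state)
{
	if (state == pci_channel_io_frozen)
		return PCI_ERS_RESULT_NEED_RESET;	/* fatal: request a reset */
	return PCI_ERS_RESULT_CAN_RECOVER;		/* non-fatal: mmio_enabled is next */
}

static void example_resume(struct pci_dev *pdev)
{
	/* restart device I/O once recovery has completed */
}

static struct pci_error_handlers example_err_handler = {
	.error_detected	= example_error_detected,
	.resume		= example_resume,
};

/* in the driver's struct pci_driver:  .err_handler = &example_err_handler */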
164
165The sections below specify when the error callback functions are called.
166
1673.2.2.1 Correctable errors
168
169Correctable errors have no impact on the functionality of
170the interface. The PCI Express protocol can recover without any
171software intervention or any loss of data. These errors do not
172require any recovery actions. The AER driver clears the device's
173correctable error status register accordingly and logs these errors.
174
1753.2.2.2 Non-correctable (non-fatal and fatal) errors
176
177If an error message indicates a non-fatal error, performing a link reset
178upstream is not required. The AER driver calls error_detected(dev,
179pci_channel_io_normal) on all drivers associated with the hierarchy in
180question. For example, consider
181EndPoint<==>DownstreamPort B<==>UpstreamPort A<==>RootPort.
182If Upstream Port A captures an AER error, the hierarchy consists of
183Downstream Port B and the EndPoint.
184
185A driver may return PCI_ERS_RESULT_CAN_RECOVER,
186PCI_ERS_RESULT_DISCONNECT, or PCI_ERS_RESULT_NEED_RESET, depending on
187whether it can recover; if it can, the AER driver calls mmio_enabled next.
188
189If an error message indicates a fatal error, the kernel broadcasts
190error_detected(dev, pci_channel_io_frozen) to all drivers within
191the hierarchy in question. Then, performing a link reset upstream is
192necessary. As different kinds of devices might use different approaches
193to reset a link, the AER port service driver is required to provide the
194function that resets the link. First, the kernel checks whether the
195upstream component has an AER driver. If it does, the kernel uses that
196driver's reset_link callback. If the upstream component has no AER driver
197and the port is a Downstream Port, the AER driver of the Root Port
198that reported the error is used instead. Upstream Ports, however,
199should provide their own AER service drivers with a reset_link
200function. If error_detected returns PCI_ERS_RESULT_CAN_RECOVER and
201reset_link returns PCI_ERS_RESULT_RECOVERED, the error handling goes
202to mmio_enabled.
203
2043.3 helper functions
205
2063.3.1 int pci_find_aer_capability(struct pci_dev *dev);
207pci_find_aer_capability locates the PCI Express AER capability
208in the device configuration space. If the device doesn't support
209PCI-Express AER, the function returns 0.
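
A small sketch of checking for the capability before configuring it
(assuming the declaration in <linux/aer.h>; the example_ name is made up):

#include <linux/aer.h>
#include <linux/pci.h>

static int example_has_aer(struct pci_dev *pdev)
{
	/* pci_find_aer_capability() returns the position of the AER
	 * capability in config space, or 0 if it isn't implemented */
	return pci_find_aer_capability(pdev) != 0;
}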
210
2113.3.2 int pci_enable_pcie_error_reporting(struct pci_dev *dev);
212pci_enable_pcie_error_reporting enables the device to send error
213messages to the Root Port when an error is detected. Note that devices
214don't enable error reporting by default, so device drivers need to
215call this function to enable it.
216
2173.3.3 int pci_disable_pcie_error_reporting(struct pci_dev *dev);
218pci_disable_pcie_error_reporting prevents the device from sending
219error messages to the Root Port when an error is detected.
220
2213.3.4 int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev);
222pci_cleanup_aer_uncorrect_error_status cleans up the device's
223uncorrectable error status register.
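
Putting the helpers together, a hedged sketch of a slot_reset handler that
clears stale error status after the device has been reset (the example_ name
is hypothetical):

#include <linux/aer.h>
#include <linux/pci.h>

static pci_ers_result_t example_slot_reset(struct pci_dev *pdev)
{
	if (pci_enable_device(pdev))
		return PCI_ERS_RESULT_DISCONNECT;

	pci_set_master(pdev);

	/* clear the uncorrectable error status left over from the error */
	pci_cleanup_aer_uncorrect_error_status(pdev);

	return PCI_ERS_RESULT_RECOVERED;
}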
224
2253.4 Frequently Asked Questions
226
227Q: What happens if a PCI Express device driver does not provide an
228error recovery handler (pci_driver->err_handler is equal to NULL)?
229
230A: The devices attached to the driver won't be recovered. If the
231error is fatal, the kernel will print out warning messages. Please refer
232to section 3 for more information.
233
234Q: What happens if an upstream port service driver does not provide
235callback reset_link?
236
237A: Fatal error recovery will fail if the errors are reported by the
238upstream ports that are bound to that service driver.
239
240Q: How does this infrastructure deal with a driver that is not PCI
241Express aware?
242
243A: This infrastructure calls the error callback functions of the
244driver when an error happens. But if the driver is not aware of
245PCI Express, the device might not report its own errors to the
246Root Port.
247
248Q: What modifications will that driver need to make it compatible
249with the PCI Express AER Root driver?
250
251A: It could call the helper functions to enable AER in the device and
252clean up the uncorrectable error status register. Please refer to section 3.3.
253