Diffstat (limited to 'Documentation/PCI')
 -rw-r--r--  Documentation/PCI/MSI-HOWTO.txt       814
 -rw-r--r--  Documentation/PCI/pci-iov-howto.txt    99
 2 files changed, 431 insertions, 482 deletions
diff --git a/Documentation/PCI/MSI-HOWTO.txt b/Documentation/PCI/MSI-HOWTO.txt
index 256defd7e174..dcf7acc720e1 100644
--- a/Documentation/PCI/MSI-HOWTO.txt
+++ b/Documentation/PCI/MSI-HOWTO.txt
@@ -4,506 +4,356 @@
 Revised Feb 12, 2004 by Martine Silbermann
	email: Martine.Silbermann@hp.com
 Revised Jun 25, 2004 by Tom L Nguyen
 Revised Jul 9, 2008 by Matthew Wilcox <willy@linux.intel.com>
	Copyright 2003, 2008 Intel Corporation

1. About this guide

This guide describes the basics of Message Signaled Interrupts (MSIs),
the advantages of using MSI over traditional interrupt mechanisms, how
to change your driver to use MSI or MSI-X and some basic diagnostics to
try if a device doesn't support MSIs.


2. What are MSIs?

A Message Signaled Interrupt is a write from the device to a special
address which causes an interrupt to be received by the CPU.

The MSI capability was first specified in PCI 2.2 and was later enhanced
in PCI 3.0 to allow each interrupt to be masked individually.  The MSI-X
capability was also introduced with PCI 3.0.  It supports more interrupts
per device than MSI and allows interrupts to be independently configured.

Devices may support both MSI and MSI-X, but only one can be enabled at
a time.


3. Why use MSIs?

There are three reasons why using MSIs can give an advantage over
traditional pin-based interrupts.

Pin-based PCI interrupts are often shared amongst several devices.
To support this, the kernel must call each interrupt handler associated
with an interrupt, which leads to reduced performance for the system as
a whole.  MSIs are never shared, so this problem cannot arise.

When a device writes data to memory, then raises a pin-based interrupt,
it is possible that the interrupt may arrive before all the data has
arrived in memory (this becomes more likely with devices behind PCI-PCI
bridges).  In order to ensure that all the data has arrived in memory,
the interrupt handler must read a register on the device which raised
the interrupt.  PCI transaction ordering rules require that all the data
arrives in memory before the value can be returned from the register.
Using MSIs avoids this problem as the interrupt-generating write cannot
pass the data writes, so by the time the interrupt is raised, the driver
knows that all the data has arrived in memory.

PCI devices can only support a single pin-based interrupt per function.
Often drivers have to query the device to find out what event has
occurred, slowing down interrupt handling for the common case.  With
MSIs, a device can support more interrupts, allowing each interrupt
to be specialised to a different purpose.  One possible design gives
infrequent conditions (such as errors) their own interrupt which allows
the driver to handle the normal interrupt handling path more efficiently.
Other possible designs include giving one interrupt to each packet queue
in a network card or each port in a storage controller.


4. How to use MSIs

PCI devices are initialised to use pin-based interrupts.  The device
driver has to set up the device to use MSI or MSI-X.  Not all machines
support MSIs correctly, and for those machines, the APIs described below
will simply fail and the device will continue to use pin-based interrupts.

4.1 Include kernel support for MSIs

To support MSI or MSI-X, the kernel must be built with the CONFIG_PCI_MSI
option enabled.  This option is only available on some architectures,
and it may depend on some other options also being set.  For example,
on x86, you must also enable X86_UP_APIC or SMP in order to see the
CONFIG_PCI_MSI option.

4.2 Using MSI

Most of the hard work is done for the driver in the PCI layer.  It simply
has to request that the PCI layer set up the MSI capability for this
device.

4.2.1 pci_enable_msi

int pci_enable_msi(struct pci_dev *dev)

A successful call will allocate ONE interrupt to the device, regardless
of how many MSIs the device supports.  The device will be switched from
pin-based interrupt mode to MSI mode.  The dev->irq number is changed
to a new number which represents the message signaled interrupt.
This function should be called before the driver calls request_irq()
since enabling MSIs disables the pin-based IRQ and the driver will not
receive interrupts on the old interrupt.
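
As a rough sketch of that ordering (foo_adapter, foo_interrupt and the
"foo" name are made up for illustration; they are not part of any real
driver), a probe path might do:

static irqreturn_t foo_interrupt(int irq, void *data)
{
        struct foo_adapter *adapter = data;

        /* read the device's status and handle the event ... */
        return IRQ_HANDLED;
}

static int foo_setup_irq(struct foo_adapter *adapter)
{
        struct pci_dev *pdev = adapter->pdev;
        int rc, have_msi;

        /* Ask for an MSI first.  If that fails, dev->irq still holds the
         * pin-based interrupt number, so we simply carry on with it. */
        have_msi = !pci_enable_msi(pdev);
        if (!have_msi)
                dev_info(&pdev->dev, "MSI not available, using pin-based IRQ\n");

        rc = request_irq(pdev->irq, foo_interrupt,
                         have_msi ? 0 : IRQF_SHARED, "foo", adapter);
        if (rc && have_msi)
                pci_disable_msi(pdev);
        return rc;
}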

4.2.2 pci_enable_msi_block

int pci_enable_msi_block(struct pci_dev *dev, int count)

This variation on the above call allows a device driver to request multiple
MSIs.  The MSI specification only allows interrupts to be allocated in
powers of two, up to a maximum of 2^5 (32).

If this function returns 0, it has succeeded in allocating at least as many
interrupts as the driver requested (it may have allocated more in order
to satisfy the power-of-two requirement).  In this case, the function
enables MSI on this device and updates dev->irq to be the lowest of
the new interrupts assigned to it.  The other interrupts assigned to
the device are in the range dev->irq to dev->irq + count - 1.

If this function returns a negative number, it indicates an error and
the driver should not attempt to request any more MSI interrupts for
this device.  If this function returns a positive number, it will be
less than 'count' and indicate the number of interrupts that could have
been allocated.  In neither case will the irq value have been updated,
nor will the device have been switched into MSI mode.

The device driver must decide what action to take if
pci_enable_msi_block() returns a value less than the number asked for.
Some devices can make use of fewer interrupts than the maximum they
request; in this case the driver should call pci_enable_msi_block()
again.  Note that it is not guaranteed to succeed, even when the
'count' has been reduced to the value returned from a previous call to
pci_enable_msi_block().  This is because there are multiple constraints
on the number of vectors that can be allocated; pci_enable_msi_block()
will return as soon as it finds any constraint that doesn't allow the
call to succeed.
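
A retry loop in the style of the MSI-X example later in this document
could look like the sketch below (foo_adapter and the minimum of one
interrupt are illustrative assumptions, not taken from a real driver):

static int foo_enable_msi_block(struct foo_adapter *adapter, int nvec)
{
        int rc;

        while (nvec >= 1) {
                rc = pci_enable_msi_block(adapter->pdev, nvec);
                if (rc == 0)
                        return nvec;    /* at least nvec interrupts allocated */
                if (rc < 0)
                        return rc;      /* hard error, give up */
                nvec = rc;              /* retry with the number suggested */
        }

        return -ENOSPC;
}

On success, the interrupts are dev->irq .. dev->irq + nvec - 1 and the
driver calls request_irq() on each of them.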

4.2.3 pci_disable_msi

void pci_disable_msi(struct pci_dev *dev)

This function should be used to undo the effect of pci_enable_msi() or
pci_enable_msi_block().  Calling it restores dev->irq to the pin-based
interrupt number and frees the previously allocated message signaled
interrupt(s).  The interrupt may subsequently be assigned to another
device, so drivers should not cache the value of dev->irq.

A device driver must always call free_irq() on the interrupt(s)
for which it has called request_irq() before calling this function.
Failure to do so will result in a BUG_ON(), the device will be left with
MSI enabled and will leak its vector.
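
The teardown therefore mirrors the setup in reverse.  A minimal sketch,
for the case where MSI was enabled (same hypothetical foo_adapter as
above):

static void foo_teardown_irq(struct foo_adapter *adapter)
{
        struct pci_dev *pdev = adapter->pdev;

        free_irq(pdev->irq, adapter);   /* must come before pci_disable_msi() */
        pci_disable_msi(pdev);
        /* pdev->irq now refers to the pin-based interrupt again */
}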

4.3 Using MSI-X

The MSI-X capability is much more flexible than the MSI capability.
It supports up to 2048 interrupts, each of which can be controlled
independently.  To support this flexibility, drivers must use an array of
`struct msix_entry':

struct msix_entry {
        u16     vector; /* kernel uses to write alloc vector */
        u16     entry;  /* driver uses to specify entry */
};

This allows for the device to use these interrupts in a sparse fashion;
for example it could use interrupts 3 and 1027 and allocate only a
two-element array.  The driver is expected to fill in the 'entry' value
in each element of the array to indicate which entries it wants the kernel
to assign interrupts for.  It is invalid to fill in two entries with the
same number.
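
A sketch of setting up such an array before enabling MSI-X (the
foo_adapter fields and the use of consecutive entry numbers are just one
possible arrangement):

static int foo_init_msix_entries(struct foo_adapter *adapter, int nvec)
{
        int i;

        adapter->msix_entries = kcalloc(nvec, sizeof(struct msix_entry),
                                        GFP_KERNEL);
        if (!adapter->msix_entries)
                return -ENOMEM;

        /* Request table entries 0..nvec-1; any set of distinct entry
         * numbers supported by the device would do equally well. */
        for (i = 0; i < nvec; i++)
                adapter->msix_entries[i].entry = i;

        return 0;
}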

4.3.1 pci_enable_msix

int pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, int nvec)

Calling this function asks the PCI subsystem to allocate 'nvec' MSIs.
The 'entries' argument is a pointer to an array of msix_entry structs
which should be at least 'nvec' entries in size.  On success, the
function will return 0 and the device will have been switched into
MSI-X interrupt mode.  The 'vector' elements in each entry will have
been filled in with the interrupt number.  The driver should then call
request_irq() for each 'vector' that it decides to use.

If this function returns a negative number, it indicates an error and
the driver should not attempt to allocate any more MSI-X interrupts for
this device.  If it returns a positive number, it indicates the maximum
number of interrupt vectors that could have been allocated.  See the
example below.

This function, in contrast with pci_enable_msi(), does not adjust
dev->irq.  The device will not generate interrupts for this interrupt
number once MSI-X is enabled.  The device driver is responsible for
keeping track of the interrupts assigned to the MSI-X vectors so it can
free them again later.

Device drivers should normally call this function once per device
during the initialization phase.

It is ideal if drivers can cope with a variable number of MSI-X
interrupts; there are many reasons why the platform may not be able to
provide the exact number a driver asks for.

A request loop to achieve that might look like:

static int foo_driver_enable_msix(struct foo_adapter *adapter, int nvec)
{
        int rc;

        while (nvec >= FOO_DRIVER_MINIMUM_NVEC) {
                rc = pci_enable_msix(adapter->pdev,
                                     adapter->msix_entries, nvec);
                if (rc > 0)
                        nvec = rc;
                else
                        return rc;
        }

        return -ENOSPC;
}
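
Once the loop above succeeds, the driver requests an IRQ for each vector
it was given.  A sketch (foo_msix_interrupt and adapter->num_vectors are
illustrative; the loop above would need to record how many vectors it
actually obtained):

static int foo_request_msix_irqs(struct foo_adapter *adapter)
{
        int i, rc;

        for (i = 0; i < adapter->num_vectors; i++) {
                rc = request_irq(adapter->msix_entries[i].vector,
                                 foo_msix_interrupt, 0, "foo", adapter);
                if (rc)
                        goto undo;
        }
        return 0;

undo:
        while (--i >= 0)
                free_irq(adapter->msix_entries[i].vector, adapter);
        pci_disable_msix(adapter->pdev);
        return rc;
}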

4.3.2 pci_disable_msix

void pci_disable_msix(struct pci_dev *dev)

This API should be used to undo the effect of pci_enable_msix().  It frees
the previously allocated message signaled interrupts.  The interrupts may
subsequently be assigned to another device, so drivers should not cache
the value of the 'vector' elements over a call to pci_disable_msix().

A device driver must always call free_irq() on the interrupt(s)
for which it has called request_irq() before calling this function.
Failure to do so will result in a BUG_ON(), the device will be left with
MSI-X enabled and will leak its vectors.
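
A matching teardown for the MSI-X case (same assumptions as the sketch
in 4.3.1) would be:

static void foo_free_msix_irqs(struct foo_adapter *adapter)
{
        int i;

        for (i = 0; i < adapter->num_vectors; i++)
                free_irq(adapter->msix_entries[i].vector, adapter);
        pci_disable_msix(adapter->pdev);
}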

4.3.3 The MSI-X Table

The MSI-X capability specifies a BAR and offset within that BAR for the
MSI-X Table.  This address is mapped by the PCI subsystem, and should not
be accessed directly by the device driver.  If the driver wishes to
mask or unmask an interrupt, it should call disable_irq() / enable_irq().

4.4 Handling devices implementing both MSI and MSI-X capabilities

If a device implements both MSI and MSI-X capabilities, it can
run in either MSI mode or MSI-X mode but not both simultaneously.
This is a requirement of the PCI spec, and it is enforced by the
PCI layer.  Calling pci_enable_msi() when MSI-X is already enabled or
pci_enable_msix() when MSI is already enabled will result in an error.
If a device driver wishes to switch between MSI and MSI-X at runtime,
it must first quiesce the device, then switch it back to pin-interrupt
mode, before calling pci_enable_msi() or pci_enable_msix() and resuming
operation.  This is not expected to be a common operation but may be
useful for debugging or testing during development.
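
A sketch of such a switch from MSI-X to MSI (foo_stop_device(),
foo_start_device() and the other foo_* helpers are hypothetical
stand-ins for whatever a particular driver uses to quiesce and restart
its hardware; error handling is trimmed for brevity):

static int foo_switch_to_msi(struct foo_adapter *adapter)
{
        int rc;

        foo_stop_device(adapter);       /* device must raise no interrupts now */
        foo_free_msix_irqs(adapter);    /* free_irq() + pci_disable_msix() */

        rc = pci_enable_msi(adapter->pdev);
        if (rc == 0)
                rc = request_irq(adapter->pdev->irq, foo_interrupt, 0,
                                 "foo", adapter);
        if (rc == 0)
                foo_start_device(adapter);
        return rc;
}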

4.5 Considerations when using MSIs

4.5.1 Choosing between MSI-X and MSI

If your device supports both MSI-X and MSI capabilities, you should use
the MSI-X facilities in preference to the MSI facilities.  As mentioned
above, MSI-X supports any number of interrupts between 1 and 2048.
In contrast, MSI is restricted to a maximum of 32 interrupts (and the
number must be a power of two).  In addition, the MSI interrupt vectors
must be allocated consecutively, so the system may not be able to
allocate as many vectors for MSI as it could for MSI-X.  On some
platforms, MSI interrupts must all be targeted at the same set of CPUs
whereas MSI-X interrupts can all be targeted at different CPUs.

4.5.2 Spinlocks

Most device drivers have a per-device spinlock which is taken in the
interrupt handler.  With pin-based interrupts or a single MSI, it is not
necessary to disable interrupts (Linux guarantees the same interrupt will
not be re-entered).  If a device uses multiple interrupts, the driver
must disable interrupts while the lock is held.  If the device sends
a different interrupt, the driver will deadlock trying to recursively
acquire the spinlock.

There are two solutions.  The first is to take the lock with
spin_lock_irqsave() or spin_lock_irq() (see
Documentation/DocBook/kernel-locking).  The second is to specify
IRQF_DISABLED to request_irq() so that the kernel runs the entire
interrupt routine with interrupts disabled.

If your MSI interrupt routine does not hold the lock for the whole time
it is running, the first solution may be best.  The second solution is
normally preferred as it avoids making two transitions from interrupt
disabled to enabled and back again.
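
As a sketch of the first solution (adapter->lock is assumed to be the
per-device spinlock mentioned above):

static irqreturn_t foo_msix_interrupt(int irq, void *data)
{
        struct foo_adapter *adapter = data;
        unsigned long flags;

        /* Disable local interrupts while the lock is held so another of
         * this device's interrupts cannot deadlock against us. */
        spin_lock_irqsave(&adapter->lock, flags);
        /* ... handle the event ... */
        spin_unlock_irqrestore(&adapter->lock, flags);

        return IRQ_HANDLED;
}

The second solution corresponds to passing IRQF_DISABLED in the flags
argument of request_irq() shown earlier.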

4.6 How to tell whether MSI/MSI-X is enabled on a device

Using 'lspci -v' (as root) may show some devices with "MSI", "Message
Signalled Interrupts" or "MSI-X" capabilities.  Each of these capabilities
has an 'Enable' flag which will be followed with either "+" (enabled)
or "-" (disabled).


5. MSI quirks

Several PCI chipsets or devices are known not to support MSIs.
The PCI stack provides three ways to disable MSIs:

1. globally
2. on all devices behind a specific bridge
3. on a single device

5.1. Disabling MSIs globally

Some host chipsets simply don't support MSIs properly.  If we're
lucky, the manufacturer knows this and has indicated it in the ACPI
FADT table.  In this case, Linux will automatically disable MSIs.
Some boards don't include this information in the table and so we have
to detect them ourselves.  The complete list of these is found near the
quirk_disable_all_msi() function in drivers/pci/quirks.c.

If you have a board which has problems with MSIs, you can pass pci=nomsi
on the kernel command line to disable MSIs on all devices.  It would be
in your best interests to report the problem to linux-pci@vger.kernel.org
including a full 'lspci -v' so we can add the quirks to the kernel.

5.2. Disabling MSIs below a bridge

Some PCI bridges are not able to route MSIs between busses properly.
In this case, MSIs must be disabled on all devices behind the bridge.

Some bridges allow you to enable MSIs by changing some bits in their
PCI configuration space (especially the Hypertransport chipsets such
as the nVidia nForce and Serverworks HT2000).  As with host chipsets,
Linux mostly knows about them and automatically enables MSIs if it can.
If you have a bridge which Linux doesn't yet know about, you can enable
MSIs in configuration space using whatever method you know works, then
enable MSIs on that bridge by doing:

        echo 1 > /sys/bus/pci/devices/$bridge/msi_bus

where $bridge is the PCI address of the bridge you've enabled (eg
0000:00:0e.0).

To disable MSIs, echo 0 instead of 1.  Changing this value should be
done with caution as it can break interrupt handling for all devices
below this bridge.

Again, please notify linux-pci@vger.kernel.org of any bridges that need
special handling.

5.3. Disabling MSIs on a single device

Some devices are known to have faulty MSI implementations.  Usually this
is handled in the individual device driver, but occasionally it's necessary
to handle this with a quirk.  Some drivers have an option to disable use
of MSI.  While this is a convenient workaround for the driver author,
it is not good practice, and should not be emulated.

5.4. Finding why MSIs are disabled on a device

From the above three sections, you can see that there are many reasons
why MSIs may not be enabled for a given device.  Your first step should
be to examine your dmesg carefully to determine whether MSIs are enabled
for your machine.  You should also check your .config to be sure you
have enabled CONFIG_PCI_MSI.

Then, 'lspci -t' gives the list of bridges above a device.  Reading
/sys/bus/pci/devices/*/msi_bus will tell you whether MSIs are enabled (1)
or disabled (0).  If 0 is found in any of the msi_bus files belonging
to bridges between the PCI root and the device, MSIs are disabled.

It is also worth checking the device driver to see whether it supports MSIs.
For example, it may contain calls to pci_enable_msi(), pci_enable_msix() or
pci_enable_msi_block().

diff --git a/Documentation/PCI/pci-iov-howto.txt b/Documentation/PCI/pci-iov-howto.txt
new file mode 100644
index 000000000000..fc73ef5d65b8
--- /dev/null
+++ b/Documentation/PCI/pci-iov-howto.txt
@@ -0,0 +1,99 @@
                PCI Express I/O Virtualization Howto
                Copyright (C) 2009 Intel Corporation
                Yu Zhao <yu.zhao@intel.com>


1. Overview

1.1 What is SR-IOV

Single Root I/O Virtualization (SR-IOV) is a PCI Express Extended
capability which makes one physical device appear as multiple virtual
devices.  The physical device is referred to as a Physical Function (PF)
while the virtual devices are referred to as Virtual Functions (VFs).
Allocation of VFs can be dynamically controlled by the PF via
registers encapsulated in the capability.  By default, this feature is
not enabled and the PF behaves as a traditional PCIe device.  Once it's
turned on, each VF's PCI configuration space can be accessed by its own
Bus, Device and Function Number (Routing ID).  Each VF also has PCI
Memory Space, which is used to map its register set.  The VF device
driver operates on this register set so the VF can be functional and
appear as a real PCI device.

2. User Guide

2.1 How can I enable SR-IOV capability

The device driver (PF driver) controls the enabling and disabling
of the capability via an API provided by the SR-IOV core.  If the
hardware has SR-IOV capability, loading its PF driver will enable it
and all VFs associated with the PF.

2.2 How can I use the Virtual Functions

VFs are treated as hot-plugged PCI devices in the kernel, so they
should be able to work in the same way as real PCI devices.  A VF
requires a device driver, just like a normal PCI device.

3. Developer Guide

3.1 SR-IOV API

To enable SR-IOV capability:
        int pci_enable_sriov(struct pci_dev *dev, int nr_virtfn);
        'nr_virtfn' is the number of VFs to be enabled.

To disable SR-IOV capability:
        void pci_disable_sriov(struct pci_dev *dev);

To notify SR-IOV core of Virtual Function Migration:
        irqreturn_t pci_sriov_migration(struct pci_dev *dev);

3.2 Usage example

The following piece of code illustrates the usage of the SR-IOV API.

static int __devinit dev_probe(struct pci_dev *dev, const struct pci_device_id *id)
{
        pci_enable_sriov(dev, NR_VIRTFN);

        ...

        return 0;
}

static void __devexit dev_remove(struct pci_dev *dev)
{
        pci_disable_sriov(dev);

        ...
}

static int dev_suspend(struct pci_dev *dev, pm_message_t state)
{
        ...

        return 0;
}

static int dev_resume(struct pci_dev *dev)
{
        ...

        return 0;
}

static void dev_shutdown(struct pci_dev *dev)
{
        ...
}

static struct pci_driver dev_driver = {
        .name =         "SR-IOV Physical Function driver",
        .id_table =     dev_id_table,
        .probe =        dev_probe,
        .remove =       __devexit_p(dev_remove),
        .suspend =      dev_suspend,
        .resume =       dev_resume,
        .shutdown =     dev_shutdown,
};
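
pci_enable_sriov() returns 0 on success and a negative errno on failure,
so a real PF driver would normally check the result rather than ignoring
it as the skeleton above does.  One possible variation of the probe body
(NR_VIRTFN as above):

        rc = pci_enable_sriov(dev, NR_VIRTFN);
        if (rc)
                dev_warn(&dev->dev,
                         "failed to enable SR-IOV (%d), continuing without VFs\n",
                         rc);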