author     FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>   2010-03-10 18:23:42 -0500
committer  Linus Torvalds <torvalds@linux-foundation.org>    2010-03-12 18:52:43 -0500
commit     216bf58f4092df33262bea498f0460657f4842a4 (patch)
tree       54f802a04b87264e578b17655c4a84bed505c66e /Documentation/PCI/PCI-DMA-mapping.txt
parent     5f3cd1e0bb452c31a306a3e764514ea2eaf7d2e0 (diff)
Documentation: convert PCI-DMA-mapping.txt to use the generic DMA API
- replace the PCI DMA API (i.e. pci_dma_*) with the generic DMA API.
- make the document more generic (use the PCI specific explanation as
an example).
[akpm@linux-foundation.org: fix things Randy noticed]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: "David S. Miller" <davem@davemloft.net>
Reviewed-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'Documentation/PCI/PCI-DMA-mapping.txt')
-rw-r--r--  Documentation/PCI/PCI-DMA-mapping.txt  352
1 file changed, 172 insertions(+), 180 deletions(-)
diff --git a/Documentation/PCI/PCI-DMA-mapping.txt b/Documentation/PCI/PCI-DMA-mapping.txt
index ecad88d9fe59..52618ab069ad 100644
--- a/Documentation/PCI/PCI-DMA-mapping.txt
+++ b/Documentation/PCI/PCI-DMA-mapping.txt
@@ -1,12 +1,12 @@
1 | Dynamic DMA mapping | 1 | Dynamic DMA mapping Guide |
2 | =================== | 2 | ========================= |
3 | 3 | ||
4 | David S. Miller <davem@redhat.com> | 4 | David S. Miller <davem@redhat.com> |
5 | Richard Henderson <rth@cygnus.com> | 5 | Richard Henderson <rth@cygnus.com> |
6 | Jakub Jelinek <jakub@redhat.com> | 6 | Jakub Jelinek <jakub@redhat.com> |
7 | 7 | ||
8 | This document describes the DMA mapping system in terms of the pci_ | 8 | This is a guide to device driver writers on how to use the DMA API |
9 | API. For a similar API that works for generic devices, see | 9 | with example pseudo-code. For a concise description of the API, see |
10 | DMA-API.txt. | 10 | DMA-API.txt. |
11 | 11 | ||
12 | Most of the 64bit platforms have special hardware that translates bus | 12 | Most of the 64bit platforms have special hardware that translates bus |
@@ -26,12 +26,15 @@ mapped only for the time they are actually used and unmapped after the DMA
26 | transfer. | 26 | transfer. |
27 | 27 | ||
28 | The following API will work of course even on platforms where no such | 28 | The following API will work of course even on platforms where no such |
29 | hardware exists, see e.g. arch/x86/include/asm/pci.h for how it is implemented on | 29 | hardware exists. |
30 | top of the virt_to_bus interface. | 30 | |
31 | Note that the DMA API works with any bus independent of the underlying | ||
32 | microprocessor architecture. You should use the DMA API rather than | ||
33 | the bus specific DMA API (e.g. pci_dma_*). | ||
31 | 34 | ||
32 | First of all, you should make sure | 35 | First of all, you should make sure |
33 | 36 | ||
34 | #include <linux/pci.h> | 37 | #include <linux/dma-mapping.h> |
35 | 38 | ||
36 | is in your driver. This file will obtain for you the definition of the | 39 | is in your driver. This file will obtain for you the definition of the |
37 | dma_addr_t (which can hold any valid DMA address for the platform) | 40 | dma_addr_t (which can hold any valid DMA address for the platform) |
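
For illustration, a minimal sketch of the driver-side declarations this gives you (the mydev names are hypothetical, not part of the document's example):

    #include <linux/dma-mapping.h>

    struct mydev_priv {
            struct device *dev;     /* generic device used for all DMA calls */
            void *rx_buf;           /* cpu virtual address of a DMA buffer */
            dma_addr_t rx_dma;      /* holds any valid DMA address for the platform */
    };
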
@@ -78,44 +81,43 @@ for you to DMA from/to.
78 | DMA addressing limitations | 81 | DMA addressing limitations |
79 | 82 | ||
80 | Does your device have any DMA addressing limitations? For example, is | 83 | Does your device have any DMA addressing limitations? For example, is |
81 | your device only capable of driving the low order 24-bits of address | 84 | your device only capable of driving the low order 24-bits of address? |
82 | on the PCI bus for SAC DMA transfers? If so, you need to inform the | 85 | If so, you need to inform the kernel of this fact. |
83 | PCI layer of this fact. | ||
84 | 86 | ||
85 | By default, the kernel assumes that your device can address the full | 87 | By default, the kernel assumes that your device can address the full |
86 | 32-bits in a SAC cycle. For a 64-bit DAC capable device, this needs | 88 | 32-bits. For a 64-bit capable device, this needs to be increased. |
87 | to be increased. And for a device with limitations, as discussed in | 89 | And for a device with limitations, as discussed in the previous |
88 | the previous paragraph, it needs to be decreased. | 90 | paragraph, it needs to be decreased. |
89 | 91 | ||
90 | pci_alloc_consistent() by default will return 32-bit DMA addresses. | 92 | Special note about PCI: PCI-X specification requires PCI-X devices to |
91 | PCI-X specification requires PCI-X devices to support 64-bit | 93 | support 64-bit addressing (DAC) for all transactions. And at least |
92 | addressing (DAC) for all transactions. And at least one platform (SGI | 94 | one platform (SGI SN2) requires 64-bit consistent allocations to |
93 | SN2) requires 64-bit consistent allocations to operate correctly when | 95 | operate correctly when the IO bus is in PCI-X mode. |
94 | the IO bus is in PCI-X mode. Therefore, like with pci_set_dma_mask(), | 96 | |
95 | it's good practice to call pci_set_consistent_dma_mask() to set the | 97 | For correct operation, you must interrogate the kernel in your device |
96 | appropriate mask even if your device only supports 32-bit DMA | 98 | probe routine to see if the DMA controller on the machine can properly |
97 | (default) and especially if it's a PCI-X device. | 99 | support the DMA addressing limitation your device has. It is good |
98 | 100 | style to do this even if your device holds the default setting, | |
99 | For correct operation, you must interrogate the PCI layer in your | ||
100 | device probe routine to see if the PCI controller on the machine can | ||
101 | properly support the DMA addressing limitation your device has. It is | ||
102 | good style to do this even if your device holds the default setting, | ||
103 | because this shows that you did think about these issues wrt. your | 101 | because this shows that you did think about these issues wrt. your |
104 | device. | 102 | device. |
105 | 103 | ||
106 | The query is performed via a call to pci_set_dma_mask(): | 104 | The query is performed via a call to dma_set_mask(): |
107 | 105 | ||
108 | int pci_set_dma_mask(struct pci_dev *pdev, u64 device_mask); | 106 | int dma_set_mask(struct device *dev, u64 mask); |
109 | 107 | ||
110 | The query for consistent allocations is performed via a call to | 108 | The query for consistent allocations is performed via a call to |
111 | pci_set_consistent_dma_mask(): | 109 | dma_set_coherent_mask(): |
112 | 110 | ||
113 | int pci_set_consistent_dma_mask(struct pci_dev *pdev, u64 device_mask); | 111 | int dma_set_coherent_mask(struct device *dev, u64 mask); |
114 | 112 | ||
115 | Here, pdev is a pointer to the PCI device struct of your device, and | 113 | Here, dev is a pointer to the device struct of your device, and mask |
116 | device_mask is a bit mask describing which bits of a PCI address your | 114 | is a bit mask describing which bits of an address your device |
117 | device supports. It returns zero if your card can perform DMA | 115 | supports. It returns zero if your card can perform DMA properly on |
118 | properly on the machine given the address mask you provided. | 116 | the machine given the address mask you provided. In general, the |
117 | device struct of your device is embedded in the bus specific device | ||
118 | struct of your device. For example, a pointer to the device struct of | ||
119 | your PCI device is pdev->dev (pdev is a pointer to the PCI device | ||
120 | struct of your device). | ||
119 | 121 | ||
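
For example, a PCI driver's probe routine might look like this hedged sketch (mydev_probe and the error label are hypothetical); the point is that the generic struct device lives inside the bus specific one:

    static int mydev_probe(struct pci_dev *pdev,
                           const struct pci_device_id *id)
    {
            struct device *dev = &pdev->dev;  /* generic device within pdev */

            if (dma_set_mask(dev, DMA_BIT_MASK(32)))
                    goto ignore_this_device;
            if (dma_set_coherent_mask(dev, DMA_BIT_MASK(32)))
                    goto ignore_this_device;
            /* ... rest of probe ... */
            return 0;

    ignore_this_device:
            printk(KERN_WARNING "mydev: No suitable DMA available.\n");
            return -EIO;
    }
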
120 | If it returns non-zero, your device cannot perform DMA properly on | 122 | If it returns non-zero, your device cannot perform DMA properly on |
121 | this platform, and attempting to do so will result in undefined | 123 | this platform, and attempting to do so will result in undefined |
@@ -133,31 +135,30 @@ of your driver reports that performance is bad or that the device is not
133 | even detected, you can ask them for the kernel messages to find out | 135 | even detected, you can ask them for the kernel messages to find out |
134 | exactly why. | 136 | exactly why. |
135 | 137 | ||
136 | The standard 32-bit addressing PCI device would do something like | 138 | The standard 32-bit addressing device would do something like this: |
137 | this: | ||
138 | 139 | ||
139 | if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) { | 140 | if (dma_set_mask(dev, DMA_BIT_MASK(32))) { |
140 | printk(KERN_WARNING | 141 | printk(KERN_WARNING |
141 | "mydev: No suitable DMA available.\n"); | 142 | "mydev: No suitable DMA available.\n"); |
142 | goto ignore_this_device; | 143 | goto ignore_this_device; |
143 | } | 144 | } |
144 | 145 | ||
145 | Another common scenario is a 64-bit capable device. The approach | 146 | Another common scenario is a 64-bit capable device. The approach here |
146 | here is to try for 64-bit DAC addressing, but back down to a | 147 | is to try for 64-bit addressing, but back down to a 32-bit mask that |
147 | 32-bit mask should that fail. The PCI platform code may fail the | 148 | should not fail. The kernel may fail the 64-bit mask, not because the |
148 | 64-bit mask not because the platform is not capable of 64-bit | 149 | platform is not capable of 64-bit addressing. Rather, it may fail in |
149 | addressing. Rather, it may fail in this case simply because | 150 | this case simply because 32-bit addressing is done more efficiently |
150 | 32-bit SAC addressing is done more efficiently than DAC addressing. | 151 | than 64-bit addressing. For example, Sparc64 PCI SAC addressing is |
151 | Sparc64 is one platform which behaves in this way. | 152 | more efficient than DAC addressing. |
152 | 153 | ||
153 | Here is how you would handle a 64-bit capable device which can drive | 154 | Here is how you would handle a 64-bit capable device which can drive |
154 | all 64-bits when accessing streaming DMA: | 155 | all 64-bits when accessing streaming DMA: |
155 | 156 | ||
156 | int using_dac; | 157 | int using_dac; |
157 | 158 | ||
158 | if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) { | 159 | if (!dma_set_mask(dev, DMA_BIT_MASK(64))) { |
159 | using_dac = 1; | 160 | using_dac = 1; |
160 | } else if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) { | 161 | } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) { |
161 | using_dac = 0; | 162 | using_dac = 0; |
162 | } else { | 163 | } else { |
163 | printk(KERN_WARNING | 164 | printk(KERN_WARNING |
@@ -170,36 +171,36 @@ the case would look like this:
170 | 171 | ||
171 | int using_dac, consistent_using_dac; | 172 | int using_dac, consistent_using_dac; |
172 | 173 | ||
173 | if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) { | 174 | if (!dma_set_mask(dev, DMA_BIT_MASK(64))) { |
174 | using_dac = 1; | 175 | using_dac = 1; |
175 | consistent_using_dac = 1; | 176 | consistent_using_dac = 1; |
176 | pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)); | 177 | dma_set_coherent_mask(dev, DMA_BIT_MASK(64)); |
177 | } else if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) { | 178 | } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) { |
178 | using_dac = 0; | 179 | using_dac = 0; |
179 | consistent_using_dac = 0; | 180 | consistent_using_dac = 0; |
180 | pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)); | 181 | dma_set_coherent_mask(dev, DMA_BIT_MASK(32)); |
181 | } else { | 182 | } else { |
182 | printk(KERN_WARNING | 183 | printk(KERN_WARNING |
183 | "mydev: No suitable DMA available.\n"); | 184 | "mydev: No suitable DMA available.\n"); |
184 | goto ignore_this_device; | 185 | goto ignore_this_device; |
185 | } | 186 | } |
186 | 187 | ||
187 | pci_set_consistent_dma_mask() will always be able to set the same or a | 188 | dma_set_coherent_mask() will always be able to set the same or a |
188 | smaller mask as pci_set_dma_mask(). However for the rare case that a | 189 | smaller mask as dma_set_mask(). However for the rare case that a |
189 | device driver only uses consistent allocations, one would have to | 190 | device driver only uses consistent allocations, one would have to |
190 | check the return value from pci_set_consistent_dma_mask(). | 191 | check the return value from dma_set_coherent_mask(). |
191 | 192 | ||
192 | Finally, if your device can only drive the low 24-bits of | 193 | Finally, if your device can only drive the low 24-bits of |
193 | address during PCI bus mastering you might do something like: | 194 | address you might do something like: |
194 | 195 | ||
195 | if (pci_set_dma_mask(pdev, DMA_BIT_MASK(24))) { | 196 | if (dma_set_mask(dev, DMA_BIT_MASK(24))) { |
196 | printk(KERN_WARNING | 197 | printk(KERN_WARNING |
197 | "mydev: 24-bit DMA addressing not available.\n"); | 198 | "mydev: 24-bit DMA addressing not available.\n"); |
198 | goto ignore_this_device; | 199 | goto ignore_this_device; |
199 | } | 200 | } |
200 | 201 | ||
201 | When pci_set_dma_mask() is successful, and returns zero, the PCI layer | 202 | When dma_set_mask() is successful, and returns zero, the kernel saves |
202 | saves away this mask you have provided. The PCI layer will use this | 203 | away this mask you have provided. The kernel will use this |
203 | information later when you make DMA mappings. | 204 | information later when you make DMA mappings. |
204 | 205 | ||
205 | There is a case which we are aware of at this time, which is worth | 206 | There is a case which we are aware of at this time, which is worth |
@@ -208,7 +209,7 @@ functions (for example a sound card provides playback and record
208 | functions) and the various different functions have _different_ | 209 | functions) and the various different functions have _different_ |
209 | DMA addressing limitations, you may wish to probe each mask and | 210 | DMA addressing limitations, you may wish to probe each mask and |
210 | only provide the functionality which the machine can handle. It | 211 | only provide the functionality which the machine can handle. It |
211 | is important that the last call to pci_set_dma_mask() be for the | 212 | is important that the last call to dma_set_mask() be for the |
212 | most specific mask. | 213 | most specific mask. |
213 | 214 | ||
214 | Here is pseudo-code showing how this might be done: | 215 | Here is pseudo-code showing how this might be done: |
@@ -217,17 +218,17 @@ Here is pseudo-code showing how this might be done:
217 | #define RECORD_ADDRESS_BITS DMA_BIT_MASK(24) | 218 | #define RECORD_ADDRESS_BITS DMA_BIT_MASK(24) |
218 | 219 | ||
219 | struct my_sound_card *card; | 220 | struct my_sound_card *card; |
220 | struct pci_dev *pdev; | 221 | struct device *dev; |
221 | 222 | ||
222 | ... | 223 | ... |
223 | if (!pci_set_dma_mask(pdev, PLAYBACK_ADDRESS_BITS)) { | 224 | if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) { |
224 | card->playback_enabled = 1; | 225 | card->playback_enabled = 1; |
225 | } else { | 226 | } else { |
226 | card->playback_enabled = 0; | 227 | card->playback_enabled = 0; |
227 | printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n", | 228 | printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n", |
228 | card->name); | 229 | card->name); |
229 | } | 230 | } |
230 | if (!pci_set_dma_mask(pdev, RECORD_ADDRESS_BITS)) { | 231 | if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) { |
231 | card->record_enabled = 1; | 232 | card->record_enabled = 1; |
232 | } else { | 233 | } else { |
233 | card->record_enabled = 0; | 234 | card->record_enabled = 0; |
@@ -252,8 +253,8 @@ There are two types of DMA mappings:
252 | Think of "consistent" as "synchronous" or "coherent". | 253 | Think of "consistent" as "synchronous" or "coherent". |
253 | 254 | ||
254 | The current default is to return consistent memory in the low 32 | 255 | The current default is to return consistent memory in the low 32 |
255 | bits of the PCI bus space. However, for future compatibility you | 256 | bits of the bus space. However, for future compatibility you should |
256 | should set the consistent mask even if this default is fine for your | 257 | set the consistent mask even if this default is fine for your |
257 | driver. | 258 | driver. |
258 | 259 | ||
259 | Good examples of what to use consistent mappings for are: | 260 | Good examples of what to use consistent mappings for are: |
@@ -285,9 +286,9 @@ There are two types of DMA mappings:
285 | found in PCI bridges (such as by reading a register's value | 286 | found in PCI bridges (such as by reading a register's value |
286 | after writing it). | 287 | after writing it). |
287 | 288 | ||
288 | - Streaming DMA mappings which are usually mapped for one DMA transfer, | 289 | - Streaming DMA mappings which are usually mapped for one DMA |
289 | unmapped right after it (unless you use pci_dma_sync_* below) and for which | 290 | transfer, unmapped right after it (unless you use dma_sync_* below) |
290 | hardware can optimize for sequential accesses. | 291 | and for which hardware can optimize for sequential accesses. |
291 | 292 | ||
292 | Think of "streaming" as "asynchronous" or "outside the coherency | 293 | Think of "streaming" as "asynchronous" or "outside the coherency |
293 | domain". | 294 | domain". |
@@ -302,8 +303,8 @@ There are two types of DMA mappings:
302 | optimizations the hardware allows. To this end, when using | 303 | optimizations the hardware allows. To this end, when using |
303 | such mappings you must be explicit about what you want to happen. | 304 | such mappings you must be explicit about what you want to happen. |
304 | 305 | ||
305 | Neither type of DMA mapping has alignment restrictions that come | 306 | Neither type of DMA mapping has alignment restrictions that come from |
306 | from PCI, although some devices may have such restrictions. | 307 | the underlying bus, although some devices may have such restrictions. |
307 | Also, systems with caches that aren't DMA-coherent will work better | 308 | Also, systems with caches that aren't DMA-coherent will work better |
308 | when the underlying buffers don't share cache lines with other data. | 309 | when the underlying buffers don't share cache lines with other data. |
309 | 310 | ||
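
One simple way to honor the cache line advice, assuming your buffer comes from kmalloc: give the DMA buffer an allocation of its own rather than embedding it in a structure the cpu writes to concurrently. A sketch:

    /* kmalloc gives DMA-safe alignment; a dedicated allocation keeps
     * the buffer from sharing cache lines with unrelated cpu data. */
    buf = kmalloc(len, GFP_KERNEL);
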
@@ -315,33 +316,27 @@ you should do:
315 | 316 | ||
316 | dma_addr_t dma_handle; | 317 | dma_addr_t dma_handle; |
317 | 318 | ||
318 | cpu_addr = pci_alloc_consistent(pdev, size, &dma_handle); | 319 | cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp); |
319 | |||
320 | where pdev is a struct pci_dev *. This may be called in interrupt context. | ||
321 | You should use dma_alloc_coherent (see DMA-API.txt) for buses | ||
322 | where devices don't have struct pci_dev (like ISA, EISA). | ||
323 | 320 | ||
324 | This argument is needed because the DMA translations may be bus | 321 | where dev is a struct device *. This may be called in interrupt |
325 | specific (and often is private to the bus which the device is attached | 322 | context with the GFP_ATOMIC flag. |
326 | to). | ||
327 | 323 | ||
328 | Size is the length of the region you want to allocate, in bytes. | 324 | Size is the length of the region you want to allocate, in bytes. |
329 | 325 | ||
330 | This routine will allocate RAM for that region, so it acts similarly to | 326 | This routine will allocate RAM for that region, so it acts similarly to |
331 | __get_free_pages (but takes size instead of a page order). If your | 327 | __get_free_pages (but takes size instead of a page order). If your |
332 | driver needs regions sized smaller than a page, you may prefer using | 328 | driver needs regions sized smaller than a page, you may prefer using |
333 | the pci_pool interface, described below. | 329 | the dma_pool interface, described below. |
334 | 330 | ||
335 | The consistent DMA mapping interfaces, for non-NULL pdev, will by | 331 | The consistent DMA mapping interfaces, for non-NULL dev, will by |
336 | default return a DMA address which is SAC (Single Address Cycle) | 332 | default return a DMA address which is 32-bit addressable. Even if the |
337 | addressable. Even if the device indicates (via PCI dma mask) that it | 333 | device indicates (via DMA mask) that it may address the upper 32-bits, |
338 | may address the upper 32-bits and thus perform DAC cycles, consistent | 334 | consistent allocation will only return > 32-bit addresses for DMA if |
339 | allocation will only return > 32-bit PCI addresses for DMA if the | 335 | the consistent DMA mask has been explicitly changed via |
340 | consistent dma mask has been explicitly changed via | 336 | dma_set_coherent_mask(). This is true of the dma_pool interface as |
341 | pci_set_consistent_dma_mask(). This is true of the pci_pool interface | 337 | well. |
342 | as well. | 338 | |
343 | 339 | dma_alloc_coherent returns two values: the virtual address which you | |
344 | pci_alloc_consistent returns two values: the virtual address which you | ||
345 | can use to access it from the CPU and dma_handle which you pass to the | 340 | can use to access it from the CPU and dma_handle which you pass to the |
346 | card. | 341 | card. |
347 | 342 | ||
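
Putting the pieces together, a hedged sketch of allocating a descriptor ring and handing it to the device (NUM_DESC, struct mydesc, ioaddr and MYDEV_RING_BASE are hypothetical):

    struct mydesc *ring;
    dma_addr_t ring_dma;

    ring = dma_alloc_coherent(dev, NUM_DESC * sizeof(*ring),
                              &ring_dma, GFP_KERNEL);
    if (!ring)
            goto no_memory;

    ring[0].status = 0;                         /* cpu uses the virtual address */
    writel(ring_dma, ioaddr + MYDEV_RING_BASE); /* device gets the DMA address */
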
@@ -354,54 +349,54 @@ buffer you receive will not cross a 64K boundary.
354 | 349 | ||
355 | To unmap and free such a DMA region, you call: | 350 | To unmap and free such a DMA region, you call: |
356 | 351 | ||
357 | pci_free_consistent(pdev, size, cpu_addr, dma_handle); | 352 | dma_free_coherent(dev, size, cpu_addr, dma_handle); |
358 | 353 | ||
359 | where pdev, size are the same as in the above call and cpu_addr and | 354 | where dev, size are the same as in the above call and cpu_addr and |
360 | dma_handle are the values pci_alloc_consistent returned to you. | 355 | dma_handle are the values dma_alloc_coherent returned to you. |
361 | This function may not be called in interrupt context. | 356 | This function may not be called in interrupt context. |
362 | 357 | ||
363 | If your driver needs lots of smaller memory regions, you can write | 358 | If your driver needs lots of smaller memory regions, you can write |
364 | custom code to subdivide pages returned by pci_alloc_consistent, | 359 | custom code to subdivide pages returned by dma_alloc_coherent, |
365 | or you can use the pci_pool API to do that. A pci_pool is like | 360 | or you can use the dma_pool API to do that. A dma_pool is like |
366 | a kmem_cache, but it uses pci_alloc_consistent not __get_free_pages. | 361 | a kmem_cache, but it uses dma_alloc_coherent not __get_free_pages. |
367 | Also, it understands common hardware constraints for alignment, | 362 | Also, it understands common hardware constraints for alignment, |
368 | like queue heads needing to be aligned on N byte boundaries. | 363 | like queue heads needing to be aligned on N byte boundaries. |
369 | 364 | ||
370 | Create a pci_pool like this: | 365 | Create a dma_pool like this: |
371 | 366 | ||
372 | struct pci_pool *pool; | 367 | struct dma_pool *pool; |
373 | 368 | ||
374 | pool = pci_pool_create(name, pdev, size, align, alloc); | 369 | pool = dma_pool_create(name, dev, size, align, alloc); |
375 | 370 | ||
376 | The "name" is for diagnostics (like a kmem_cache name); pdev and size | 371 | The "name" is for diagnostics (like a kmem_cache name); dev and size |
377 | are as above. The device's hardware alignment requirement for this | 372 | are as above. The device's hardware alignment requirement for this |
378 | type of data is "align" (which is expressed in bytes, and must be a | 373 | type of data is "align" (which is expressed in bytes, and must be a |
379 | power of two). If your device has no boundary crossing restrictions, | 374 | power of two). If your device has no boundary crossing restrictions, |
380 | pass 0 for alloc; passing 4096 says memory allocated from this pool | 375 | pass 0 for alloc; passing 4096 says memory allocated from this pool |
381 | must not cross 4KByte boundaries (but at that time it may be better to | 376 | must not cross 4KByte boundaries (but at that time it may be better to |
382 | go for pci_alloc_consistent directly instead). | 377 | go for dma_alloc_coherent directly instead). |
383 | 378 | ||
384 | Allocate memory from a pci pool like this: | 379 | Allocate memory from a dma pool like this: |
385 | 380 | ||
386 | cpu_addr = pci_pool_alloc(pool, flags, &dma_handle); | 381 | cpu_addr = dma_pool_alloc(pool, flags, &dma_handle); |
387 | 382 | ||
388 | flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor | 383 | flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor |
389 | holding SMP locks), GFP_ATOMIC otherwise. Like pci_alloc_consistent, | 384 | holding SMP locks), GFP_ATOMIC otherwise. Like dma_alloc_coherent, |
390 | this returns two values, cpu_addr and dma_handle. | 385 | this returns two values, cpu_addr and dma_handle. |
391 | 386 | ||
392 | Free memory that was allocated from a pci_pool like this: | 387 | Free memory that was allocated from a dma_pool like this: |
393 | 388 | ||
394 | pci_pool_free(pool, cpu_addr, dma_handle); | 389 | dma_pool_free(pool, cpu_addr, dma_handle); |
395 | 390 | ||
396 | where pool is what you passed to pci_pool_alloc, and cpu_addr and | 391 | where pool is what you passed to dma_pool_alloc, and cpu_addr and |
397 | dma_handle are the values pci_pool_alloc returned. This function | 392 | dma_handle are the values dma_pool_alloc returned. This function |
398 | may be called in interrupt context. | 393 | may be called in interrupt context. |
399 | 394 | ||
400 | Destroy a pci_pool by calling: | 395 | Destroy a dma_pool by calling: |
401 | 396 | ||
402 | pci_pool_destroy(pool); | 397 | dma_pool_destroy(pool); |
403 | 398 | ||
404 | Make sure you've called pci_pool_free for all memory allocated | 399 | Make sure you've called dma_pool_free for all memory allocated |
405 | from a pool before you destroy the pool. This function may not | 400 | from a pool before you destroy the pool. This function may not |
406 | be called in interrupt context. | 401 | be called in interrupt context. |
407 | 402 | ||
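
The whole dma_pool lifecycle, as a hedged sketch (the pool name and the 64-byte block size with 8-byte alignment are hypothetical):

    struct dma_pool *pool;
    void *cmd;
    dma_addr_t cmd_dma;

    pool = dma_pool_create("mydev_cmds", dev, 64, 8, 0);
    if (!pool)
            goto no_pool;

    cmd = dma_pool_alloc(pool, GFP_KERNEL, &cmd_dma);
    if (!cmd)
            goto no_cmd;

    /* ... hand cmd_dma to the device, use cmd from the cpu ... */

    dma_pool_free(pool, cmd, cmd_dma);
    dma_pool_destroy(pool);   /* only after every allocation is freed */
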
@@ -411,15 +406,15 @@ The interfaces described in subsequent portions of this document
411 | take a DMA direction argument, which is an integer and takes on | 406 | take a DMA direction argument, which is an integer and takes on |
412 | one of the following values: | 407 | one of the following values: |
413 | 408 | ||
414 | PCI_DMA_BIDIRECTIONAL | 409 | DMA_BIDIRECTIONAL |
415 | PCI_DMA_TODEVICE | 410 | DMA_TO_DEVICE |
416 | PCI_DMA_FROMDEVICE | 411 | DMA_FROM_DEVICE |
417 | PCI_DMA_NONE | 412 | DMA_NONE |
418 | 413 | ||
419 | You should provide the exact DMA direction if you know it. | 414 | You should provide the exact DMA direction if you know it. |
420 | 415 | ||
421 | PCI_DMA_TODEVICE means "from main memory to the PCI device" | 416 | DMA_TO_DEVICE means "from main memory to the device" |
422 | PCI_DMA_FROMDEVICE means "from the PCI device to main memory" | 417 | DMA_FROM_DEVICE means "from the device to main memory" |
423 | It is the direction in which the data moves during the DMA | 418 | It is the direction in which the data moves during the DMA |
424 | transfer. | 419 | transfer. |
425 | 420 | ||
@@ -427,12 +422,12 @@ You are _strongly_ encouraged to specify this as precisely
427 | as you possibly can. | 422 | as you possibly can. |
428 | 423 | ||
429 | If you absolutely cannot know the direction of the DMA transfer, | 424 | If you absolutely cannot know the direction of the DMA transfer, |
430 | specify PCI_DMA_BIDIRECTIONAL. It means that the DMA can go in | 425 | specify DMA_BIDIRECTIONAL. It means that the DMA can go in |
431 | either direction. The platform guarantees that you may legally | 426 | either direction. The platform guarantees that you may legally |
432 | specify this, and that it will work, but this may be at the | 427 | specify this, and that it will work, but this may be at the |
433 | cost of performance for example. | 428 | cost of performance for example. |
434 | 429 | ||
435 | The value PCI_DMA_NONE is to be used for debugging. One can | 430 | The value DMA_NONE is to be used for debugging. One can |
436 | hold this in a data structure before you come to know the | 431 | hold this in a data structure before you come to know the |
437 | precise direction, and this will help catch cases where your | 432 | precise direction, and this will help catch cases where your |
438 | direction tracking logic has failed to set things up properly. | 433 | direction tracking logic has failed to set things up properly. |
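
A sketch of that debugging idiom (struct my_buffer is hypothetical; dma_map_single is described below):

    struct my_buffer {
            void *ptr;
            size_t len;
            dma_addr_t dma;
            enum dma_data_direction dir;
    };

    buf->dir = DMA_NONE;            /* direction not known yet */
    /* ... later, once the direction has been decided ... */
    BUG_ON(buf->dir == DMA_NONE);   /* catches broken direction tracking */
    buf->dma = dma_map_single(dev, buf->ptr, buf->len, buf->dir);
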
@@ -442,21 +437,21 @@ potential platform-specific optimizations of such) is for debugging.
442 | Some platforms actually have a write permission boolean which DMA | 437 | Some platforms actually have a write permission boolean which DMA |
443 | mappings can be marked with, much like page protections in the user | 438 | mappings can be marked with, much like page protections in the user |
444 | program address space. Such platforms can and do report errors in the | 439 | program address space. Such platforms can and do report errors in the |
445 | kernel logs when the PCI controller hardware detects violation of the | 440 | kernel logs when the DMA controller hardware detects violation of the |
446 | permission setting. | 441 | permission setting. |
447 | 442 | ||
448 | Only streaming mappings specify a direction, consistent mappings | 443 | Only streaming mappings specify a direction, consistent mappings |
449 | implicitly have a direction attribute setting of | 444 | implicitly have a direction attribute setting of |
450 | PCI_DMA_BIDIRECTIONAL. | 445 | DMA_BIDIRECTIONAL. |
451 | 446 | ||
452 | The SCSI subsystem tells you the direction to use in the | 447 | The SCSI subsystem tells you the direction to use in the |
453 | 'sc_data_direction' member of the SCSI command your driver is | 448 | 'sc_data_direction' member of the SCSI command your driver is |
454 | working on. | 449 | working on. |
455 | 450 | ||
456 | For Networking drivers, it's a rather simple affair. For transmit | 451 | For Networking drivers, it's a rather simple affair. For transmit |
457 | packets, map/unmap them with the PCI_DMA_TODEVICE direction | 452 | packets, map/unmap them with the DMA_TO_DEVICE direction |
458 | specifier. For receive packets, just the opposite, map/unmap them | 453 | specifier. For receive packets, just the opposite, map/unmap them |
459 | with the PCI_DMA_FROMDEVICE direction specifier. | 454 | with the DMA_FROM_DEVICE direction specifier. |
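
In code, the networking case reduces to something like this hedged fragment (skb handling elided; dma_map_single is described in the next section):

    /* transmit: data moves from main memory to the device */
    tx_dma = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);

    /* receive: data moves from the device to main memory */
    rx_dma = dma_map_single(dev, rx_buf, rx_len, DMA_FROM_DEVICE);
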
460 | 455 | ||
461 | Using Streaming DMA mappings | 456 | Using Streaming DMA mappings |
462 | 457 | ||
@@ -467,43 +462,43 @@ scatterlist.
467 | 462 | ||
468 | To map a single region, you do: | 463 | To map a single region, you do: |
469 | 464 | ||
470 | struct pci_dev *pdev = mydev->pdev; | 465 | struct device *dev = &my_dev->dev; |
471 | dma_addr_t dma_handle; | 466 | dma_addr_t dma_handle; |
472 | void *addr = buffer->ptr; | 467 | void *addr = buffer->ptr; |
473 | size_t size = buffer->len; | 468 | size_t size = buffer->len; |
474 | 469 | ||
475 | dma_handle = pci_map_single(pdev, addr, size, direction); | 470 | dma_handle = dma_map_single(dev, addr, size, direction); |
476 | 471 | ||
477 | and to unmap it: | 472 | and to unmap it: |
478 | 473 | ||
479 | pci_unmap_single(pdev, dma_handle, size, direction); | 474 | dma_unmap_single(dev, dma_handle, size, direction); |
480 | 475 | ||
481 | You should call pci_unmap_single when the DMA activity is finished, e.g. | 476 | You should call dma_unmap_single when the DMA activity is finished, e.g. |
482 | from the interrupt which told you that the DMA transfer is done. | 477 | from the interrupt which told you that the DMA transfer is done. |
483 | 478 | ||
484 | Using cpu pointers like this for single mappings has a disadvantage: | 479 | Using cpu pointers like this for single mappings has a disadvantage: |
485 | you cannot reference HIGHMEM memory in this way. Thus, there is a | 480 | you cannot reference HIGHMEM memory in this way. Thus, there is a |
486 | map/unmap interface pair akin to pci_{map,unmap}_single. These | 481 | map/unmap interface pair akin to dma_{map,unmap}_single. These |
487 | interfaces deal with page/offset pairs instead of cpu pointers. | 482 | interfaces deal with page/offset pairs instead of cpu pointers. |
488 | Specifically: | 483 | Specifically: |
489 | 484 | ||
490 | struct pci_dev *pdev = mydev->pdev; | 485 | struct device *dev = &my_dev->dev; |
491 | dma_addr_t dma_handle; | 486 | dma_addr_t dma_handle; |
492 | struct page *page = buffer->page; | 487 | struct page *page = buffer->page; |
493 | unsigned long offset = buffer->offset; | 488 | unsigned long offset = buffer->offset; |
494 | size_t size = buffer->len; | 489 | size_t size = buffer->len; |
495 | 490 | ||
496 | dma_handle = pci_map_page(pdev, page, offset, size, direction); | 491 | dma_handle = dma_map_page(dev, page, offset, size, direction); |
497 | 492 | ||
498 | ... | 493 | ... |
499 | 494 | ||
500 | pci_unmap_page(pdev, dma_handle, size, direction); | 495 | dma_unmap_page(dev, dma_handle, size, direction); |
501 | 496 | ||
502 | Here, "offset" means byte offset within the given page. | 497 | Here, "offset" means byte offset within the given page. |
503 | 498 | ||
504 | With scatterlists, you map a region gathered from several regions by: | 499 | With scatterlists, you map a region gathered from several regions by: |
505 | 500 | ||
506 | int i, count = pci_map_sg(pdev, sglist, nents, direction); | 501 | int i, count = dma_map_sg(dev, sglist, nents, direction); |
507 | struct scatterlist *sg; | 502 | struct scatterlist *sg; |
508 | 503 | ||
509 | for_each_sg(sglist, sg, count, i) { | 504 | for_each_sg(sglist, sg, count, i) { |
@@ -527,16 +522,16 @@ accessed sg->address and sg->length as shown above.
527 | 522 | ||
528 | To unmap a scatterlist, just call: | 523 | To unmap a scatterlist, just call: |
529 | 524 | ||
530 | pci_unmap_sg(pdev, sglist, nents, direction); | 525 | dma_unmap_sg(dev, sglist, nents, direction); |
531 | 526 | ||
532 | Again, make sure DMA activity has already finished. | 527 | Again, make sure DMA activity has already finished. |
533 | 528 | ||
534 | PLEASE NOTE: The 'nents' argument to the pci_unmap_sg call must be | 529 | PLEASE NOTE: The 'nents' argument to the dma_unmap_sg call must be |
535 | the _same_ one you passed into the pci_map_sg call, | 530 | the _same_ one you passed into the dma_map_sg call, |
536 | it should _NOT_ be the 'count' value _returned_ from the | 531 | it should _NOT_ be the 'count' value _returned_ from the |
537 | pci_map_sg call. | 532 | dma_map_sg call. |
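
A hedged sketch that keeps the original 'nents' around for the unmap (my_build_sglist and my_fill_descriptor are hypothetical helpers):

    struct scatterlist *sg;
    int i, nents, mapped;

    nents = my_build_sglist(sglist);          /* number of entries you built */
    mapped = dma_map_sg(dev, sglist, nents, DMA_TO_DEVICE);
    if (mapped == 0)
            goto map_failed;

    for_each_sg(sglist, sg, mapped, i)        /* iterate the 'mapped' entries */
            my_fill_descriptor(sg_dma_address(sg), sg_dma_len(sg));

    /* when DMA is done: pass the original nents, not 'mapped' */
    dma_unmap_sg(dev, sglist, nents, DMA_TO_DEVICE);
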
538 | 533 | ||
539 | Every pci_map_{single,sg} call should have its pci_unmap_{single,sg} | 534 | Every dma_map_{single,sg} call should have its dma_unmap_{single,sg} |
540 | counterpart, because the bus address space is a shared resource (although | 535 | counterpart, because the bus address space is a shared resource (although |
541 | in some ports the mapping is per BUS, so fewer devices contend for the | 536 | in some ports the mapping is per BUS, so fewer devices contend for the |
542 | same bus address space) and you could render the machine unusable by eating | 537 | same bus address space) and you could render the machine unusable by eating |
@@ -547,14 +542,14 @@ the data in between the DMA transfers, the buffer needs to be synced
547 | properly in order for the cpu and device to see the most up-to-date and | 542 | properly in order for the cpu and device to see the most up-to-date and |
548 | correct copy of the DMA buffer. | 543 | correct copy of the DMA buffer. |
549 | 544 | ||
550 | So, firstly, just map it with pci_map_{single,sg}, and after each DMA | 545 | So, firstly, just map it with dma_map_{single,sg}, and after each DMA |
551 | transfer call either: | 546 | transfer call either: |
552 | 547 | ||
553 | pci_dma_sync_single_for_cpu(pdev, dma_handle, size, direction); | 548 | dma_sync_single_for_cpu(dev, dma_handle, size, direction); |
554 | 549 | ||
555 | or: | 550 | or: |
556 | 551 | ||
557 | pci_dma_sync_sg_for_cpu(pdev, sglist, nents, direction); | 552 | dma_sync_sg_for_cpu(dev, sglist, nents, direction); |
558 | 553 | ||
559 | as appropriate. | 554 | as appropriate. |
560 | 555 | ||
@@ -562,27 +557,27 @@ Then, if you wish to let the device get at the DMA area again,
562 | finish accessing the data with the cpu, and then before actually | 557 | finish accessing the data with the cpu, and then before actually |
563 | giving the buffer to the hardware call either: | 558 | giving the buffer to the hardware call either: |
564 | 559 | ||
565 | pci_dma_sync_single_for_device(pdev, dma_handle, size, direction); | 560 | dma_sync_single_for_device(dev, dma_handle, size, direction); |
566 | 561 | ||
567 | or: | 562 | or: |
568 | 563 | ||
569 | pci_dma_sync_sg_for_device(dev, sglist, nents, direction); | 564 | dma_sync_sg_for_device(dev, sglist, nents, direction); |
570 | 565 | ||
571 | as appropriate. | 566 | as appropriate. |
572 | 567 | ||
573 | After the last DMA transfer call one of the DMA unmap routines | 568 | After the last DMA transfer call one of the DMA unmap routines |
574 | pci_unmap_{single,sg}. If you don't touch the data from the first pci_map_* | 569 | dma_unmap_{single,sg}. If you don't touch the data from the first dma_map_* |
575 | call till pci_unmap_*, then you don't have to call the pci_dma_sync_* | 570 | call till dma_unmap_*, then you don't have to call the dma_sync_* |
576 | routines at all. | 571 | routines at all. |
577 | 572 | ||
578 | Here is pseudo code which shows a situation in which you would need | 573 | Here is pseudo code which shows a situation in which you would need |
579 | to use the pci_dma_sync_*() interfaces. | 574 | to use the dma_sync_*() interfaces. |
580 | 575 | ||
581 | my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len) | 576 | my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len) |
582 | { | 577 | { |
583 | dma_addr_t mapping; | 578 | dma_addr_t mapping; |
584 | 579 | ||
585 | mapping = pci_map_single(cp->pdev, buffer, len, PCI_DMA_FROMDEVICE); | 580 | mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE); |
586 | 581 | ||
587 | cp->rx_buf = buffer; | 582 | cp->rx_buf = buffer; |
588 | cp->rx_len = len; | 583 | cp->rx_len = len; |
@@ -606,25 +601,25 @@ to use the pci_dma_sync_*() interfaces.
606 | * the DMA transfer with the CPU first | 601 | * the DMA transfer with the CPU first |
607 | * so that we see updated contents. | 602 | * so that we see updated contents. |
608 | */ | 603 | */ |
609 | pci_dma_sync_single_for_cpu(cp->pdev, cp->rx_dma, | 604 | dma_sync_single_for_cpu(cp->dev, cp->rx_dma, |
610 | cp->rx_len, | 605 | cp->rx_len, |
611 | PCI_DMA_FROMDEVICE); | 606 | DMA_FROM_DEVICE); |
612 | 607 | ||
613 | /* Now it is safe to examine the buffer. */ | 608 | /* Now it is safe to examine the buffer. */ |
614 | hp = (struct my_card_header *) cp->rx_buf; | 609 | hp = (struct my_card_header *) cp->rx_buf; |
615 | if (header_is_ok(hp)) { | 610 | if (header_is_ok(hp)) { |
616 | pci_unmap_single(cp->pdev, cp->rx_dma, cp->rx_len, | 611 | dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len, |
617 | PCI_DMA_FROMDEVICE); | 612 | DMA_FROM_DEVICE); |
618 | pass_to_upper_layers(cp->rx_buf); | 613 | pass_to_upper_layers(cp->rx_buf); |
619 | make_and_setup_new_rx_buf(cp); | 614 | make_and_setup_new_rx_buf(cp); |
620 | } else { | 615 | } else { |
621 | /* Just sync the buffer and give it back | 616 | /* Just sync the buffer and give it back |
622 | * to the card. | 617 | * to the card. |
623 | */ | 618 | */ |
624 | pci_dma_sync_single_for_device(cp->pdev, | 619 | dma_sync_single_for_device(cp->dev, |
625 | cp->rx_dma, | 620 | cp->rx_dma, |
626 | cp->rx_len, | 621 | cp->rx_len, |
627 | PCI_DMA_FROMDEVICE); | 622 | DMA_FROM_DEVICE); |
628 | give_rx_buf_to_card(cp); | 623 | give_rx_buf_to_card(cp); |
629 | } | 624 | } |
630 | } | 625 | } |
@@ -634,19 +629,19 @@ Drivers converted fully to this interface should not use virt_to_bus any
634 | longer, nor should they use bus_to_virt. Some drivers have to be changed a | 629 | longer, nor should they use bus_to_virt. Some drivers have to be changed a |
635 | little bit, because there is no longer an equivalent to bus_to_virt in the | 630 | little bit, because there is no longer an equivalent to bus_to_virt in the |
636 | dynamic DMA mapping scheme - you have to always store the DMA addresses | 631 | dynamic DMA mapping scheme - you have to always store the DMA addresses |
637 | returned by the pci_alloc_consistent, pci_pool_alloc, and pci_map_single | 632 | returned by the dma_alloc_coherent, dma_pool_alloc, and dma_map_single |
638 | calls (pci_map_sg stores them in the scatterlist itself if the platform | 633 | calls (dma_map_sg stores them in the scatterlist itself if the platform |
639 | supports dynamic DMA mapping in hardware) in your driver structures and/or | 634 | supports dynamic DMA mapping in hardware) in your driver structures and/or |
640 | in the card registers. | 635 | in the card registers. |
641 | 636 | ||
642 | All PCI drivers should be using these interfaces with no exceptions. | 637 | All drivers should be using these interfaces with no exceptions. It |
643 | It is planned to completely remove virt_to_bus() and bus_to_virt() as | 638 | is planned to completely remove virt_to_bus() and bus_to_virt() as |
644 | they are entirely deprecated. Some ports already do not provide these | 639 | they are entirely deprecated. Some ports already do not provide these |
645 | as it is impossible to correctly support them. | 640 | as it is impossible to correctly support them. |
646 | 641 | ||
647 | Optimizing Unmap State Space Consumption | 642 | Optimizing Unmap State Space Consumption |
648 | 643 | ||
649 | On many platforms, pci_unmap_{single,page}() is simply a nop. | 644 | On many platforms, dma_unmap_{single,page}() is simply a nop. |
650 | Therefore, keeping track of the mapping address and length is a waste | 645 | Therefore, keeping track of the mapping address and length is a waste |
651 | of space. Instead of filling your drivers up with ifdefs and the like | 646 | of space. Instead of filling your drivers up with ifdefs and the like |
652 | to "work around" this (which would defeat the whole purpose of a | 647 | to "work around" this (which would defeat the whole purpose of a |
@@ -655,7 +650,7 @@ portable API) the following facilities are provided.
655 | Actually, instead of describing the macros one by one, we'll | 650 | Actually, instead of describing the macros one by one, we'll |
656 | transform some example code. | 651 | transform some example code. |
657 | 652 | ||
658 | 1) Use DECLARE_PCI_UNMAP_{ADDR,LEN} in state saving structures. | 653 | 1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures. |
659 | Example, before: | 654 | Example, before: |
660 | 655 | ||
661 | struct ring_state { | 656 | struct ring_state { |
@@ -668,14 +663,11 @@ transform some example code.
668 | 663 | ||
669 | struct ring_state { | 664 | struct ring_state { |
670 | struct sk_buff *skb; | 665 | struct sk_buff *skb; |
671 | DECLARE_PCI_UNMAP_ADDR(mapping) | 666 | DEFINE_DMA_UNMAP_ADDR(mapping); |
672 | DECLARE_PCI_UNMAP_LEN(len) | 667 | DEFINE_DMA_UNMAP_LEN(len); |
673 | }; | 668 | }; |
674 | 669 | ||
675 | NOTE: DO NOT put a semicolon at the end of the DECLARE_*() | 670 | 2) Use dma_unmap_{addr,len}_set to set these values. |
676 | macro. | ||
677 | |||
678 | 2) Use pci_unmap_{addr,len}_set to set these values. | ||
679 | Example, before: | 671 | Example, before: |
680 | 672 | ||
681 | ringp->mapping = FOO; | 673 | ringp->mapping = FOO; |
@@ -683,21 +675,21 @@ transform some example code.
683 | 675 | ||
684 | after: | 676 | after: |
685 | 677 | ||
686 | pci_unmap_addr_set(ringp, mapping, FOO); | 678 | dma_unmap_addr_set(ringp, mapping, FOO); |
687 | pci_unmap_len_set(ringp, len, BAR); | 679 | dma_unmap_len_set(ringp, len, BAR); |
688 | 680 | ||
689 | 3) Use pci_unmap_{addr,len} to access these values. | 681 | 3) Use dma_unmap_{addr,len} to access these values. |
690 | Example, before: | 682 | Example, before: |
691 | 683 | ||
692 | pci_unmap_single(pdev, ringp->mapping, ringp->len, | 684 | dma_unmap_single(dev, ringp->mapping, ringp->len, |
693 | PCI_DMA_FROMDEVICE); | 685 | DMA_FROM_DEVICE); |
694 | 686 | ||
695 | after: | 687 | after: |
696 | 688 | ||
697 | pci_unmap_single(pdev, | 689 | dma_unmap_single(dev, |
698 | pci_unmap_addr(ringp, mapping), | 690 | dma_unmap_addr(ringp, mapping), |
699 | pci_unmap_len(ringp, len), | 691 | dma_unmap_len(ringp, len), |
700 | PCI_DMA_FROMDEVICE); | 692 | DMA_FROM_DEVICE); |
701 | 693 | ||
702 | It really should be self-explanatory. We treat the ADDR and LEN | 694 | It really should be self-explanatory. We treat the ADDR and LEN |
703 | separately, because it is possible for an implementation to only | 695 | separately, because it is possible for an implementation to only |
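
Tying the three steps together, one hedged sketch of mapping and unmapping through such a state structure (buflen and the surrounding code are hypothetical; ringp is the struct ring_state shown above):

    dma_addr_t busaddr;

    busaddr = dma_map_single(dev, ringp->skb->data, buflen,
                             DMA_FROM_DEVICE);
    dma_unmap_addr_set(ringp, mapping, busaddr);
    dma_unmap_len_set(ringp, len, buflen);

    /* ... DMA completes ... */

    dma_unmap_single(dev, dma_unmap_addr(ringp, mapping),
                     dma_unmap_len(ringp, len), DMA_FROM_DEVICE);
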
@@ -732,15 +724,15 @@ to "Closing".
732 | DMA address space is limited on some architectures and an allocation | 724 | DMA address space is limited on some architectures and an allocation |
733 | failure can be determined by: | 725 | failure can be determined by: |
734 | 726 | ||
735 | - checking if pci_alloc_consistent returns NULL or pci_map_sg returns 0 | 727 | - checking if dma_alloc_coherent returns NULL or dma_map_sg returns 0 |
736 | 728 | ||
737 | - checking the returned dma_addr_t of pci_map_single and pci_map_page | 729 | - checking the returned dma_addr_t of dma_map_single and dma_map_page |
738 | by using pci_dma_mapping_error(): | 730 | by using dma_mapping_error(): |
739 | 731 | ||
740 | dma_addr_t dma_handle; | 732 | dma_addr_t dma_handle; |
741 | 733 | ||
742 | dma_handle = pci_map_single(pdev, addr, size, direction); | 734 | dma_handle = dma_map_single(dev, addr, size, direction); |
743 | if (pci_dma_mapping_error(pdev, dma_handle)) { | 735 | if (dma_mapping_error(dev, dma_handle)) { |
744 | /* | 736 | /* |
745 | * reduce current DMA mapping usage, | 737 | * reduce current DMA mapping usage, |
746 | * delay and try again later or | 738 | * delay and try again later or |
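
For completeness, both failure checks in one hedged sketch (the error labels are hypothetical):

    cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
    if (!cpu_addr)
            goto no_memory;                 /* allocation failed */

    dma_handle = dma_map_single(dev, addr, size, direction);
    if (dma_mapping_error(dev, dma_handle)) {
            /* reduce DMA mapping usage, retry later, or give up */
            goto map_error_handling;
    }
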