Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/DMA-mapping.txt | 103
1 file changed, 0 insertions, 103 deletions
diff --git a/Documentation/DMA-mapping.txt b/Documentation/DMA-mapping.txt
index 028614cdd062..e07f2530326b 100644
--- a/Documentation/DMA-mapping.txt
+++ b/Documentation/DMA-mapping.txt
@@ -664,109 +664,6 @@ It is that simple.
 Well, not for some odd devices. See the next section for information
 about that.

-DAC Addressing for Address Space Hungry Devices
-
-There exists a class of devices which do not mesh well with the PCI
-DMA mapping API. By definition these "mappings" are a finite
-resource. The number of total available mappings per bus is platform
-specific, but there will always be a reasonable amount.
-
-What is "reasonable"? Reasonable means that networking and block I/O
-devices need not worry about using too many mappings.
-
-As an example of a problematic device, consider compute cluster cards.
-They can potentially need to access gigabytes of memory at once via
-DMA. Dynamic mappings are unsuitable for this kind of access pattern.
-
-To this end we've provided a small API by which a device driver
-may use DAC cycles to directly address all of physical memory.
-Not all platforms support this, but most do. It is easy to determine
-whether the platform will work properly at probe time.
-
-First, understand that there may be a SEVERE performance penalty for
-using these interfaces on some platforms. Therefore, you MUST only
-use these interfaces if it is absolutely required. 99% of devices can
-use the normal APIs without any problems.
-
-Note that for streaming type mappings you must either use these
-interfaces, or the dynamic mapping interfaces above. You may not mix
-usage of both for the same device. Such an act is illegal and is
-guaranteed to put a banana in your tailpipe.
-
-However, consistent mappings may in fact be used in conjunction with
-these interfaces. Remember that, as defined, consistent mappings are
-always going to be SAC addressable.
-
-The first thing your driver needs to do is query the PCI platform
-layer to see whether it can handle your device's DAC addressing
-capabilities:
-
-	int pci_dac_dma_supported(struct pci_dev *hwdev, u64 mask);
-
-You may not use the following interfaces if this routine fails.
-
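A minimal sketch of the probe-time check described above, assuming a
driver-private using_dac flag and an all-ones 64-bit mask (both
illustrative, not taken from the original text):

	/* Illustrative probe-time check; using_dac is a hypothetical flag. */
	if (pci_dac_dma_supported(pdev, 0xffffffffffffffffULL))
		using_dac = 1;		/* DAC cycles may be used */
	else
		using_dac = 0;		/* stick to the regular mapping APIs */
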
-Next, DMA addresses using this API are kept track of using the
-dma64_addr_t type. It is guaranteed to be big enough to hold any
-DAC address the platform layer will give to you from the following
-routines. If you have consistent mappings as well, you still
-use plain dma_addr_t to keep track of those.
-
-All mappings obtained here will be direct. The mappings are not
-translated, and this is the purpose of this dialect of the DMA API.
-
-All routines work with page/offset pairs. This is the _ONLY_ way to
-portably refer to any piece of memory. If you have a cpu pointer
-(which may be validly DMA'd too) you may easily obtain the page
-and offset using something like this:
-
-	struct page *page = virt_to_page(ptr);
-	unsigned long offset = offset_in_page(ptr);
-
-Here are the interfaces:
-
-	dma64_addr_t pci_dac_page_to_dma(struct pci_dev *pdev,
-					 struct page *page,
-					 unsigned long offset,
-					 int direction);
-
-The DAC address for the PAGE/OFFSET tuple is returned. The direction
-argument is the same as for pci_{map,unmap}_single(). The same rules
-for cpu/device access apply here as for the streaming mapping
-interfaces. To reiterate:
-
-	The cpu may touch the buffer before pci_dac_page_to_dma.
-	The device may touch the buffer after pci_dac_page_to_dma
-	is made, but the cpu may NOT.
-
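As an illustrative sketch (assuming pdev and a lowmem buffer ptr
obtained elsewhere in the driver), mapping one page for a
device-to-memory transfer might look like:

	struct page *page = virt_to_page(ptr);
	unsigned long offset = offset_in_page(ptr);
	dma64_addr_t dma_addr;

	/* After this call the device may touch the page; the cpu may not. */
	dma_addr = pci_dac_page_to_dma(pdev, page, offset, PCI_DMA_FROMDEVICE);
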
-When the DMA transfer is complete, invoke:
-
-	void pci_dac_dma_sync_single_for_cpu(struct pci_dev *pdev,
-					     dma64_addr_t dma_addr,
-					     size_t len, int direction);
-
-This must be done before the CPU looks at the buffer again.
-This interface behaves identically to pci_dma_sync_{single,sg}_for_cpu().
-
-And likewise, if you wish to let the device get back at the buffer after
-the cpu has read/written it, invoke:
-
-	void pci_dac_dma_sync_single_for_device(struct pci_dev *pdev,
-						dma64_addr_t dma_addr,
-						size_t len, int direction);
-
-before letting the device access the DMA area again.
-
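An illustrative sketch of bracketing a CPU access with the two sync
calls, where process_buffer() is a hypothetical helper and
dma_addr/len come from the mapping step above:

	/* The device is done; give the buffer back to the cpu. */
	pci_dac_dma_sync_single_for_cpu(pdev, dma_addr, len, PCI_DMA_FROMDEVICE);

	process_buffer(ptr);	/* hypothetical cpu-side work on the data */

	/* Let the device access the buffer again. */
	pci_dac_dma_sync_single_for_device(pdev, dma_addr, len, PCI_DMA_FROMDEVICE);
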
-If you need to get back to the PAGE/OFFSET tuple from a dma64_addr_t
-the following interfaces are provided:
-
-	struct page *pci_dac_dma_to_page(struct pci_dev *pdev,
-					 dma64_addr_t dma_addr);
-	unsigned long pci_dac_dma_to_offset(struct pci_dev *pdev,
-					    dma64_addr_t dma_addr);
-
-This is possible with the DAC interfaces purely because they are
-not translated in any way.
-
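An illustrative sketch of recovering a cpu pointer from a DAC DMA
address; the page_address() step assumes a lowmem page:

	struct page *page = pci_dac_dma_to_page(pdev, dma_addr);
	unsigned long offset = pci_dac_dma_to_offset(pdev, dma_addr);
	void *ptr = page_address(page) + offset;	/* lowmem pages only */
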
 Optimizing Unmap State Space Consumption

 On many platforms, pci_unmap_{single,page}() is simply a nop.