path: root/Documentation
Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/DMA-mapping.txt  103
1 file changed, 0 insertions(+), 103 deletions(-)
diff --git a/Documentation/DMA-mapping.txt b/Documentation/DMA-mapping.txt
index 028614cdd062..e07f2530326b 100644
--- a/Documentation/DMA-mapping.txt
+++ b/Documentation/DMA-mapping.txt
@@ -664,109 +664,6 @@ It is that simple.
Well, not for some odd devices. See the next section for information
about that.

		DAC Addressing for Address Space Hungry Devices

There exists a class of devices which do not mesh well with the PCI
DMA mapping API. By definition these "mappings" are a finite
resource. The number of total available mappings per bus is platform
specific, but there will always be a reasonable amount.

What is "reasonable"? Reasonable means that networking and block I/O
devices need not worry about using too many mappings.

As an example of a problematic device, consider compute cluster cards.
They can potentially need to access gigabytes of memory at once via
DMA. Dynamic mappings are unsuitable for this kind of access pattern.

To this end we've provided a small API by which a device driver
may use DAC cycles to directly address all of physical memory.
Not all platforms support this, but most do. It is easy to determine
whether the platform will work properly at probe time.

First, understand that there may be a SEVERE performance penalty for
using these interfaces on some platforms. Therefore, you MUST only
use these interfaces if it is absolutely required. 99% of devices can
use the normal APIs without any problems.

Note that for streaming type mappings you must either use these
interfaces, or the dynamic mapping interfaces above. You may not mix
usage of both for the same device. Such an act is illegal and is
guaranteed to put a banana in your tailpipe.

However, consistent mappings may in fact be used in conjunction with
these interfaces. Remember that, as defined, consistent mappings are
always going to be SAC addressable.

The first thing your driver needs to do is ask the PCI platform
layer whether it can handle your device's DAC addressing
capabilities:

	int pci_dac_dma_supported(struct pci_dev *hwdev, u64 mask);

You may not use the following interfaces if this routine fails.

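For illustration only, a probe-time check might be sketched as below;
the mask value, the helper name, and the fallback policy are
assumptions made up for the example, not part of this API:

	#include <linux/pci.h>

	#define MYDEV_DAC_MASK	0xffffffffffffffffULL	/* example mask */

	static int mydev_wants_dac(struct pci_dev *pdev)
	{
		if (!pci_dac_dma_supported(pdev, MYDEV_DAC_MASK))
			return 0;	/* fall back to the normal API */
		return 1;		/* the pci_dac_*() calls may be used */
	}
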
Next, DMA addresses obtained from this API are tracked using the
dma64_addr_t type. It is guaranteed to be big enough to hold any
DAC address the platform layer will give to you from the following
routines. If you have consistent mappings as well, you still
use plain dma_addr_t to keep track of those.

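As a sketch only (the structure and field names are invented for the
example), a driver might keep both flavours of address around like so:

	struct mydev_state {
		dma64_addr_t	data_dac;	/* streaming DAC mapping */
		void		*desc_cpu;	/* from pci_alloc_consistent() */
		dma_addr_t	desc_dma;	/* consistent mapping, SAC */
	};
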
All mappings obtained here will be direct. The mappings are not
translated, and this is the purpose of this dialect of the DMA API.

All routines work with page/offset pairs. This is the _ONLY_ way to
portably refer to any piece of memory. If you have a cpu pointer
(which may be validly DMA'd too) you may easily obtain the page
and offset using something like this:

	struct page *page = virt_to_page(ptr);
	unsigned long offset = offset_in_page(ptr);

Here are the interfaces:

	dma64_addr_t pci_dac_page_to_dma(struct pci_dev *pdev,
					 struct page *page,
					 unsigned long offset,
					 int direction);

The DAC address for the tuple PAGE/OFFSET is returned. The direction
argument is the same as for pci_{map,unmap}_single(). The same rules
for cpu/device access apply here as for the streaming mapping
interfaces. To reiterate:

	The cpu may touch the buffer before pci_dac_page_to_dma is
	called. The device may touch the buffer after the call is
	made, but the cpu may NOT.

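Hypothetically, mapping a single receive buffer for a device write
could look like the following sketch ("buf" and the chosen direction
are assumptions for the example):

	struct page *page = virt_to_page(buf);
	unsigned long offset = offset_in_page(buf);
	dma64_addr_t dma_addr;

	dma_addr = pci_dac_page_to_dma(pdev, page, offset,
				       PCI_DMA_FROMDEVICE);
	/* Program dma_addr into the device; the cpu must not touch
	 * the buffer until the sync routine below has been called. */
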
When the DMA transfer is complete, invoke:

	void pci_dac_dma_sync_single_for_cpu(struct pci_dev *pdev,
					     dma64_addr_t dma_addr,
					     size_t len, int direction);

This must be done before the CPU looks at the buffer again.
This interface behaves identically to pci_dma_sync_{single,sg}_for_cpu().

And likewise, if you wish to let the device get back at the buffer after
the cpu has read/written it, invoke:

	void pci_dac_dma_sync_single_for_device(struct pci_dev *pdev,
						dma64_addr_t dma_addr,
						size_t len, int direction);

before letting the device access the DMA area again.

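Put together, and still only as a sketch (dma_addr, buf and len come
from the hypothetical mapping above, and process_buffer() is an invented
helper), one round trip of the buffer between device and cpu might look
like:

	/* The device has finished writing into the buffer. */
	pci_dac_dma_sync_single_for_cpu(pdev, dma_addr, len,
					PCI_DMA_FROMDEVICE);

	/* The cpu may now read and modify the buffer ... */
	process_buffer(buf, len);	/* hypothetical helper */

	/* ... and afterwards hand the buffer back to the device. */
	pci_dac_dma_sync_single_for_device(pdev, dma_addr, len,
					   PCI_DMA_FROMDEVICE);
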
If you need to get back to the PAGE/OFFSET tuple from a dma64_addr_t,
the following interfaces are provided:

	struct page *pci_dac_dma_to_page(struct pci_dev *pdev,
					 dma64_addr_t dma_addr);
	unsigned long pci_dac_dma_to_offset(struct pci_dev *pdev,
					    dma64_addr_t dma_addr);

This is possible with the DAC interfaces purely because they are
not translated in any way.
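
For example (again only a sketch, reusing the hypothetical dma_addr from
above), the page and offset can be recovered and, for a lowmem page,
turned back into a cpu pointer:

	struct page *page = pci_dac_dma_to_page(pdev, dma_addr);
	unsigned long offset = pci_dac_dma_to_offset(pdev, dma_addr);
	void *cpu_addr = page_address(page) + offset;	/* lowmem pages only */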

		Optimizing Unmap State Space Consumption

On many platforms, pci_unmap_{single,page}() is simply a nop.