author	Linus Torvalds <torvalds@linux-foundation.org>	2019-03-10 14:54:48 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2019-03-10 14:54:48 -0400
commit	b7a7d1c1ec688104fdc922568c26395a756f616d (patch)
tree	8059a401de91431615737b088f3da20d4567683e
parent	065b6c4c913d3986c2118acb7e7fb364ce4fb625 (diff)
parent	9eb9e96e97b3381e94cba81d93f4a390c26ca6cb (diff)
Merge tag 'dma-mapping-5.1' of git://git.infradead.org/users/hch/dma-mapping
Pull DMA mapping updates from Christoph Hellwig:

 - add debugfs support for dumping dma-debug information (Corentin
   Labbe)

 - Kconfig cleanups (Andy Shevchenko and me)

 - debugfs cleanups (Greg Kroah-Hartman)

 - improve dma_map_resource and use it in the media code

 - arch_setup_dma_ops / arch_teardown_dma_ops cleanups

 - various small cleanups and improvements for the per-device coherent
   allocator

 - make the DMA mask an upper bound and don't fail "too large" dma mask
   in the remaining two architectures - this will allow big driver
   cleanups in the following merge windows

* tag 'dma-mapping-5.1' of git://git.infradead.org/users/hch/dma-mapping: (21 commits)
  Documentation/DMA-API-HOWTO: update dma_mask sections
  sparc64/pci_sun4v: allow large DMA masks
  sparc64/iommu: allow large DMA masks
  sparc64: refactor the ali DMA quirk
  ccio: allow large DMA masks
  dma-mapping: remove the DMA_MEMORY_EXCLUSIVE flag
  dma-mapping: remove dma_mark_declared_memory_occupied
  dma-mapping: move CONFIG_DMA_CMA to kernel/dma/Kconfig
  dma-mapping: improve selection of dma_declare_coherent availability
  dma-mapping: remove an incorrect __iommem annotation
  of: select OF_RESERVED_MEM automatically
  device.h: dma_mem is only needed for HAVE_GENERIC_DMA_COHERENT
  mfd/sm501: depend on HAS_DMA
  dma-mapping: add a kconfig symbol for arch_teardown_dma_ops availability
  dma-mapping: add a kconfig symbol for arch_setup_dma_ops availability
  dma-mapping: move debug configuration options to kernel/dma
  dma-debug: add dumping facility via debugfs
  dma: debug: no need to check return value of debugfs_create functions
  videobuf2: replace a layering violation with dma_map_resource
  dma-mapping: don't BUG when calling dma_map_resource on RAM
  ...
-rw-r--r--	Documentation/DMA-API-HOWTO.txt	121
-rw-r--r--	Documentation/DMA-API.txt	29
-rw-r--r--	arch/arc/Kconfig	3
-rw-r--r--	arch/arc/include/asm/Kbuild	1
-rw-r--r--	arch/arc/include/asm/dma-mapping.h	13
-rw-r--r--	arch/arm/Kconfig	5
-rw-r--r--	arch/arm/include/asm/dma-mapping.h	9
-rw-r--r--	arch/arm/mach-imx/mach-imx27_visstrim_m10.c	12
-rw-r--r--	arch/arm/mach-imx/mach-mx31moboard.c	3
-rw-r--r--	arch/arm/mm/dma-mapping.c	2
-rw-r--r--	arch/arm64/Kconfig	4
-rw-r--r--	arch/arm64/include/asm/dma-mapping.h	9
-rw-r--r--	arch/csky/Kconfig	2
-rw-r--r--	arch/mips/Kconfig	2
-rw-r--r--	arch/mips/include/asm/dma-mapping.h	10
-rw-r--r--	arch/mips/mm/dma-noncoherent.c	8
-rw-r--r--	arch/powerpc/Kconfig	1
-rw-r--r--	arch/riscv/Kconfig	1
-rw-r--r--	arch/sh/Kconfig	2
-rw-r--r--	arch/sh/boards/mach-ap325rxa/setup.c	5
-rw-r--r--	arch/sh/boards/mach-ecovec24/setup.c	6
-rw-r--r--	arch/sh/boards/mach-kfr2r09/setup.c	5
-rw-r--r--	arch/sh/boards/mach-migor/setup.c	5
-rw-r--r--	arch/sh/boards/mach-se/7724/setup.c	6
-rw-r--r--	arch/sh/drivers/pci/fixups-dreamcast.c	3
-rw-r--r--	arch/sparc/kernel/iommu.c	13
-rw-r--r--	arch/sparc/kernel/kernel.h	6
-rw-r--r--	arch/sparc/kernel/pci.c	46
-rw-r--r--	arch/sparc/kernel/pci_sun4v.c	18
-rw-r--r--	arch/unicore32/Kconfig	1
-rw-r--r--	arch/x86/Kconfig	1
-rw-r--r--	arch/xtensa/Kconfig	1
-rw-r--r--	drivers/base/Kconfig	77
-rw-r--r--	drivers/media/common/videobuf2/videobuf2-dma-contig.c	41
-rw-r--r--	drivers/mfd/Kconfig	3
-rw-r--r--	drivers/of/Kconfig	6
-rw-r--r--	drivers/parisc/ccio-dma.c	4
-rw-r--r--	drivers/usb/host/ohci-sm501.c	3
-rw-r--r--	drivers/usb/host/ohci-tmio.c	2
-rw-r--r--	include/linux/device.h	2
-rw-r--r--	include/linux/dma-mapping.h	61
-rw-r--r--	kernel/dma/Kconfig	121
-rw-r--r--	kernel/dma/Makefile	2
-rw-r--r--	kernel/dma/coherent.c	50
-rw-r--r--	kernel/dma/debug.c	108
-rw-r--r--	kernel/dma/direct.c	14
-rw-r--r--	lib/Kconfig.debug	36
47 files changed, 341 insertions, 542 deletions
diff --git a/Documentation/DMA-API-HOWTO.txt b/Documentation/DMA-API-HOWTO.txt
index f0cc3f772265..1a721d0f35c8 100644
--- a/Documentation/DMA-API-HOWTO.txt
+++ b/Documentation/DMA-API-HOWTO.txt
@@ -146,114 +146,75 @@ What about block I/O and networking buffers?  The block I/O and
 networking subsystems make sure that the buffers they use are valid
 for you to DMA from/to.
 
-DMA addressing limitations
+DMA addressing capabilities
 ==========================
 
-Does your device have any DMA addressing limitations?  For example, is
-your device only capable of driving the low order 24-bits of address?
-If so, you need to inform the kernel of this fact.
+By default, the kernel assumes that your device can address 32-bits of DMA
+addressing.  For a 64-bit capable device, this needs to be increased, and for
+a device with limitations, it needs to be decreased.
 
-By default, the kernel assumes that your device can address the full
-32-bits.  For a 64-bit capable device, this needs to be increased.
-And for a device with limitations, as discussed in the previous
-paragraph, it needs to be decreased.
+Special note about PCI: PCI-X specification requires PCI-X devices to support
+64-bit addressing (DAC) for all transactions.  And at least one platform (SGI
+SN2) requires 64-bit consistent allocations to operate correctly when the IO
+bus is in PCI-X mode.
 
-Special note about PCI: PCI-X specification requires PCI-X devices to
-support 64-bit addressing (DAC) for all transactions.  And at least
-one platform (SGI SN2) requires 64-bit consistent allocations to
-operate correctly when the IO bus is in PCI-X mode.
+For correct operation, you must set the DMA mask to inform the kernel about
+your devices DMA addressing capabilities.
 
-For correct operation, you must interrogate the kernel in your device
-probe routine to see if the DMA controller on the machine can properly
-support the DMA addressing limitation your device has.  It is good
-style to do this even if your device holds the default setting,
-because this shows that you did think about these issues wrt. your
-device.
-
-The query is performed via a call to dma_set_mask_and_coherent()::
+This is performed via a call to dma_set_mask_and_coherent()::
 
 	int dma_set_mask_and_coherent(struct device *dev, u64 mask);
 
-which will query the mask for both streaming and coherent APIs together.
-If you have some special requirements, then the following two separate
-queries can be used instead:
+which will set the mask for both streaming and coherent APIs together.  If you
+have some special requirements, then the following two separate calls can be
+used instead:
 
-	The query for streaming mappings is performed via a call to
+	The setup for streaming mappings is performed via a call to
 	dma_set_mask()::
 
 		int dma_set_mask(struct device *dev, u64 mask);
 
-	The query for consistent allocations is performed via a call
+	The setup for consistent allocations is performed via a call
 	to dma_set_coherent_mask()::
 
 		int dma_set_coherent_mask(struct device *dev, u64 mask);
 
-Here, dev is a pointer to the device struct of your device, and mask
-is a bit mask describing which bits of an address your device
-supports.  It returns zero if your card can perform DMA properly on
-the machine given the address mask you provided.  In general, the
-device struct of your device is embedded in the bus-specific device
-struct of your device.  For example, &pdev->dev is a pointer to the
-device struct of a PCI device (pdev is a pointer to the PCI device
-struct of your device).
+Here, dev is a pointer to the device struct of your device, and mask is a bit
+mask describing which bits of an address your device supports.  Often the
+device struct of your device is embedded in the bus-specific device struct of
+your device.  For example, &pdev->dev is a pointer to the device struct of a
+PCI device (pdev is a pointer to the PCI device struct of your device).
 
-If it returns non-zero, your device cannot perform DMA properly on
-this platform, and attempting to do so will result in undefined
-behavior.  You must either use a different mask, or not use DMA.
+These calls usually return zero to indicated your device can perform DMA
+properly on the machine given the address mask you provided, but they might
+return an error if the mask is too small to be supportable on the given
+system.  If it returns non-zero, your device cannot perform DMA properly on
+this platform, and attempting to do so will result in undefined behavior.
+You must not use DMA on this device unless the dma_set_mask family of
+functions has returned success.
 
-This means that in the failure case, you have three options:
+This means that in the failure case, you have two options:
 
-1) Use another DMA mask, if possible (see below).
-2) Use some non-DMA mode for data transfer, if possible.
-3) Ignore this device and do not initialize it.
+1) Use some non-DMA mode for data transfer, if possible.
+2) Ignore this device and do not initialize it.
 
-It is recommended that your driver print a kernel KERN_WARNING message
-when you end up performing either #2 or #3.  In this manner, if a user
-of your driver reports that performance is bad or that the device is not
-even detected, you can ask them for the kernel messages to find out
-exactly why.
+It is recommended that your driver print a kernel KERN_WARNING message when
+setting the DMA mask fails.  In this manner, if a user of your driver reports
+that performance is bad or that the device is not even detected, you can ask
+them for the kernel messages to find out exactly why.
 
-The standard 32-bit addressing device would do something like this::
+The standard 64-bit addressing device would do something like this::
 
-	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
+	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
 		dev_warn(dev, "mydev: No suitable DMA available\n");
 		goto ignore_this_device;
 	}
 
-Another common scenario is a 64-bit capable device.  The approach here
-is to try for 64-bit addressing, but back down to a 32-bit mask that
-should not fail.  The kernel may fail the 64-bit mask not because the
-platform is not capable of 64-bit addressing.  Rather, it may fail in
-this case simply because 32-bit addressing is done more efficiently
-than 64-bit addressing.  For example, Sparc64 PCI SAC addressing is
-more efficient than DAC addressing.
-
-Here is how you would handle a 64-bit capable device which can drive
-all 64-bits when accessing streaming DMA::
-
-	int using_dac;
+If the device only supports 32-bit addressing for descriptors in the
+coherent allocations, but supports full 64-bits for streaming mappings
+it would look like this:
 
-	if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
-		using_dac = 1;
-	} else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
-		using_dac = 0;
-	} else {
-		dev_warn(dev, "mydev: No suitable DMA available\n");
-		goto ignore_this_device;
-	}
-
-If a card is capable of using 64-bit consistent allocations as well,
-the case would look like this::
-
-	int using_dac, consistent_using_dac;
-
-	if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
-		using_dac = 1;
-		consistent_using_dac = 1;
-	} else if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
-		using_dac = 0;
-		consistent_using_dac = 0;
-	} else {
+	if (dma_set_mask(dev, DMA_BIT_MASK(64))) {
 		dev_warn(dev, "mydev: No suitable DMA available\n");
 		goto ignore_this_device;
 	}
diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
index 607c1e75e5aa..586643642910 100644
--- a/Documentation/DMA-API.txt
+++ b/Documentation/DMA-API.txt
@@ -566,8 +566,7 @@ boundaries when doing this.
 
 	int
 	dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
-				    dma_addr_t device_addr, size_t size, int
-				    flags)
+				    dma_addr_t device_addr, size_t size);
 
 Declare region of memory to be handed out by dma_alloc_coherent() when
 it's asked for coherent memory for this device.
@@ -581,12 +580,6 @@ dma_addr_t in dma_alloc_coherent()).
 
 size is the size of the area (must be multiples of PAGE_SIZE).
 
-flags can be ORed together and are:
-
-- DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
-  Do not allow dma_alloc_coherent() to fall back to system memory when
-  it's out of memory in the declared region.
-
 As a simplification for the platforms, only *one* such region of
 memory may be declared per device.
 
@@ -605,23 +598,6 @@ unconditionally having removed all the required structures.  It is the
 driver's job to ensure that no parts of this memory region are
 currently in use.
 
-::
-
-	void *
-	dma_mark_declared_memory_occupied(struct device *dev,
-					  dma_addr_t device_addr, size_t size)
-
-This is used to occupy specific regions of the declared space
-(dma_alloc_coherent() will hand out the first free region it finds).
-
-device_addr is the *device* address of the region requested.
-
-size is the size (and should be a page-sized multiple).
-
-The return value will be either a pointer to the processor virtual
-address of the memory, or an error (via PTR_ERR()) if any part of the
-region is occupied.
-
 Part III - Debug drivers use of the DMA-API
 -------------------------------------------
 
@@ -696,6 +672,9 @@ dma-api/disabled	This read-only file contains the character 'Y'
 			happen when it runs out of memory or if it was
 			disabled at boot time
 
+dma-api/dump		This read-only file contains current DMA
+			mappings.
+
 dma-api/error_count	This file is read-only and shows the total
 			numbers of errors found.
 
diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 2061b652d9c3..df55672c59e6 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -11,6 +11,7 @@ config ARC
 	select ARC_TIMERS
 	select ARCH_HAS_DMA_COHERENT_TO_PFN
 	select ARCH_HAS_PTE_SPECIAL
+	select ARCH_HAS_SETUP_DMA_OPS
 	select ARCH_HAS_SYNC_DMA_FOR_CPU
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
 	select ARCH_SUPPORTS_ATOMIC_RMW if ARC_HAS_LLSC
@@ -31,7 +32,6 @@ config ARC
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_DEBUG_STACKOVERFLOW
 	select HAVE_FUTEX_CMPXCHG if FUTEX
-	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_IOREMAP_PROT
 	select HAVE_KERNEL_GZIP
 	select HAVE_KERNEL_LZMA
@@ -45,7 +45,6 @@ config ARC
 	select MODULES_USE_ELF_RELA
 	select OF
 	select OF_EARLY_FLATTREE
-	select OF_RESERVED_MEM
 	select PCI_SYSCALL if PCI
 	select PERF_USE_VMALLOC if ARC_CACHE_VIPT_ALIASING
 
diff --git a/arch/arc/include/asm/Kbuild b/arch/arc/include/asm/Kbuild
index caa270261521..b41f8881ecc8 100644
--- a/arch/arc/include/asm/Kbuild
+++ b/arch/arc/include/asm/Kbuild
@@ -3,6 +3,7 @@ generic-y += bugs.h
 generic-y += compat.h
 generic-y += device.h
 generic-y += div64.h
+generic-y += dma-mapping.h
 generic-y += emergency-restart.h
 generic-y += extable.h
 generic-y += ftrace.h
diff --git a/arch/arc/include/asm/dma-mapping.h b/arch/arc/include/asm/dma-mapping.h
deleted file mode 100644
index c946c0a83e76..000000000000
--- a/arch/arc/include/asm/dma-mapping.h
+++ /dev/null
@@ -1,13 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-// (C) 2018 Synopsys, Inc. (www.synopsys.com)
-
-#ifndef ASM_ARC_DMA_MAPPING_H
-#define ASM_ARC_DMA_MAPPING_H
-
-#include <asm-generic/dma-mapping.h>
-
-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
-			const struct iommu_ops *iommu, bool coherent);
-#define arch_setup_dma_ops arch_setup_dma_ops
-
-#endif
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index e8a4d3ffb6eb..5085a1eab9fc 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -13,9 +13,11 @@ config ARM
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
 	select ARCH_HAS_PTE_SPECIAL if ARM_LPAE
 	select ARCH_HAS_PHYS_TO_DMA
+	select ARCH_HAS_SETUP_DMA_OPS
 	select ARCH_HAS_SET_MEMORY
 	select ARCH_HAS_STRICT_KERNEL_RWX if MMU && !XIP_KERNEL
 	select ARCH_HAS_STRICT_MODULE_RWX if MMU
+	select ARCH_HAS_TEARDOWN_DMA_OPS if MMU
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAVE_CUSTOM_GPIO_H
 	select ARCH_HAS_GCOV_PROFILE_ALL
@@ -31,6 +33,7 @@ config ARM
 	select CLONE_BACKWARDS
 	select CPU_PM if SUSPEND || CPU_IDLE
 	select DCACHE_WORD_ACCESS if HAVE_EFFICIENT_UNALIGNED_ACCESS
+	select DMA_DECLARE_COHERENT
 	select DMA_REMAP if MMU
 	select EDAC_SUPPORT
 	select EDAC_ATOMIC_SCRUB
@@ -73,7 +76,6 @@ config ARM
 	select HAVE_FUNCTION_GRAPH_TRACER if !THUMB2_KERNEL
 	select HAVE_FUNCTION_TRACER if !XIP_KERNEL
 	select HAVE_GCC_PLUGINS
-	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS && (CPU_V6 || CPU_V6K || CPU_V7)
 	select HAVE_IDE if PCI || ISA || PCMCIA
 	select HAVE_IRQ_TIME_ACCOUNTING
@@ -102,7 +104,6 @@ config ARM
 	select MODULES_USE_ELF_REL
 	select NEED_DMA_MAP_STATE
 	select OF_EARLY_FLATTREE if OF
-	select OF_RESERVED_MEM if OF
 	select OLD_SIGACTION
 	select OLD_SIGSUSPEND3
 	select PCI_SYSCALL if PCI
diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index 31d3b96f0f4b..03ba90ffc0f8 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -96,15 +96,6 @@ static inline unsigned long dma_max_pfn(struct device *dev)
 }
 #define dma_max_pfn(dev) dma_max_pfn(dev)
 
-#define arch_setup_dma_ops arch_setup_dma_ops
-extern void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
-			       const struct iommu_ops *iommu, bool coherent);
-
-#ifdef CONFIG_MMU
-#define arch_teardown_dma_ops arch_teardown_dma_ops
-extern void arch_teardown_dma_ops(struct device *dev);
-#endif
-
 /* do not use this function in a driver */
 static inline bool is_device_dma_coherent(struct device *dev)
 {
diff --git a/arch/arm/mach-imx/mach-imx27_visstrim_m10.c b/arch/arm/mach-imx/mach-imx27_visstrim_m10.c
index 5169dfba9718..07d4fcfe5c2e 100644
--- a/arch/arm/mach-imx/mach-imx27_visstrim_m10.c
+++ b/arch/arm/mach-imx/mach-imx27_visstrim_m10.c
@@ -258,8 +258,7 @@ static void __init visstrim_analog_camera_init(void)
 		return;
 
 	dma_declare_coherent_memory(&pdev->dev, mx2_camera_base,
-				    mx2_camera_base, MX2_CAMERA_BUF_SIZE,
-				    DMA_MEMORY_EXCLUSIVE);
+				    mx2_camera_base, MX2_CAMERA_BUF_SIZE);
 }
 
 static void __init visstrim_reserve(void)
@@ -445,8 +444,7 @@ static void __init visstrim_coda_init(void)
 	dma_declare_coherent_memory(&pdev->dev,
 				    mx2_camera_base + MX2_CAMERA_BUF_SIZE,
 				    mx2_camera_base + MX2_CAMERA_BUF_SIZE,
-				    MX2_CAMERA_BUF_SIZE,
-				    DMA_MEMORY_EXCLUSIVE);
+				    MX2_CAMERA_BUF_SIZE);
 }
 
 /* DMA deinterlace */
@@ -465,8 +463,7 @@ static void __init visstrim_deinterlace_init(void)
 	dma_declare_coherent_memory(&pdev->dev,
 				    mx2_camera_base + 2 * MX2_CAMERA_BUF_SIZE,
 				    mx2_camera_base + 2 * MX2_CAMERA_BUF_SIZE,
-				    MX2_CAMERA_BUF_SIZE,
-				    DMA_MEMORY_EXCLUSIVE);
+				    MX2_CAMERA_BUF_SIZE);
 }
 
 /* Emma-PrP for format conversion */
@@ -485,8 +482,7 @@ static void __init visstrim_emmaprp_init(void)
 	 */
 	ret = dma_declare_coherent_memory(&pdev->dev,
 			mx2_camera_base, mx2_camera_base,
-			MX2_CAMERA_BUF_SIZE,
-			DMA_MEMORY_EXCLUSIVE);
+			MX2_CAMERA_BUF_SIZE);
 	if (ret)
 		pr_err("Failed to declare memory for emmaprp\n");
 }
diff --git a/arch/arm/mach-imx/mach-mx31moboard.c b/arch/arm/mach-imx/mach-mx31moboard.c
index 643a3d749703..fe50f4cf00a7 100644
--- a/arch/arm/mach-imx/mach-mx31moboard.c
+++ b/arch/arm/mach-imx/mach-mx31moboard.c
@@ -475,8 +475,7 @@ static int __init mx31moboard_init_cam(void)
 
 	ret = dma_declare_coherent_memory(&pdev->dev,
 			mx3_camera_base, mx3_camera_base,
-			MX3_CAMERA_BUF_SIZE,
-			DMA_MEMORY_EXCLUSIVE);
+			MX3_CAMERA_BUF_SIZE);
 	if (ret)
 		goto err;
 
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 1e3e08a1c456..c6aab9c36ff1 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -188,6 +188,7 @@ const struct dma_map_ops arm_dma_ops = {
 	.unmap_page		= arm_dma_unmap_page,
 	.map_sg			= arm_dma_map_sg,
 	.unmap_sg		= arm_dma_unmap_sg,
+	.map_resource		= dma_direct_map_resource,
 	.sync_single_for_cpu	= arm_dma_sync_single_for_cpu,
 	.sync_single_for_device	= arm_dma_sync_single_for_device,
 	.sync_sg_for_cpu	= arm_dma_sync_sg_for_cpu,
@@ -211,6 +212,7 @@ const struct dma_map_ops arm_coherent_dma_ops = {
 	.get_sgtable		= arm_dma_get_sgtable,
 	.map_page		= arm_coherent_dma_map_page,
 	.map_sg			= arm_dma_map_sg,
+	.map_resource		= dma_direct_map_resource,
 	.dma_supported		= arm_dma_supported,
 };
 EXPORT_SYMBOL(arm_coherent_dma_ops);
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 4fab69620719..117b2541ef3d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -22,12 +22,14 @@ config ARM64
 	select ARCH_HAS_KCOV
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
 	select ARCH_HAS_PTE_SPECIAL
+	select ARCH_HAS_SETUP_DMA_OPS
 	select ARCH_HAS_SET_MEMORY
 	select ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_HAS_STRICT_MODULE_RWX
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
 	select ARCH_HAS_SYNC_DMA_FOR_CPU
 	select ARCH_HAS_SYSCALL_WRAPPER
+	select ARCH_HAS_TEARDOWN_DMA_OPS if IOMMU_SUPPORT
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select ARCH_INLINE_READ_LOCK if !PREEMPT
@@ -137,7 +139,6 @@ config ARM64
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_GCC_PLUGINS
-	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS
 	select HAVE_IRQ_TIME_ACCOUNTING
 	select HAVE_MEMBLOCK_NODE_MAP if NUMA
@@ -163,7 +164,6 @@ config ARM64
 	select NEED_SG_DMA_LENGTH
 	select OF
 	select OF_EARLY_FLATTREE
-	select OF_RESERVED_MEM
 	select PCI_DOMAINS_GENERIC if PCI
 	select PCI_ECAM if (ACPI && PCI)
 	select PCI_SYSCALL if PCI
diff --git a/arch/arm64/include/asm/dma-mapping.h b/arch/arm64/include/asm/dma-mapping.h
index 95dbf3ef735a..de98191e4c7d 100644
--- a/arch/arm64/include/asm/dma-mapping.h
+++ b/arch/arm64/include/asm/dma-mapping.h
@@ -29,15 +29,6 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 	return NULL;
 }
 
-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
-			const struct iommu_ops *iommu, bool coherent);
-#define arch_setup_dma_ops	arch_setup_dma_ops
-
-#ifdef CONFIG_IOMMU_DMA
-void arch_teardown_dma_ops(struct device *dev);
-#define arch_teardown_dma_ops	arch_teardown_dma_ops
-#endif
-
 /*
  * Do not use this function in a driver, it is only provided for
  * arch/arm/mm/xen.c, which is used by arm64 as well.
diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
index 6959e0b1e956..725a115759c9 100644
--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -31,7 +31,6 @@ config CSKY
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUNCTION_GRAPH_TRACER
-	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_KERNEL_GZIP
 	select HAVE_KERNEL_LZO
 	select HAVE_KERNEL_LZMA
@@ -43,7 +42,6 @@ config CSKY
 	select MODULES_USE_ELF_RELA if MODULES
 	select OF
 	select OF_EARLY_FLATTREE
-	select OF_RESERVED_MEM
 	select PERF_USE_VMALLOC if CPU_CK610
 	select RTC_LIB
 	select TIMER_OF
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 3d7f1153155f..4a5f5b0ee9a9 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -57,7 +57,6 @@ config MIPS
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_FUNCTION_TRACER
-	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_IDE
 	select HAVE_IOREMAP_PROT
 	select HAVE_IRQ_EXIT_ON_IRQ_STACK
@@ -1119,6 +1118,7 @@ config DMA_MAYBE_COHERENT
 
 config DMA_PERDEV_COHERENT
 	bool
+	select ARCH_HAS_SETUP_DMA_OPS
 	select DMA_NONCOHERENT
 
 config DMA_NONCOHERENT
diff --git a/arch/mips/include/asm/dma-mapping.h b/arch/mips/include/asm/dma-mapping.h
index 20dfaad3a55d..34de7b17b41b 100644
--- a/arch/mips/include/asm/dma-mapping.h
+++ b/arch/mips/include/asm/dma-mapping.h
@@ -15,14 +15,4 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 #endif
 }
 
-#define arch_setup_dma_ops arch_setup_dma_ops
-static inline void arch_setup_dma_ops(struct device *dev, u64 dma_base,
-				      u64 size, const struct iommu_ops *iommu,
-				      bool coherent)
-{
-#ifdef CONFIG_DMA_PERDEV_COHERENT
-	dev->dma_coherent = coherent;
-#endif
-}
-
 #endif /* _ASM_DMA_MAPPING_H */
diff --git a/arch/mips/mm/dma-noncoherent.c b/arch/mips/mm/dma-noncoherent.c
index b57465733e87..f9549d2fbea3 100644
--- a/arch/mips/mm/dma-noncoherent.c
+++ b/arch/mips/mm/dma-noncoherent.c
@@ -156,3 +156,11 @@ void arch_dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 
 	dma_sync_virt(vaddr, size, direction);
 }
+
+#ifdef CONFIG_DMA_PERDEV_COHERENT
+void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
+			const struct iommu_ops *iommu, bool coherent)
+{
+	dev->dma_coherent = coherent;
+}
+#endif
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index b5dce13a6132..2d0be82c3061 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -232,7 +232,6 @@ config PPC
 	select NEED_SG_DMA_LENGTH
 	select OF
 	select OF_EARLY_FLATTREE
-	select OF_RESERVED_MEM
 	select OLD_SIGACTION if PPC32
 	select OLD_SIGSUSPEND
 	select PCI_DOMAINS if PCI
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index b41311f6a94f..eb56c82d8aa1 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -32,7 +32,6 @@ config RISCV
 	select HAVE_MEMBLOCK_NODE_MAP
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_FUTEX_CMPXCHG if FUTEX
-	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_PERF_EVENTS
 	select HAVE_SYSCALL_TRACEPOINTS
 	select IRQ_DOMAIN
diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index d9a9144dec35..b1c91ea9a958 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -7,11 +7,11 @@ config SUPERH
 	select ARCH_NO_COHERENT_DMA_MMAP if !MMU
 	select HAVE_PATA_PLATFORM
 	select CLKDEV_LOOKUP
+	select DMA_DECLARE_COHERENT
 	select HAVE_IDE if HAS_IOPORT_MAP
 	select HAVE_MEMBLOCK_NODE_MAP
 	select ARCH_DISCARD_MEMBLOCK
 	select HAVE_OPROFILE
-	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_PERF_EVENTS
 	select HAVE_DEBUG_BUGVERBOSE
diff --git a/arch/sh/boards/mach-ap325rxa/setup.c b/arch/sh/boards/mach-ap325rxa/setup.c
index 8f234d0435aa..7899b4f51fdd 100644
--- a/arch/sh/boards/mach-ap325rxa/setup.c
+++ b/arch/sh/boards/mach-ap325rxa/setup.c
@@ -529,9 +529,8 @@ static int __init ap325rxa_devices_setup(void)
 	device_initialize(&ap325rxa_ceu_device.dev);
 	arch_setup_pdev_archdata(&ap325rxa_ceu_device);
 	dma_declare_coherent_memory(&ap325rxa_ceu_device.dev,
 				    ceu_dma_membase, ceu_dma_membase,
-				    ceu_dma_membase + CEU_BUFFER_MEMORY_SIZE - 1,
-				    DMA_MEMORY_EXCLUSIVE);
+				    ceu_dma_membase + CEU_BUFFER_MEMORY_SIZE - 1);
 
 	platform_device_add(&ap325rxa_ceu_device);
 
diff --git a/arch/sh/boards/mach-ecovec24/setup.c b/arch/sh/boards/mach-ecovec24/setup.c
index 5495efa07335..3c59019c4695 100644
--- a/arch/sh/boards/mach-ecovec24/setup.c
+++ b/arch/sh/boards/mach-ecovec24/setup.c
@@ -1438,8 +1438,7 @@ static int __init arch_setup(void)
 	dma_declare_coherent_memory(&ecovec_ceu_devices[0]->dev,
 				    ceu0_dma_membase, ceu0_dma_membase,
 				    ceu0_dma_membase +
-				    CEU_BUFFER_MEMORY_SIZE - 1,
-				    DMA_MEMORY_EXCLUSIVE);
+				    CEU_BUFFER_MEMORY_SIZE - 1);
 	platform_device_add(ecovec_ceu_devices[0]);
 
 	device_initialize(&ecovec_ceu_devices[1]->dev);
@@ -1447,8 +1446,7 @@ static int __init arch_setup(void)
 	dma_declare_coherent_memory(&ecovec_ceu_devices[1]->dev,
 				    ceu1_dma_membase, ceu1_dma_membase,
 				    ceu1_dma_membase +
-				    CEU_BUFFER_MEMORY_SIZE - 1,
-				    DMA_MEMORY_EXCLUSIVE);
+				    CEU_BUFFER_MEMORY_SIZE - 1);
 	platform_device_add(ecovec_ceu_devices[1]);
 
 	gpiod_add_lookup_table(&cn12_power_gpiod_table);
diff --git a/arch/sh/boards/mach-kfr2r09/setup.c b/arch/sh/boards/mach-kfr2r09/setup.c
index 203d249a0a2b..b8bf67c86eab 100644
--- a/arch/sh/boards/mach-kfr2r09/setup.c
+++ b/arch/sh/boards/mach-kfr2r09/setup.c
@@ -603,9 +603,8 @@ static int __init kfr2r09_devices_setup(void)
 	device_initialize(&kfr2r09_ceu_device.dev);
 	arch_setup_pdev_archdata(&kfr2r09_ceu_device);
 	dma_declare_coherent_memory(&kfr2r09_ceu_device.dev,
 				    ceu_dma_membase, ceu_dma_membase,
-				    ceu_dma_membase + CEU_BUFFER_MEMORY_SIZE - 1,
-				    DMA_MEMORY_EXCLUSIVE);
+				    ceu_dma_membase + CEU_BUFFER_MEMORY_SIZE - 1);
 
 	platform_device_add(&kfr2r09_ceu_device);
 
diff --git a/arch/sh/boards/mach-migor/setup.c b/arch/sh/boards/mach-migor/setup.c
index 1d2993bdd231..03579faa4ce3 100644
--- a/arch/sh/boards/mach-migor/setup.c
+++ b/arch/sh/boards/mach-migor/setup.c
@@ -604,9 +604,8 @@ static int __init migor_devices_setup(void)
 	device_initialize(&migor_ceu_device.dev);
 	arch_setup_pdev_archdata(&migor_ceu_device);
 	dma_declare_coherent_memory(&migor_ceu_device.dev,
 				    ceu_dma_membase, ceu_dma_membase,
-				    ceu_dma_membase + CEU_BUFFER_MEMORY_SIZE - 1,
-				    DMA_MEMORY_EXCLUSIVE);
+				    ceu_dma_membase + CEU_BUFFER_MEMORY_SIZE - 1);
 
 	platform_device_add(&migor_ceu_device);
 
diff --git a/arch/sh/boards/mach-se/7724/setup.c b/arch/sh/boards/mach-se/7724/setup.c
index fdbec22ae687..13c2d3ce78f4 100644
--- a/arch/sh/boards/mach-se/7724/setup.c
+++ b/arch/sh/boards/mach-se/7724/setup.c
@@ -941,8 +941,7 @@ static int __init devices_setup(void)
 	dma_declare_coherent_memory(&ms7724se_ceu_devices[0]->dev,
 				    ceu0_dma_membase, ceu0_dma_membase,
 				    ceu0_dma_membase +
-				    CEU_BUFFER_MEMORY_SIZE - 1,
-				    DMA_MEMORY_EXCLUSIVE);
+				    CEU_BUFFER_MEMORY_SIZE - 1);
 	platform_device_add(ms7724se_ceu_devices[0]);
 
 	device_initialize(&ms7724se_ceu_devices[1]->dev);
@@ -950,8 +949,7 @@ static int __init devices_setup(void)
 	dma_declare_coherent_memory(&ms7724se_ceu_devices[1]->dev,
 				    ceu1_dma_membase, ceu1_dma_membase,
 				    ceu1_dma_membase +
-				    CEU_BUFFER_MEMORY_SIZE - 1,
-				    DMA_MEMORY_EXCLUSIVE);
+				    CEU_BUFFER_MEMORY_SIZE - 1);
 	platform_device_add(ms7724se_ceu_devices[1]);
 
 	return platform_add_devices(ms7724se_devices,
diff --git a/arch/sh/drivers/pci/fixups-dreamcast.c b/arch/sh/drivers/pci/fixups-dreamcast.c
index dfdbd05b6eb1..7be8694c0d13 100644
--- a/arch/sh/drivers/pci/fixups-dreamcast.c
+++ b/arch/sh/drivers/pci/fixups-dreamcast.c
@@ -63,8 +63,7 @@ static void gapspci_fixup_resources(struct pci_dev *dev)
 		BUG_ON(dma_declare_coherent_memory(&dev->dev,
 						res.start,
 						region.start,
-						resource_size(&res),
-						DMA_MEMORY_EXCLUSIVE));
+						resource_size(&res)));
 		break;
 	default:
 		printk("PCI: Failed resource fixup\n");
diff --git a/arch/sparc/kernel/iommu.c b/arch/sparc/kernel/iommu.c
index b1a09080e8da..4ae7388b1bff 100644
--- a/arch/sparc/kernel/iommu.c
+++ b/arch/sparc/kernel/iommu.c
@@ -745,15 +745,12 @@ static int dma_4u_supported(struct device *dev, u64 device_mask)
 {
 	struct iommu *iommu = dev->archdata.iommu;
 
-	if (device_mask > DMA_BIT_MASK(32))
-		return 0;
-	if ((device_mask & iommu->dma_addr_mask) == iommu->dma_addr_mask)
+	if (ali_sound_dma_hack(dev, device_mask))
 		return 1;
-#ifdef CONFIG_PCI
-	if (dev_is_pci(dev))
-		return pci64_dma_supported(to_pci_dev(dev), device_mask);
-#endif
-	return 0;
+
+	if (device_mask < iommu->dma_addr_mask)
+		return 0;
+	return 1;
 }
 
 static const struct dma_map_ops sun4u_dma_ops = {
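The rewritten dma_4u_supported() above is one instance of the series' theme: the DMA mask is now treated as an upper bound, so a mask that covers the IOMMU's addressable range is accepted even when it is "too large". A userspace sketch of that comparison (illustrative names, not kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Models the new check: reject only masks that cannot cover the
 * IOMMU's dma_addr_mask; never reject a mask for being too big. */
static int model_dma_supported(uint64_t device_mask, uint64_t iommu_mask)
{
	if (device_mask < iommu_mask)
		return 0;
	return 1;
}
```

Under the old exact-match logic a 64-bit mask against a 32-bit IOMMU would have been refused; the middle assertion below captures the behavior change.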
diff --git a/arch/sparc/kernel/kernel.h b/arch/sparc/kernel/kernel.h
index ddffd368e057..f6f498ba3198 100644
--- a/arch/sparc/kernel/kernel.h
+++ b/arch/sparc/kernel/kernel.h
@@ -45,7 +45,11 @@ void __irq_entry smp_receive_signal_client(int irq, struct pt_regs *regs);
 void __irq_entry smp_kgdb_capture_client(int irq, struct pt_regs *regs);
 
 /* pci.c */
-int pci64_dma_supported(struct pci_dev *pdev, u64 device_mask);
+#ifdef CONFIG_PCI
+int ali_sound_dma_hack(struct device *dev, u64 device_mask);
+#else
+#define ali_sound_dma_hack(dev, mask)	(0)
+#endif
 
 /* signal32.c */
 void do_sigreturn32(struct pt_regs *regs);
diff --git a/arch/sparc/kernel/pci.c b/arch/sparc/kernel/pci.c
index bcfec6a85d23..5ed43828e078 100644
--- a/arch/sparc/kernel/pci.c
+++ b/arch/sparc/kernel/pci.c
@@ -956,51 +956,35 @@ void arch_teardown_msi_irq(unsigned int irq)
 }
 #endif /* !(CONFIG_PCI_MSI) */
 
-static void ali_sound_dma_hack(struct pci_dev *pdev, int set_bit)
+/* ALI sound chips generate 31-bits of DMA, a special register
+ * determines what bit 31 is emitted as.
+ */
+int ali_sound_dma_hack(struct device *dev, u64 device_mask)
 {
+	struct iommu *iommu = dev->archdata.iommu;
 	struct pci_dev *ali_isa_bridge;
 	u8 val;
 
-	/* ALI sound chips generate 31-bits of DMA, a special register
-	 * determines what bit 31 is emitted as.
-	 */
+	if (!dev_is_pci(dev))
+		return 0;
+
+	if (to_pci_dev(dev)->vendor != PCI_VENDOR_ID_AL ||
+	    to_pci_dev(dev)->device != PCI_DEVICE_ID_AL_M5451 ||
+	    device_mask != 0x7fffffff)
+		return 0;
+
 	ali_isa_bridge = pci_get_device(PCI_VENDOR_ID_AL,
 					PCI_DEVICE_ID_AL_M1533,
 					NULL);
 
 	pci_read_config_byte(ali_isa_bridge, 0x7e, &val);
-	if (set_bit)
+	if (iommu->dma_addr_mask & 0x80000000)
 		val |= 0x01;
 	else
 		val &= ~0x01;
 	pci_write_config_byte(ali_isa_bridge, 0x7e, val);
 	pci_dev_put(ali_isa_bridge);
-}
-
-int pci64_dma_supported(struct pci_dev *pdev, u64 device_mask)
-{
-	u64 dma_addr_mask;
-
-	if (pdev == NULL) {
-		dma_addr_mask = 0xffffffff;
-	} else {
-		struct iommu *iommu = pdev->dev.archdata.iommu;
-
-		dma_addr_mask = iommu->dma_addr_mask;
-
-		if (pdev->vendor == PCI_VENDOR_ID_AL &&
-		    pdev->device == PCI_DEVICE_ID_AL_M5451 &&
-		    device_mask == 0x7fffffff) {
-			ali_sound_dma_hack(pdev,
-					   (dma_addr_mask & 0x80000000) != 0);
-			return 1;
-		}
-	}
-
-	if (device_mask >= (1UL << 32UL))
-		return 0;
-
-	return (device_mask & dma_addr_mask) == dma_addr_mask;
+	return 1;
 }
 
 void pci_resource_to_user(const struct pci_dev *pdev, int bar,
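The refactor above folds the quirk detection into ali_sound_dma_hack() itself: it only fires for the ALi M5451 audio device asking for the odd 31-bit (0x7fffffff) mask, and everything else falls through to the normal upper-bound check in the dma_supported() callbacks. A userspace stand-in for that match test (the PCI IDs are the real ones; the function is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Real PCI IDs for the ALi vendor and M5451 audio chip; the predicate
 * itself models only the match test inside ali_sound_dma_hack(). */
#define MODEL_PCI_VENDOR_ID_AL		0x10b9
#define MODEL_PCI_DEVICE_ID_AL_M5451	0x5451

/* The quirk fires only for the M5451 requesting the 31-bit mask. */
static int model_is_ali_quirk(unsigned vendor, unsigned device, uint64_t mask)
{
	return vendor == MODEL_PCI_VENDOR_ID_AL &&
	       device == MODEL_PCI_DEVICE_ID_AL_M5451 &&
	       mask == 0x7fffffff;
}
```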
diff --git a/arch/sparc/kernel/pci_sun4v.c b/arch/sparc/kernel/pci_sun4v.c
index fa0e42b4cbfb..a8af6023c126 100644
--- a/arch/sparc/kernel/pci_sun4v.c
+++ b/arch/sparc/kernel/pci_sun4v.c
@@ -92,7 +92,7 @@ static long iommu_batch_flush(struct iommu_batch *p, u64 mask)
 	prot &= (HV_PCI_MAP_ATTR_READ | HV_PCI_MAP_ATTR_WRITE);
 
 	while (npages != 0) {
-		if (mask <= DMA_BIT_MASK(32)) {
+		if (mask <= DMA_BIT_MASK(32) || !pbm->iommu->atu) {
 			num = pci_sun4v_iommu_map(devhandle,
 					HV_PCI_TSBID(0, entry),
 					npages,
@@ -208,7 +208,7 @@ static void *dma_4v_alloc_coherent(struct device *dev, size_t size,
 	atu = iommu->atu;
 
 	mask = dev->coherent_dma_mask;
-	if (mask <= DMA_BIT_MASK(32))
+	if (mask <= DMA_BIT_MASK(32) || !atu)
 		tbl = &iommu->tbl;
 	else
 		tbl = &atu->tbl;
@@ -674,18 +674,12 @@ static void dma_4v_unmap_sg(struct device *dev, struct scatterlist *sglist,
 static int dma_4v_supported(struct device *dev, u64 device_mask)
 {
 	struct iommu *iommu = dev->archdata.iommu;
-	u64 dma_addr_mask = iommu->dma_addr_mask;
-
-	if (device_mask > DMA_BIT_MASK(32)) {
-		if (iommu->atu)
-			dma_addr_mask = iommu->atu->dma_addr_mask;
-		else
-			return 0;
-	}
 
-	if ((device_mask & dma_addr_mask) == dma_addr_mask)
+	if (ali_sound_dma_hack(dev, device_mask))
 		return 1;
-	return pci64_dma_supported(to_pci_dev(dev), device_mask);
+	if (device_mask < iommu->dma_addr_mask)
+		return 0;
+	return 1;
 }
 
 static const struct dma_map_ops sun4v_dma_ops = {
diff --git a/arch/unicore32/Kconfig b/arch/unicore32/Kconfig
index a7f1ae58d211..817d82608712 100644
--- a/arch/unicore32/Kconfig
+++ b/arch/unicore32/Kconfig
@@ -5,7 +5,6 @@ config UNICORE32
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
-	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_KERNEL_GZIP
 	select HAVE_KERNEL_BZIP2
 	select GENERIC_ATOMIC64
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 90b562a34d65..c1f9b3cf437c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -14,7 +14,6 @@ config X86_32
 	select ARCH_WANT_IPC_PARSE_VERSION
 	select CLKSRC_I8253
 	select CLONE_BACKWARDS
-	select HAVE_GENERIC_DMA_COHERENT
 	select MODULES_USE_ELF_REL
 	select OLD_SIGACTION
 
diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index bacf87ee7975..4b9aafe766c5 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -450,7 +450,6 @@ config USE_OF
 	bool "Flattened Device Tree support"
 	select OF
 	select OF_EARLY_FLATTREE
-	select OF_RESERVED_MEM
 	help
 	  Include support for flattened device tree machine descriptions.
 
diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index 3e63a900b330..059700ea3521 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -191,83 +191,6 @@ config DMA_FENCE_TRACE
 	  lockup related problems for dma-buffers shared across multiple
 	  devices.
 
-config DMA_CMA
-	bool "DMA Contiguous Memory Allocator"
-	depends on HAVE_DMA_CONTIGUOUS && CMA
-	help
-	  This enables the Contiguous Memory Allocator which allows drivers
-	  to allocate big physically-contiguous blocks of memory for use with
-	  hardware components that do not support I/O map nor scatter-gather.
-
-	  You can disable CMA by specifying "cma=0" on the kernel's command
-	  line.
-
-	  For more information see <include/linux/dma-contiguous.h>.
-	  If unsure, say "n".
-
-if DMA_CMA
-comment "Default contiguous memory area size:"
-
-config CMA_SIZE_MBYTES
-	int "Size in Mega Bytes"
-	depends on !CMA_SIZE_SEL_PERCENTAGE
-	default 0 if X86
-	default 16
-	help
-	  Defines the size (in MiB) of the default memory area for Contiguous
-	  Memory Allocator. If the size of 0 is selected, CMA is disabled by
-	  default, but it can be enabled by passing cma=size[MG] to the kernel.
-
-
-config CMA_SIZE_PERCENTAGE
-	int "Percentage of total memory"
-	depends on !CMA_SIZE_SEL_MBYTES
-	default 0 if X86
-	default 10
-	help
-	  Defines the size of the default memory area for Contiguous Memory
-	  Allocator as a percentage of the total memory in the system.
-	  If 0 percent is selected, CMA is disabled by default, but it can be
-	  enabled by passing cma=size[MG] to the kernel.
-
-choice
-	prompt "Selected region size"
-	default CMA_SIZE_SEL_MBYTES
-
-config CMA_SIZE_SEL_MBYTES
-	bool "Use mega bytes value only"
-
-config CMA_SIZE_SEL_PERCENTAGE
-	bool "Use percentage value only"
-
-config CMA_SIZE_SEL_MIN
-	bool "Use lower value (minimum)"
-
-config CMA_SIZE_SEL_MAX
-	bool "Use higher value (maximum)"
-
-endchoice
-
-config CMA_ALIGNMENT
-	int "Maximum PAGE_SIZE order of alignment for contiguous buffers"
-	range 4 12
-	default 8
-	help
-	  DMA mapping framework by default aligns all buffers to the smallest
-	  PAGE_SIZE order which is greater than or equal to the requested buffer
-	  size. This works well for buffers up to a few hundreds kilobytes, but
-	  for larger buffers it just a memory waste. With this parameter you can
-	  specify the maximum PAGE_SIZE order for contiguous buffers. Larger
-	  buffers will be aligned only to this specified order. The order is
-	  expressed as a power of two multiplied by the PAGE_SIZE.
-
-	  For example, if your system defaults to 4KiB pages, the order value
-	  of 8 means that the buffers will be aligned up to 1MiB only.
-
-	  If unsure, leave the default value "8".
-
-endif
-
 config GENERIC_ARCH_TOPOLOGY
 	bool
 	help
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
index aff0ab7bf83d..82389aead6ed 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
@@ -439,42 +439,14 @@ static void vb2_dc_put_userptr(void *buf_priv)
 			set_page_dirty_lock(pages[i]);
 		sg_free_table(sgt);
 		kfree(sgt);
+	} else {
+		dma_unmap_resource(buf->dev, buf->dma_addr, buf->size,
+				   buf->dma_dir, 0);
 	}
 	vb2_destroy_framevec(buf->vec);
 	kfree(buf);
 }
 
-/*
- * For some kind of reserved memory there might be no struct page available,
- * so all that can be done to support such 'pages' is to try to convert
- * pfn to dma address or at the last resort just assume that
- * dma address == physical address (like it has been assumed in earlier version
- * of videobuf2-dma-contig
- */
-
-#ifdef __arch_pfn_to_dma
-static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn)
-{
-	return (dma_addr_t)__arch_pfn_to_dma(dev, pfn);
-}
-#elif defined(__pfn_to_bus)
-static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn)
-{
-	return (dma_addr_t)__pfn_to_bus(pfn);
-}
-#elif defined(__pfn_to_phys)
-static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn)
-{
-	return (dma_addr_t)__pfn_to_phys(pfn);
-}
-#else
-static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn)
-{
-	/* really, we cannot do anything better at this point */
-	return (dma_addr_t)(pfn) << PAGE_SHIFT;
-}
-#endif
-
 static void *vb2_dc_get_userptr(struct device *dev, unsigned long vaddr,
 	unsigned long size, enum dma_data_direction dma_dir)
 {
@@ -528,7 +500,12 @@ static void *vb2_dc_get_userptr(struct device *dev, unsigned long vaddr,
 		for (i = 1; i < n_pages; i++)
 			if (nums[i-1] + 1 != nums[i])
 				goto fail_pfnvec;
-		buf->dma_addr = vb2_dc_pfn_to_dma(buf->dev, nums[0]);
+		buf->dma_addr = dma_map_resource(buf->dev,
+				__pfn_to_phys(nums[0]), size, buf->dma_dir, 0);
+		if (dma_mapping_error(buf->dev, buf->dma_addr)) {
+			ret = -ENOMEM;
+			goto fail_pfnvec;
+		}
 		goto out;
 	}
 
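The videobuf2 hunks above replace the open-coded vb2_dc_pfn_to_dma() guesswork with dma_map_resource(), which means failures must now be detected via dma_mapping_error() and unwound with dma_unmap_resource(). A userspace model of that check-then-use pattern (all names are stand-ins, not the kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* All-ones error cookie, like DMA_MAPPING_ERROR. */
#define MODEL_DMA_MAPPING_ERROR (~(uint64_t)0)

/* Pretend mapper: succeeds by returning the physical address, fails by
 * returning the error cookie, like dma_map_resource(). */
static uint64_t model_map_resource(uint64_t phys, int can_map)
{
	return can_map ? phys : MODEL_DMA_MAPPING_ERROR;
}

/* Models dma_mapping_error(): the only valid way to test the result. */
static int model_mapping_error(uint64_t addr)
{
	return addr == MODEL_DMA_MAPPING_ERROR;
}
```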
diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
index ad8b1f400919..0ce2d8dfc5f1 100644
--- a/drivers/mfd/Kconfig
+++ b/drivers/mfd/Kconfig
@@ -1065,6 +1065,8 @@ config MFD_SI476X_CORE
 
 config MFD_SM501
 	tristate "Silicon Motion SM501"
+	depends on HAS_DMA
+	select DMA_DECLARE_COHERENT
 	---help---
 	  This is the core driver for the Silicon Motion SM501 multimedia
 	  companion chip. This device is a multifunction device which may
@@ -1674,6 +1676,7 @@ config MFD_TC6393XB
 	select GPIOLIB
 	select MFD_CORE
 	select MFD_TMIO
+	select DMA_DECLARE_COHERENT
 	help
 	  Support for Toshiba Mobile IO Controller TC6393XB
 
1679 1682
diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
index ad3fcad4d75b..37c2ccbefecd 100644
--- a/drivers/of/Kconfig
+++ b/drivers/of/Kconfig
@@ -43,6 +43,7 @@ config OF_FLATTREE
 
 config OF_EARLY_FLATTREE
 	bool
+	select DMA_DECLARE_COHERENT if HAS_DMA
 	select OF_FLATTREE
 
 config OF_PROMTREE
@@ -81,10 +82,9 @@ config OF_MDIO
 	  OpenFirmware MDIO bus (Ethernet PHY) accessors
 
 config OF_RESERVED_MEM
-	depends on OF_EARLY_FLATTREE
 	bool
-	help
-	  Helpers to allow for reservation of memory regions
+	depends on OF_EARLY_FLATTREE
+	default y if DMA_DECLARE_COHERENT || DMA_CMA
 
 config OF_RESOLVE
 	bool
diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c
index 097b0d43d13c..8937ba70d817 100644
--- a/drivers/parisc/ccio-dma.c
+++ b/drivers/parisc/ccio-dma.c
@@ -712,8 +712,8 @@ ccio_dma_supported(struct device *dev, u64 mask)
 		return 0;
 	}
 
-	/* only support 32-bit devices (ie PCI/GSC) */
-	return (int)(mask == 0xffffffffUL);
+	/* only support 32-bit or better devices (ie PCI/GSC) */
+	return (int)(mask >= 0xffffffffUL);
 }
 
 /**
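The one-character ccio change above matters for 64-bit capable devices: with `mask == 0xffffffff` a driver requesting DMA_BIT_MASK(64) was refused, while `mask >= 0xffffffff` accepts any mask that covers the 32-bit range the IOMMU can produce. A quick model of old versus new behavior:

```c
#include <assert.h>
#include <stdint.h>

/* Old check: only an exact 32-bit mask was supported. */
static int model_ccio_supported_old(uint64_t mask)
{
	return mask == 0xffffffffULL;
}

/* New check: any mask covering the 32-bit range is supported. */
static int model_ccio_supported_new(uint64_t mask)
{
	return mask >= 0xffffffffULL;
}
```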
diff --git a/drivers/usb/host/ohci-sm501.c b/drivers/usb/host/ohci-sm501.c
index c9233cddf9a2..c26228c25f99 100644
--- a/drivers/usb/host/ohci-sm501.c
+++ b/drivers/usb/host/ohci-sm501.c
@@ -126,8 +126,7 @@ static int ohci_hcd_sm501_drv_probe(struct platform_device *pdev)
 
 	retval = dma_declare_coherent_memory(dev, mem->start,
 					 mem->start - mem->parent->start,
-					 resource_size(mem),
-					 DMA_MEMORY_EXCLUSIVE);
+					 resource_size(mem));
 	if (retval) {
 		dev_err(dev, "cannot declare coherent memory\n");
 		goto err1;
diff --git a/drivers/usb/host/ohci-tmio.c b/drivers/usb/host/ohci-tmio.c
index a631dbb369d7..f88a0370659f 100644
--- a/drivers/usb/host/ohci-tmio.c
+++ b/drivers/usb/host/ohci-tmio.c
@@ -225,7 +225,7 @@ static int ohci_hcd_tmio_drv_probe(struct platform_device *dev)
 	}
 
 	ret = dma_declare_coherent_memory(&dev->dev, sram->start, sram->start,
-				resource_size(sram), DMA_MEMORY_EXCLUSIVE);
+				resource_size(sram));
 	if (ret)
 		goto err_dma_declare;
 
diff --git a/include/linux/device.h b/include/linux/device.h
index f40f6064ba05..b425a7ee04ce 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -1028,8 +1028,10 @@ struct device {
 
 	struct list_head	dma_pools;	/* dma pools (if dma'ble) */
 
+#ifdef CONFIG_DMA_DECLARE_COHERENT
 	struct dma_coherent_mem	*dma_mem; /* internal for coherent mem
 					     override */
+#endif
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index f6ded992c183..70ad15758a70 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -153,7 +153,7 @@ static inline int is_device_dma_capable(struct device *dev)
 	return dev->dma_mask != NULL && *dev->dma_mask != DMA_MASK_NONE;
 }
 
-#ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT
+#ifdef CONFIG_DMA_DECLARE_COHERENT
 /*
  * These three functions are only for dma allocator.
  * Don't use them in device drivers.
@@ -192,7 +192,7 @@ static inline int dma_mmap_from_global_coherent(struct vm_area_struct *vma,
 {
 	return 0;
 }
-#endif /* CONFIG_HAVE_GENERIC_DMA_COHERENT */
+#endif /* CONFIG_DMA_DECLARE_COHERENT */
 
 static inline bool dma_is_direct(const struct dma_map_ops *ops)
 {
@@ -208,6 +208,8 @@ dma_addr_t dma_direct_map_page(struct device *dev, struct page *page,
 		unsigned long attrs);
 int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
 		enum dma_data_direction dir, unsigned long attrs);
+dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir, unsigned long attrs);
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
     defined(CONFIG_SWIOTLB)
@@ -346,19 +348,20 @@ static inline dma_addr_t dma_map_resource(struct device *dev,
 		unsigned long attrs)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
-	dma_addr_t addr;
+	dma_addr_t addr = DMA_MAPPING_ERROR;
 
 	BUG_ON(!valid_dma_direction(dir));
 
 	/* Don't allow RAM to be mapped */
-	BUG_ON(pfn_valid(PHYS_PFN(phys_addr)));
+	if (WARN_ON_ONCE(pfn_valid(PHYS_PFN(phys_addr))))
+		return DMA_MAPPING_ERROR;
 
-	addr = phys_addr;
-	if (ops && ops->map_resource)
+	if (dma_is_direct(ops))
+		addr = dma_direct_map_resource(dev, phys_addr, size, dir, attrs);
+	else if (ops->map_resource)
 		addr = ops->map_resource(dev, phys_addr, size, dir, attrs);
 
 	debug_dma_map_resource(dev, phys_addr, size, dir, addr);
-
 	return addr;
 }
364 367
@@ -369,7 +372,7 @@ static inline void dma_unmap_resource(struct device *dev, dma_addr_t addr,
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	BUG_ON(!valid_dma_direction(dir));
-	if (ops && ops->unmap_resource)
+	if (!dma_is_direct(ops) && ops->unmap_resource)
 		ops->unmap_resource(dev, addr, size, dir, attrs);
 	debug_dma_unmap_resource(dev, addr, size, dir);
 }
@@ -668,15 +671,23 @@ static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
 	return dma_set_mask_and_coherent(dev, mask);
 }
 
-#ifndef arch_setup_dma_ops
+#ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS
+void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
+		const struct iommu_ops *iommu, bool coherent);
+#else
 static inline void arch_setup_dma_ops(struct device *dev, u64 dma_base,
-		u64 size, const struct iommu_ops *iommu,
-		bool coherent) { }
-#endif
+		u64 size, const struct iommu_ops *iommu, bool coherent)
+{
+}
+#endif /* CONFIG_ARCH_HAS_SETUP_DMA_OPS */
 
-#ifndef arch_teardown_dma_ops
-static inline void arch_teardown_dma_ops(struct device *dev) { }
-#endif
+#ifdef CONFIG_ARCH_HAS_TEARDOWN_DMA_OPS
+void arch_teardown_dma_ops(struct device *dev);
+#else
+static inline void arch_teardown_dma_ops(struct device *dev)
+{
+}
+#endif /* CONFIG_ARCH_HAS_TEARDOWN_DMA_OPS */
 
 static inline unsigned int dma_get_max_seg_size(struct device *dev)
 {
@@ -725,19 +736,14 @@ static inline int dma_get_cache_alignment(void)
 	return 1;
 }
 
-/* flags for the coherent memory api */
-#define DMA_MEMORY_EXCLUSIVE		0x01
-
-#ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT
+#ifdef CONFIG_DMA_DECLARE_COHERENT
 int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
-				dma_addr_t device_addr, size_t size, int flags);
+				dma_addr_t device_addr, size_t size);
 void dma_release_declared_memory(struct device *dev);
-void *dma_mark_declared_memory_occupied(struct device *dev,
-					dma_addr_t device_addr, size_t size);
 #else
 static inline int
 dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
-			    dma_addr_t device_addr, size_t size, int flags)
+			    dma_addr_t device_addr, size_t size)
 {
 	return -ENOSYS;
 }
@@ -746,14 +752,7 @@ static inline void
 dma_release_declared_memory(struct device *dev)
 {
 }
-
-static inline void *
-dma_mark_declared_memory_occupied(struct device *dev,
-		dma_addr_t device_addr, size_t size)
-{
-	return ERR_PTR(-EBUSY);
-}
-#endif /* CONFIG_HAVE_GENERIC_DMA_COHERENT */
+#endif /* CONFIG_DMA_DECLARE_COHERENT */
 
 static inline void *dmam_alloc_coherent(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp)
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 0711d18645de..a06ba3013b3b 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -19,7 +19,13 @@ config ARCH_HAS_DMA_COHERENCE_H
 config ARCH_HAS_DMA_SET_MASK
 	bool
 
-config HAVE_GENERIC_DMA_COHERENT
+config DMA_DECLARE_COHERENT
+	bool
+
+config ARCH_HAS_SETUP_DMA_OPS
+	bool
+
+config ARCH_HAS_TEARDOWN_DMA_OPS
 	bool
 
 config ARCH_HAS_SYNC_DMA_FOR_DEVICE
@@ -56,3 +62,116 @@ config DMA_REMAP
 config DMA_DIRECT_REMAP
 	bool
 	select DMA_REMAP
+
+config DMA_CMA
+	bool "DMA Contiguous Memory Allocator"
+	depends on HAVE_DMA_CONTIGUOUS && CMA
+	help
+	  This enables the Contiguous Memory Allocator which allows drivers
+	  to allocate big physically-contiguous blocks of memory for use with
+	  hardware components that do not support I/O map nor scatter-gather.
+
+	  You can disable CMA by specifying "cma=0" on the kernel's command
+	  line.
+
+	  For more information see <include/linux/dma-contiguous.h>.
+	  If unsure, say "n".
+
+if DMA_CMA
+comment "Default contiguous memory area size:"
+
+config CMA_SIZE_MBYTES
+	int "Size in Mega Bytes"
+	depends on !CMA_SIZE_SEL_PERCENTAGE
+	default 0 if X86
+	default 16
+	help
+	  Defines the size (in MiB) of the default memory area for Contiguous
+	  Memory Allocator. If the size of 0 is selected, CMA is disabled by
+	  default, but it can be enabled by passing cma=size[MG] to the kernel.
+
+
+config CMA_SIZE_PERCENTAGE
+	int "Percentage of total memory"
+	depends on !CMA_SIZE_SEL_MBYTES
+	default 0 if X86
+	default 10
+	help
+	  Defines the size of the default memory area for Contiguous Memory
+	  Allocator as a percentage of the total memory in the system.
+	  If 0 percent is selected, CMA is disabled by default, but it can be
+	  enabled by passing cma=size[MG] to the kernel.
+
+choice
+	prompt "Selected region size"
+	default CMA_SIZE_SEL_MBYTES
+
+config CMA_SIZE_SEL_MBYTES
+	bool "Use mega bytes value only"
+
+config CMA_SIZE_SEL_PERCENTAGE
+	bool "Use percentage value only"
+
+config CMA_SIZE_SEL_MIN
+	bool "Use lower value (minimum)"
+
+config CMA_SIZE_SEL_MAX
+	bool "Use higher value (maximum)"
+
+endchoice
+
+config CMA_ALIGNMENT
+	int "Maximum PAGE_SIZE order of alignment for contiguous buffers"
+	range 4 12
+	default 8
+	help
+	  DMA mapping framework by default aligns all buffers to the smallest
+	  PAGE_SIZE order which is greater than or equal to the requested buffer
+	  size. This works well for buffers up to a few hundreds kilobytes, but
+	  for larger buffers it just a memory waste. With this parameter you can
+	  specify the maximum PAGE_SIZE order for contiguous buffers. Larger
+	  buffers will be aligned only to this specified order. The order is
+	  expressed as a power of two multiplied by the PAGE_SIZE.
+
+	  For example, if your system defaults to 4KiB pages, the order value
+	  of 8 means that the buffers will be aligned up to 1MiB only.
+
+	  If unsure, leave the default value "8".
+
+endif
+
+config DMA_API_DEBUG
+	bool "Enable debugging of DMA-API usage"
+	select NEED_DMA_MAP_STATE
+	help
+	  Enable this option to debug the use of the DMA API by device drivers.
+	  With this option you will be able to detect common bugs in device
+	  drivers like double-freeing of DMA mappings or freeing mappings that
+	  were never allocated.
+
+	  This also attempts to catch cases where a page owned by DMA is
+	  accessed by the cpu in a way that could cause data corruption. For
+	  example, this enables cow_user_page() to check that the source page is
+	  not undergoing DMA.
+
+	  This option causes a performance degradation. Use only if you want to
+	  debug device drivers and dma interactions.
+
+	  If unsure, say N.
+
+config DMA_API_DEBUG_SG
+	bool "Debug DMA scatter-gather usage"
+	default y
+	depends on DMA_API_DEBUG
+	help
+	  Perform extra checking that callers of dma_map_sg() have respected the
+	  appropriate segment length/boundary limits for the given device when
+	  preparing DMA scatterlists.
+
+	  This is particularly likely to have been overlooked in cases where the
+	  dma_map_sg() API is used for general bulk mapping of pages rather than
+	  preparing literal scatter-gather descriptors, where there is a risk of
+	  unexpected behaviour from DMA API implementations if the scatterlist
+	  is technically out-of-spec.
+
+	  If unsure, say N.
diff --git a/kernel/dma/Makefile b/kernel/dma/Makefile
index 72ff6e46aa86..d237cf3dc181 100644
--- a/kernel/dma/Makefile
+++ b/kernel/dma/Makefile
@@ -2,7 +2,7 @@
 
 obj-$(CONFIG_HAS_DMA)			+= mapping.o direct.o dummy.o
 obj-$(CONFIG_DMA_CMA)			+= contiguous.o
-obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT)	+= coherent.o
+obj-$(CONFIG_DMA_DECLARE_COHERENT)	+= coherent.o
 obj-$(CONFIG_DMA_VIRT_OPS)		+= virt.o
 obj-$(CONFIG_DMA_API_DEBUG)		+= debug.o
 obj-$(CONFIG_SWIOTLB)			+= swiotlb.o
diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
index 66f0fb7e9a3a..29fd6590dc1e 100644
--- a/kernel/dma/coherent.c
+++ b/kernel/dma/coherent.c
@@ -14,7 +14,6 @@ struct dma_coherent_mem {
 	dma_addr_t	device_base;
 	unsigned long	pfn_base;
 	int		size;
-	int		flags;
 	unsigned long	*bitmap;
 	spinlock_t	spinlock;
 	bool		use_dev_dma_pfn_offset;
@@ -38,12 +37,12 @@ static inline dma_addr_t dma_get_device_base(struct device *dev,
 	return mem->device_base;
 }
 
-static int dma_init_coherent_memory(
-	phys_addr_t phys_addr, dma_addr_t device_addr, size_t size, int flags,
-	struct dma_coherent_mem **mem)
+static int dma_init_coherent_memory(phys_addr_t phys_addr,
+		dma_addr_t device_addr, size_t size,
+		struct dma_coherent_mem **mem)
 {
 	struct dma_coherent_mem *dma_mem = NULL;
-	void __iomem *mem_base = NULL;
+	void *mem_base = NULL;
 	int pages = size >> PAGE_SHIFT;
 	int bitmap_size = BITS_TO_LONGS(pages) * sizeof(long);
 	int ret;
@@ -73,7 +72,6 @@ static int dma_init_coherent_memory(
 	dma_mem->device_base = device_addr;
 	dma_mem->pfn_base = PFN_DOWN(phys_addr);
 	dma_mem->size = pages;
-	dma_mem->flags = flags;
 	spin_lock_init(&dma_mem->spinlock);
 
 	*mem = dma_mem;
@@ -110,12 +108,12 @@ static int dma_assign_coherent_memory(struct device *dev,
 }
 
 int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
-				dma_addr_t device_addr, size_t size, int flags)
+				dma_addr_t device_addr, size_t size)
 {
 	struct dma_coherent_mem *mem;
 	int ret;
 
-	ret = dma_init_coherent_memory(phys_addr, device_addr, size, flags, &mem);
+	ret = dma_init_coherent_memory(phys_addr, device_addr, size, &mem);
 	if (ret)
 		return ret;
 
@@ -137,29 +135,6 @@ void dma_release_declared_memory(struct device *dev)
 }
 EXPORT_SYMBOL(dma_release_declared_memory);
 
-void *dma_mark_declared_memory_occupied(struct device *dev,
-					dma_addr_t device_addr, size_t size)
-{
-	struct dma_coherent_mem *mem = dev->dma_mem;
-	unsigned long flags;
-	int pos, err;
-
-	size += device_addr & ~PAGE_MASK;
-
-	if (!mem)
-		return ERR_PTR(-EINVAL);
-
-	spin_lock_irqsave(&mem->spinlock, flags);
-	pos = PFN_DOWN(device_addr - dma_get_device_base(dev, mem));
-	err = bitmap_allocate_region(mem->bitmap, pos, get_order(size));
-	spin_unlock_irqrestore(&mem->spinlock, flags);
-
-	if (err != 0)
-		return ERR_PTR(err);
-	return mem->virt_base + (pos << PAGE_SHIFT);
-}
-EXPORT_SYMBOL(dma_mark_declared_memory_occupied);
-
 static void *__dma_alloc_from_coherent(struct dma_coherent_mem *mem,
 		ssize_t size, dma_addr_t *dma_handle)
 {
@@ -213,15 +188,7 @@ int dma_alloc_from_dev_coherent(struct device *dev, ssize_t size,
 		return 0;
 
 	*ret = __dma_alloc_from_coherent(mem, size, dma_handle);
-	if (*ret)
-		return 1;
-
-	/*
-	 * In the case where the allocation can not be satisfied from the
-	 * per-device area, try to fall back to generic memory if the
-	 * constraints allow it.
-	 */
-	return mem->flags & DMA_MEMORY_EXCLUSIVE;
+	return 1;
 }
 
 void *dma_alloc_from_global_coherent(ssize_t size, dma_addr_t *dma_handle)
@@ -350,8 +317,7 @@ static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
 
 	if (!mem) {
 		ret = dma_init_coherent_memory(rmem->base, rmem->base,
-					       rmem->size,
-					       DMA_MEMORY_EXCLUSIVE, &mem);
+					       rmem->size, &mem);
 		if (ret) {
 			pr_err("Reserved memory: failed to init DMA memory pool at %pa, size %ld MiB\n",
 			       &rmem->base, (unsigned long)rmem->size / SZ_1M);
diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index 23cf5361bcf1..45d51e8e26f6 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -134,17 +134,6 @@ static u32 nr_total_entries;
 /* number of preallocated entries requested by kernel cmdline */
 static u32 nr_prealloc_entries = PREALLOC_DMA_DEBUG_ENTRIES;
 
-/* debugfs dentry's for the stuff above */
-static struct dentry *dma_debug_dent        __read_mostly;
-static struct dentry *global_disable_dent   __read_mostly;
-static struct dentry *error_count_dent      __read_mostly;
-static struct dentry *show_all_errors_dent  __read_mostly;
-static struct dentry *show_num_errors_dent  __read_mostly;
-static struct dentry *num_free_entries_dent __read_mostly;
-static struct dentry *min_free_entries_dent __read_mostly;
-static struct dentry *nr_total_entries_dent __read_mostly;
-static struct dentry *filter_dent           __read_mostly;
-
 /* per-driver filter related state */
 
 #define NAME_MAX_LEN	64
@@ -840,66 +829,46 @@ static const struct file_operations filter_fops = {
 	.llseek = default_llseek,
 };
 
-static int dma_debug_fs_init(void)
+static int dump_show(struct seq_file *seq, void *v)
 {
-	dma_debug_dent = debugfs_create_dir("dma-api", NULL);
-	if (!dma_debug_dent) {
-		pr_err("can not create debugfs directory\n");
-		return -ENOMEM;
-	}
+	int idx;
 
-	global_disable_dent = debugfs_create_bool("disabled", 0444,
-			dma_debug_dent,
-			&global_disable);
-	if (!global_disable_dent)
-		goto out_err;
-
-	error_count_dent = debugfs_create_u32("error_count", 0444,
-			dma_debug_dent, &error_count);
-	if (!error_count_dent)
-		goto out_err;
-
-	show_all_errors_dent = debugfs_create_u32("all_errors", 0644,
-			dma_debug_dent,
-			&show_all_errors);
-	if (!show_all_errors_dent)
-		goto out_err;
-
-	show_num_errors_dent = debugfs_create_u32("num_errors", 0644,
-			dma_debug_dent,
-			&show_num_errors);
-	if (!show_num_errors_dent)
-		goto out_err;
-
-	num_free_entries_dent = debugfs_create_u32("num_free_entries", 0444,
-			dma_debug_dent,
-			&num_free_entries);
-	if (!num_free_entries_dent)
-		goto out_err;
-
-	min_free_entries_dent = debugfs_create_u32("min_free_entries", 0444,
-			dma_debug_dent,
-			&min_free_entries);
-	if (!min_free_entries_dent)
-		goto out_err;
-
-	nr_total_entries_dent = debugfs_create_u32("nr_total_entries", 0444,
-			dma_debug_dent,
-			&nr_total_entries);
-	if (!nr_total_entries_dent)
-		goto out_err;
-
-	filter_dent = debugfs_create_file("driver_filter", 0644,
-			dma_debug_dent, NULL, &filter_fops);
-	if (!filter_dent)
-		goto out_err;
+	for (idx = 0; idx < HASH_SIZE; idx++) {
+		struct hash_bucket *bucket = &dma_entry_hash[idx];
+		struct dma_debug_entry *entry;
+		unsigned long flags;
 
+		spin_lock_irqsave(&bucket->lock, flags);
+		list_for_each_entry(entry, &bucket->list, list) {
+			seq_printf(seq,
+				   "%s %s %s idx %d P=%llx N=%lx D=%llx L=%llx %s %s\n",
+				   dev_name(entry->dev),
+				   dev_driver_string(entry->dev),
+				   type2name[entry->type], idx,
+				   phys_addr(entry), entry->pfn,
+				   entry->dev_addr, entry->size,
+				   dir2name[entry->direction],
+				   maperr2str[entry->map_err_type]);
+		}
+		spin_unlock_irqrestore(&bucket->lock, flags);
+	}
 	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(dump);
 
-out_err:
-	debugfs_remove_recursive(dma_debug_dent);
-
-	return -ENOMEM;
+static void dma_debug_fs_init(void)
+{
+	struct dentry *dentry = debugfs_create_dir("dma-api", NULL);
+
+	debugfs_create_bool("disabled", 0444, dentry, &global_disable);
+	debugfs_create_u32("error_count", 0444, dentry, &error_count);
+	debugfs_create_u32("all_errors", 0644, dentry, &show_all_errors);
+	debugfs_create_u32("num_errors", 0644, dentry, &show_num_errors);
+	debugfs_create_u32("num_free_entries", 0444, dentry, &num_free_entries);
+	debugfs_create_u32("min_free_entries", 0444, dentry, &min_free_entries);
+	debugfs_create_u32("nr_total_entries", 0444, dentry, &nr_total_entries);
+	debugfs_create_file("driver_filter", 0644, dentry, NULL, &filter_fops);
+	debugfs_create_file("dump", 0444, dentry, NULL, &dump_fops);
 }
 
 static int device_dma_allocations(struct device *dev, struct dma_debug_entry **out_entry)
@@ -985,12 +954,7 @@ static int dma_debug_init(void)
 		spin_lock_init(&dma_entry_hash[i].lock);
 	}
 
-	if (dma_debug_fs_init() != 0) {
-		pr_err("error creating debugfs entries - disabling\n");
-		global_disable = true;
-
-		return 0;
-	}
+	dma_debug_fs_init();
 
 	nr_pages = DIV_ROUND_UP(nr_prealloc_entries, DMA_DEBUG_DYNAMIC_ENTRIES);
 	for (i = 0; i < nr_pages; ++i)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index d5bb51cf27c6..7ff807f2121c 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -355,6 +355,20 @@ out_unmap:
 }
 EXPORT_SYMBOL(dma_direct_map_sg);
 
+dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir, unsigned long attrs)
+{
+	dma_addr_t dma_addr = paddr;
+
+	if (unlikely(!dma_direct_possible(dev, dma_addr, size))) {
+		report_addr(dev, dma_addr, size);
+		return DMA_MAPPING_ERROR;
+	}
+
+	return dma_addr;
+}
+EXPORT_SYMBOL(dma_direct_map_resource);
+
 /*
  * Because 32-bit DMA masks are so common we expect every architecture to be
  * able to satisfy them - either by not supporting more physical memory, or by
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 91ed81250fb3..bd62be80228e 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1654,42 +1654,6 @@ config PROVIDE_OHCI1394_DMA_INIT
 
 	  See Documentation/debugging-via-ohci1394.txt for more information.
 
-config DMA_API_DEBUG
-	bool "Enable debugging of DMA-API usage"
-	select NEED_DMA_MAP_STATE
-	help
-	  Enable this option to debug the use of the DMA API by device drivers.
-	  With this option you will be able to detect common bugs in device
-	  drivers like double-freeing of DMA mappings or freeing mappings that
-	  were never allocated.
-
-	  This also attempts to catch cases where a page owned by DMA is
-	  accessed by the cpu in a way that could cause data corruption. For
-	  example, this enables cow_user_page() to check that the source page is
-	  not undergoing DMA.
-
-	  This option causes a performance degradation. Use only if you want to
-	  debug device drivers and dma interactions.
-
-	  If unsure, say N.
-
-config DMA_API_DEBUG_SG
-	bool "Debug DMA scatter-gather usage"
-	default y
-	depends on DMA_API_DEBUG
-	help
-	  Perform extra checking that callers of dma_map_sg() have respected the
-	  appropriate segment length/boundary limits for the given device when
-	  preparing DMA scatterlists.
-
-	  This is particularly likely to have been overlooked in cases where the
-	  dma_map_sg() API is used for general bulk mapping of pages rather than
-	  preparing literal scatter-gather descriptors, where there is a risk of
-	  unexpected behaviour from DMA API implementations if the scatterlist
-	  is technically out-of-spec.
-
-	  If unsure, say N.
-
 menuconfig RUNTIME_TESTING_MENU
 	bool "Runtime Testing"
 	def_bool y