author	James Bottomley <James.Bottomley@suse.de>	2010-01-25 12:42:20 -0500
committer	James Bottomley <James.Bottomley@suse.de>	2010-01-25 12:42:20 -0500
commit	9df5f74194871ebd0e51ef5ad2eca5084acaaaba (patch)
tree	e167b9ec3a7948e0706754de4a303dc018ec9817
parent	6b7b284958d47b77d06745b36bc7f36dab769d9b (diff)
mm: add coherence API for DMA to vmalloc/vmap areas
On Virtually Indexed architectures (which don't do automatic alias
resolution in their caches), we have to flush via the correct virtual
address to prepare pages for DMA.

On some architectures (like arm) we cannot prevent the CPU from doing
data move-in along the alias (and thus giving stale read data), so we
not only have to introduce a flush API to push dirty cache lines out,
but also an invalidate API to kill inconsistent cache lines that may
have moved in before DMA changed the data.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
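As a rough illustration of the calling pattern these two APIs establish, here is a minimal sketch of a driver DMA-read path; the buffer, the length, and the do_dma_read() helper are hypothetical stand-ins, not code from this patch.

	#include <linux/highmem.h>

	/* Hypothetical helper: DMA device data into a vmalloc'd buffer.
	 * do_dma_read() is a placeholder for whatever programs the
	 * device; it is not a real kernel API. */
	static void dma_read_into_vmap_buffer(void *buf, int len)
	{
		/* Push dirty cache lines for the vmap alias out to the
		 * physical pages before the device touches them. */
		flush_kernel_vmap_range(buf, len);

		do_dma_read(buf, len);

		/* Discard lines the CPU may have speculatively moved in
		 * along the vmap alias while the DMA was in flight, so
		 * the CPU sees the device's data rather than stale
		 * cache contents. */
		invalidate_kernel_vmap_range(buf, len);
	}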
-rw-r--r--	Documentation/cachetlb.txt	24
-rw-r--r--	include/linux/highmem.h	6
2 files changed, 30 insertions, 0 deletions
diff --git a/Documentation/cachetlb.txt b/Documentation/cachetlb.txt
index da42ab414c48..b231414bb8bc 100644
--- a/Documentation/cachetlb.txt
+++ b/Documentation/cachetlb.txt
@@ -377,3 +377,27 @@ maps this page at its virtual address.
 All the functionality of flush_icache_page can be implemented in
 flush_dcache_page and update_mmu_cache. In 2.7 the hope is to
 remove this interface completely.
+
+The final category of APIs is for I/O to deliberately aliased address
+ranges inside the kernel.  Such aliases are set up by use of the
+vmap/vmalloc API.  Since kernel I/O goes via physical pages, the I/O
+subsystem assumes that the user mapping and kernel offset mapping are
+the only aliases.  This isn't true for vmap aliases, so anything in
+the kernel trying to do I/O to vmap areas must manually manage
+coherency.  It must do this by flushing the vmap range before doing
+I/O and invalidating it after the I/O returns.
+
+  void flush_kernel_vmap_range(void *vaddr, int size)
+       flushes the kernel cache for a given virtual address range in
+       the vmap area.  This is to make sure that any data the kernel
+       modified in the vmap range is made visible to the physical
+       page.  The design is to make this area safe to perform I/O on.
+       Note that this API does *not* also flush the offset map alias
+       of the area.
+
+  void invalidate_kernel_vmap_range(void *vaddr, int size) invalidates
+       the cache for a given virtual address range in the vmap area
+       which prevents the processor from making the cache stale by
+       speculatively reading data while the I/O was occurring to the
+       physical pages.  This is only necessary for data reads into the
+       vmap area.
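For the opposite direction, where the device only reads from the buffer, the documentation above states that invalidation is only necessary for data reads into the vmap area, so the flush alone should suffice; a sketch under that assumption, with do_dma_write() again a hypothetical placeholder:

	#include <linux/highmem.h>

	/* Hypothetical write-out path: the device DMAs *from* a
	 * vmalloc'd buffer, so per the documentation above no
	 * invalidate is required afterwards. */
	static void dma_write_from_vmap_buffer(void *buf, int len)
	{
		/* Make the kernel's modifications along the vmap alias
		 * visible to the physical pages the device will read. */
		flush_kernel_vmap_range(buf, len);

		do_dma_write(buf, len);
	}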
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 211ff4497269..adfe1013b2bd 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -17,6 +17,12 @@ static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page
 static inline void flush_kernel_dcache_page(struct page *page)
 {
 }
+static inline void flush_kernel_vmap_range(void *vaddr, int size)
+{
+}
+static inline void invalidate_kernel_vmap_range(void *vaddr, int size)
+{
+}
 #endif
 
 #include <asm/kmap_types.h>
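The stubs above compile to nothing on architectures whose caches resolve aliases automatically, so callers may use the pair unconditionally. A virtually indexed architecture would instead define the guard macro that already protects the flush_kernel_dcache_page() stub and provide real implementations in its asm/cacheflush.h; a sketch, assuming that guard is ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE in highmem.h of this era (the arch name below is illustrative only):

	/* arch/foo/include/asm/cacheflush.h (hypothetical VIPT arch) */
	#define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
	void flush_kernel_dcache_page(struct page *page);
	void flush_kernel_vmap_range(void *vaddr, int size);
	void invalidate_kernel_vmap_range(void *vaddr, int size);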