Diffstat (limited to 'Documentation/cachetlb.txt')

 Documentation/cachetlb.txt | 36
 1 file changed, 30 insertions(+), 6 deletions(-)
diff --git a/Documentation/cachetlb.txt b/Documentation/cachetlb.txt
index da42ab414c48..9164ae3b83bc 100644
--- a/Documentation/cachetlb.txt
+++ b/Documentation/cachetlb.txt
@@ -5,7 +5,7 @@
 
 This document describes the cache/tlb flushing interfaces called
 by the Linux VM subsystem. It enumerates over each interface,
-describes it's intended purpose, and what side effect is expected
+describes its intended purpose, and what side effect is expected
 after the interface is invoked.
 
 The side effects described below are stated for a uniprocessor
@@ -88,12 +88,12 @@ changes occur:
 	This is used primarily during fault processing.
 
 5) void update_mmu_cache(struct vm_area_struct *vma,
-			 unsigned long address, pte_t pte)
+			 unsigned long address, pte_t *ptep)
 
 	At the end of every page fault, this routine is invoked to
 	tell the architecture specific code that a translation
-	described by "pte" now exists at virtual address "address"
-	for address space "vma->vm_mm", in the software page tables.
+	now exists at virtual address "address" for address space
+	"vma->vm_mm", in the software page tables.
 
 	A port may use this information in any way it so chooses.
 	For example, it could use this event to pre-load TLB
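
To illustrate the hook touched by this hunk, here is a minimal sketch of an
architecture-side update_mmu_cache() using the new pte_t *ptep prototype.
preload_tlb_entry() is a hypothetical stand-in for whatever TLB-preload
primitive a software-managed-TLB port would provide; the sketch is not taken
from any real port.

	#include <linux/mm.h>

	/* Hypothetical TLB-write primitive; not a real kernel API. */
	extern void preload_tlb_entry(struct mm_struct *mm,
				      unsigned long addr, pte_t pte);

	void update_mmu_cache(struct vm_area_struct *vma,
			      unsigned long address, pte_t *ptep)
	{
		pte_t pte = *ptep;

		/* Nothing to preload if the PTE does not map a present page. */
		if (!pte_present(pte))
			return;

		/* Install the translation so the access that just faulted
		 * does not immediately miss in the TLB again. */
		preload_tlb_entry(vma->vm_mm, address, pte);
	}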
@@ -231,7 +231,7 @@ require a whole different set of interfaces to handle properly.
 The biggest problem is that of virtual aliasing in the data cache
 of a processor.
 
-Is your port susceptible to virtual aliasing in it's D-cache?
+Is your port susceptible to virtual aliasing in its D-cache?
 Well, if your D-cache is virtually indexed, is larger in size than
 PAGE_SIZE, and does not prevent multiple cache lines for the same
 physical address from existing at once, you have this problem.
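
As a rough illustration of the test described in this hunk, the standalone
sketch below compares the size of one cache way against PAGE_SIZE, which is
the usual way the "larger in size than PAGE_SIZE" condition is made concrete
for a virtually indexed, physically tagged D-cache; the cache geometry values
are invented for the example.

	#include <stdbool.h>
	#include <stdio.h>

	#define PAGE_SIZE 4096u

	int main(void)
	{
		unsigned int cache_size = 32 * 1024;	/* total D-cache bytes (made up) */
		unsigned int ways = 2;			/* set associativity (made up) */

		/* The index bits within one way come from the virtual address.
		 * If a way spans more than a page, two virtual mappings of the
		 * same physical page can land in different sets, i.e. alias. */
		unsigned int way_size = cache_size / ways;
		bool can_alias = way_size > PAGE_SIZE;

		printf("way size %u vs PAGE_SIZE %u: aliasing %s\n",
		       way_size, PAGE_SIZE,
		       can_alias ? "possible" : "prevented by indexing");
		return 0;
	}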
@@ -249,7 +249,7 @@ one way to solve this (in particular SPARC_FLAG_MMAPSHARED).
 Next, you have to solve the D-cache aliasing issue for all
 other cases. Please keep in mind that fact that, for a given page
 mapped into some user address space, there is always at least one more
-mapping, that of the kernel in it's linear mapping starting at
+mapping, that of the kernel in its linear mapping starting at
 PAGE_OFFSET. So immediately, once the first user maps a given
 physical page into its address space, by implication the D-cache
 aliasing problem has the potential to exist since the kernel already
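
To make the kernel-side alias concrete, the short sketch below shows how the
linear-mapping address of a lowmem page can be obtained with the stock
page_address() helper; it assumes a lowmem page and adds nothing
port-specific.

	#include <linux/mm.h>

	/* Every lowmem page already has a kernel alias in the linear
	 * mapping at PAGE_OFFSET; this alias exists before any user
	 * mapping of the page is ever created. */
	static void *kernel_alias_of(struct page *page)
	{
		return page_address(page);
	}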
@@ -377,3 +377,27 @@ maps this page at its virtual address.
 	All the functionality of flush_icache_page can be implemented in
 	flush_dcache_page and update_mmu_cache. In 2.7 the hope is to
 	remove this interface completely.
+
+The final category of APIs is for I/O to deliberately aliased address
+ranges inside the kernel. Such aliases are set up by use of the
+vmap/vmalloc API. Since kernel I/O goes via physical pages, the I/O
+subsystem assumes that the user mapping and kernel offset mapping are
+the only aliases. This isn't true for vmap aliases, so anything in
+the kernel trying to do I/O to vmap areas must manually manage
+coherency. It must do this by flushing the vmap range before doing
+I/O and invalidating it after the I/O returns.
+
+  void flush_kernel_vmap_range(void *vaddr, int size)
+       flushes the kernel cache for a given virtual address range in
+       the vmap area. This is to make sure that any data the kernel
+       modified in the vmap range is made visible to the physical
+       page. The design is to make this area safe to perform I/O on.
+       Note that this API does *not* also flush the offset map alias
+       of the area.
+
+  void invalidate_kernel_vmap_range(void *vaddr, int size) invalidates
+       the cache for a given virtual address range in the vmap area
+       which prevents the processor from making the cache stale by
+       speculatively reading data while the I/O was occurring to the
+       physical pages. This is only necessary for data reads into the
+       vmap area.
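
As a usage illustration of the two interfaces added by this hunk, the sketch
below brackets a read into a vmap'd buffer with the flush and invalidate
calls. The I/O submission step is elided since it depends on the subsystem
involved, the function name is hypothetical, and the assumption that
linux/highmem.h provides the declarations follows the usual kernel layout.

	#include <linux/highmem.h>	/* flush/invalidate_kernel_vmap_range() */

	static void read_into_vmap_buffer(void *vaddr, int size)
	{
		/* Write back any dirty lines held by the vmap alias so the
		 * underlying physical pages are up to date before the
		 * device touches them. */
		flush_kernel_vmap_range(vaddr, size);

		/*
		 * ... submit the I/O against the underlying physical pages
		 * here and wait for it to complete ...
		 */

		/* Discard anything the CPU speculatively pulled into the
		 * cache through the vmap alias while the I/O was in flight;
		 * needed because this is a read into the vmap area. */
		invalidate_kernel_vmap_range(vaddr, size);
	}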