		Cache and TLB Flushing
		     Under Linux

	    David S. Miller <davem@redhat.com>

This document describes the cache/tlb flushing interfaces called
by the Linux VM subsystem.  It enumerates each interface,
describes its intended purpose, and the side effects expected
after the interface is invoked.

The side effects described below are stated for a uniprocessor
implementation, and what is to happen on that single processor.  The
SMP cases are a simple extension: the side effect for a particular
interface simply occurs on all processors in the system.  Don't let
this scare you into thinking SMP cache/tlb flushing must be
inefficient; this is in fact an area where many optimizations are
possible.  For example, if it can be proven that a user address space
has never executed on a cpu (see vma->cpu_vm_mask), one need not
perform a flush for this address space on that cpu.

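As a minimal sketch of that optimization (assuming the cpumask lives
in the mm, and that smp_flush_tlb_mm_local() is a hypothetical
per-cpu flush primitive, not a real kernel interface):

	/* Hedged sketch: only cpus which have ever run 'mm' need
	 * to perform the flush; all others can be skipped entirely.
	 */
	static void smp_flush_tlb_mm(struct mm_struct *mm)
	{
		int cpu;

		for (cpu = 0; cpu < NR_CPUS; cpu++) {
			if (!cpu_isset(cpu, mm->cpu_vm_mask))
				continue;	/* never ran here */
			smp_flush_tlb_mm_local(mm, cpu);
		}
	}
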
First, the TLB flushing interfaces, since they are the simplest.  The
"TLB" is abstracted under Linux as something the cpu uses to cache
virtual-->physical address translations obtained from the software
page tables.  This means that if the software page tables change, it
is possible for stale translations to exist in this "TLB" cache.
Therefore when software page table changes occur, the kernel will
invoke one of the following flush methods _after_ the page table
changes occur:

1) void flush_tlb_all(void)

	The most severe flush of all.  After this interface runs,
	any previous page table modification whatsoever will be
	visible to the cpu.

	This is usually invoked when the kernel page tables are
	changed, since such translations are "global" in nature.

2) void flush_tlb_mm(struct mm_struct *mm)

	This interface flushes an entire user address space from
	the TLB.  After running, this interface must make sure that
	any previous page table modifications for the address space
	'mm' will be visible to the cpu.  That is, after running,
	there will be no entries in the TLB for 'mm'.

	This interface is used to handle whole address space
	page table operations such as what happens during
	fork and exec.

	Platform developers note that generic code will always
	invoke this interface without mm->page_table_lock held.

3) void flush_tlb_range(struct vm_area_struct *vma,
			unsigned long start, unsigned long end)

	Here we are flushing a specific range of (user) virtual
	address translations from the TLB.  After running, this
	interface must make sure that any previous page table
	modifications for the address space 'vma->vm_mm' in the range
	'start' to 'end-1' will be visible to the cpu.  That is, after
	running, there will be no entries in the TLB for 'mm' for
	virtual addresses in the range 'start' to 'end-1'.

	The "vma" is the backing store being used for the region.
	Primarily, this is used for munmap() type operations.

	The interface is provided in hopes that the port can find
	a suitably efficient method for removing multiple page
	sized translations from the TLB, instead of having the kernel
	call flush_tlb_page (see below) for each entry which may be
	modified; a sketch of that naive fallback appears after this
	entry.

	Platform developers note that generic code will always
	invoke this interface with mm->page_table_lock held.

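As promised above, here is a sketch of the naive fallback, assuming
nothing about the port's hardware; a real implementation would
replace the loop with a single range-flush operation where one
exists:

	static inline void flush_tlb_range(struct vm_area_struct *vma,
					   unsigned long start,
					   unsigned long end)
	{
		unsigned long addr;

		/* One page-sized flush per page in [start, end). */
		for (addr = start; addr < end; addr += PAGE_SIZE)
			flush_tlb_page(vma, addr);
	}
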
4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)

	This time we need to remove the PAGE_SIZE sized translation
	from the TLB.  The 'vma' is the backing structure used by
	Linux to keep track of mmap'd regions for a process; the
	address space is available via vma->vm_mm.  Also, one may
	test (vma->vm_flags & VM_EXEC) to see if this region is
	executable (and thus could be in the 'instruction TLB' in
	split-tlb type setups); a sketch of such a test follows this
	entry.

	After running, this interface must make sure that any previous
	page table modification for address space 'vma->vm_mm' for
	user virtual address 'addr' will be visible to the cpu.  That
	is, after running, there will be no entries in the TLB for
	'vma->vm_mm' for virtual address 'addr'.

	This is used primarily during fault processing.

	Platform developers note that generic code will always
	invoke this interface with mm->page_table_lock held.

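Here is a sketch of that VM_EXEC test for a split-tlb port;
__flush_dtlb_page() and __flush_itlb_page() are hypothetical
port-private helpers, not generic kernel interfaces:

	void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
	{
		/* Data translations always need to be shot down. */
		__flush_dtlb_page(vma->vm_mm, addr);

		/* Executable mappings may also live in the ITLB. */
		if (vma->vm_flags & VM_EXEC)
			__flush_itlb_page(vma->vm_mm, addr);
	}
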
5) void flush_tlb_pgtables(struct mm_struct *mm,
			   unsigned long start, unsigned long end)

	The software page tables for address space 'mm' for virtual
	addresses in the range 'start' to 'end-1' are being torn down.

	Some platforms cache the lowest level of the software page tables
	in a linear virtually mapped array, to make TLB miss processing
	more efficient.  On such platforms, since the TLB is caching the
	software page table structure, it needs to be flushed when parts
	of the software page table tree are unlinked/freed.

	Sparc64 is one example of a platform which does this.

	Usually, when munmap()'ing an area of user virtual address
	space, the kernel leaves the page table parts around and just
	marks the individual pte's as invalid.  However, if very large
	portions of the address space are unmapped, the kernel frees up
	those portions of the software page tables to prevent potential
	excessive kernel memory usage caused by erratic mmap/munmap
	sequences.  It is at these times that flush_tlb_pgtables will
	be invoked.

6) void update_mmu_cache(struct vm_area_struct *vma,
			 unsigned long address, pte_t pte)

	At the end of every page fault, this routine is invoked to
	tell the architecture specific code that a translation
	described by "pte" now exists at virtual address "address"
	for address space "vma->vm_mm", in the software page tables.

	A port may use this information in any way it so chooses.
	For example, it could use this event to pre-load TLB
	translations for software managed TLB configurations.
	The sparc64 port currently does this; a sketch of the idea
	follows.

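A minimal sketch of the pre-load idea, assuming a hypothetical
port-private tlb_preload() primitive which installs one translation
into a software-managed TLB:

	void update_mmu_cache(struct vm_area_struct *vma,
			      unsigned long address, pte_t pte)
	{
		/* The translation is already in the software page
		 * tables; seed the TLB so that the first touch of
		 * 'address' does not take a TLB miss.
		 */
		if (pte_present(pte))
			tlb_preload(vma->vm_mm, address, pte);
	}
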
7) void tlb_migrate_finish(struct mm_struct *mm)

	This interface is called at the end of an explicit
	process migration.  It provides a hook to allow a
	platform to update TLB or context-specific information
	for the address space.

	The ia64 sn2 platform is one example of a platform
	that uses this interface.

8) void lazy_mmu_prot_update(pte_t pte)

	This interface is called whenever the protection on
	any user PTE changes.  It provides a notification
	to architecture specific code to take appropriate action.


Next, we have the cache flushing interfaces.  In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms:

1) flush_cache_mm(mm);
   change_all_page_tables_of(mm);
   flush_tlb_mm(mm);

2) flush_cache_range(vma, start, end);
   change_range_of_page_tables(mm, start, end);
   flush_tlb_range(vma, start, end);

3) flush_cache_page(vma, addr, pfn);
   set_pte(pte_pointer, new_pte_val);
   flush_tlb_page(vma, addr);

The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that virtual address is flushed from the cache.  The HyperSparc
cpu is one such cpu.

The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu.  Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed.  So, for example, ports with
the physically indexed, physically tagged caches of IA32 processors
have no need to implement these interfaces, since the caches are
fully synchronized and have no dependency on translation information.

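On such fully coherent configurations the routines below typically
collapse to no-ops in the port's asm/cacheflush.h, along these lines:

	#define flush_cache_mm(mm)			do { } while (0)
	#define flush_cache_range(vma, start, end)	do { } while (0)
	#define flush_cache_page(vma, vaddr, pfn)	do { } while (0)
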
Here are the routines, one by one:

1) void flush_cache_mm(struct mm_struct *mm)

	This interface flushes an entire user address space from
	the caches.  That is, after running, there will be no cache
	lines associated with 'mm'.

	This interface is used to handle whole address space
	page table operations such as what happens during
	fork, exit, and exec.

2) void flush_cache_range(struct vm_area_struct *vma,
			  unsigned long start, unsigned long end)

	Here we are flushing a specific range of (user) virtual
	addresses from the cache.  After running, there will be no
	entries in the cache for 'vma->vm_mm' for virtual addresses in
	the range 'start' to 'end-1'.

	The "vma" is the backing store being used for the region.
	Primarily, this is used for munmap() type operations.

	The interface is provided in hopes that the port can find
	a suitably efficient method for removing multiple page
	sized regions from the cache, instead of having the kernel
	call flush_cache_page (see below) for each entry which may be
	modified.

3) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)

	This time we need to remove a PAGE_SIZE sized range
	from the cache.  The 'vma' is the backing structure used by
	Linux to keep track of mmap'd regions for a process; the
	address space is available via vma->vm_mm.  Also, one may
	test (vma->vm_flags & VM_EXEC) to see if this region is
	executable (and thus could be in the 'instruction cache' in
	"Harvard" type cache layouts).

	The 'pfn' indicates the physical page frame (shift this value
	left by PAGE_SHIFT to get the physical address) that 'addr'
	translates to.  It is this mapping which should be removed from
	the cache.

	After running, there will be no entries in the cache for
	'vma->vm_mm' for virtual address 'addr' which translates
	to 'pfn'.

	This is used primarily during fault processing.

4) void flush_cache_kmaps(void)

	This routine need only be implemented if the platform utilizes
	highmem.  It will be called right before all of the kmaps
	are invalidated.

	After running, there will be no entries in the cache for
	the kernel virtual address range PKMAP_ADDR(0) to
	PKMAP_ADDR(LAST_PKMAP).

	This routine should be implemented in asm/highmem.h.

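For a platform whose caches need no work here, the definition can be
a simple no-op, along these lines:

	#define flush_cache_kmaps()	do { } while (0)
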
5) void flush_cache_vmap(unsigned long start, unsigned long end)
   void flush_cache_vunmap(unsigned long start, unsigned long end)

	Here in these two interfaces we are flushing a specific range
	of (kernel) virtual addresses from the cache.  After running,
	there will be no entries in the cache for the kernel address
	space for virtual addresses in the range 'start' to 'end-1'.

	The first of these two routines is invoked after map_vm_area()
	has installed the page table entries.  The second is invoked
	before unmap_vm_area() deletes the page table entries.

There exists another whole class of cpu cache issues which currently
require a different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.

Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address from existing at once, you have this problem.

If your D-cache has this problem, first define SHMLBA in
asm/shmparam.h properly; it should essentially be the size of your
virtually addressed D-cache (or if the size is variable, the largest
possible size).  This setting will force the SYSv IPC layer to only
allow user processes to mmap shared memory at addresses which are a
multiple of this value.

NOTE: This does not fix shared mmaps, check out the sparc64 port for
one way to solve this (in particular SPARC_FLAG_MMAPSHARED).

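For example, a port with a 16KB virtually indexed D-cache and 4KB
pages might define (the size here is purely illustrative):

	/* asm/shmparam.h: attach addresses must be multiples of the
	 * D-cache size so shared mappings land on the same color.
	 */
	#define SHMLBA	(4 * PAGE_SIZE)
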
Next, you have to solve the D-cache aliasing issue for all
other cases.  Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one more
mapping, that of the kernel in its linear mapping starting at
PAGE_OFFSET.  So immediately, once the first user maps a given
physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist since the kernel already
maps this page at its virtual address.

  void copy_user_page(void *to, void *from, unsigned long addr, struct page *page)
  void clear_user_page(void *to, unsigned long addr, struct page *page)

	These two routines store data in user anonymous or COW
	pages.  They allow a port to efficiently avoid D-cache alias
	issues between userspace and the kernel.

	For example, a port may temporarily map 'from' and 'to' to
	kernel virtual addresses during the copy.  The virtual address
	for these two pages is chosen in such a way that the kernel
	load/store instructions happen to virtual addresses which are
	of the same "color" as the user mapping of the page.  Sparc64,
	for example, uses this technique.

	The 'addr' parameter tells the virtual address where the
	user will ultimately have this page mapped, and the 'page'
	parameter gives a pointer to the struct page of the target.

	If D-cache aliasing is not an issue, these two routines may
	simply call memcpy/memset directly and do nothing more, as
	in the sketch below.

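A sketch of that trivial case, for a port with no D-cache aliasing;
the 'addr' and 'page' arguments can simply be ignored:

	void copy_user_page(void *to, void *from, unsigned long addr,
			    struct page *page)
	{
		memcpy(to, from, PAGE_SIZE);
	}

	void clear_user_page(void *to, unsigned long addr,
			     struct page *page)
	{
		memset(to, 0, PAGE_SIZE);
	}
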
  void flush_dcache_page(struct page *page)

	Any time the kernel writes to a page cache page, _OR_
	the kernel is about to read from a page cache page and
	user space shared/writable mappings of this page potentially
	exist, this routine is called.

	NOTE: This routine need only be called for page cache pages
	      which can potentially ever be mapped into the address
	      space of a user process.  So for example, VFS layer code
	      handling vfs symlinks in the page cache need not call
	      this interface at all.

	The phrase "kernel writes to a page cache page" means,
	specifically, that the kernel executes store instructions
	that dirty data in that page at the page->virtual mapping
	of that page.  It is important to flush here to handle
	D-cache aliasing, to make sure these kernel stores are
	visible to user space mappings of that page.

	The corollary case is just as important: if there are users
	which have shared+writable mappings of this file, we must make
	sure that kernel reads of these pages will see the most recent
	stores done by the user.

	If D-cache aliasing is not an issue, this routine may
	simply be defined as a nop on that architecture.

	There is a bit set aside in page->flags (PG_arch_1) as
	"architecture private".  The kernel guarantees that,
	for pagecache pages, it will clear this bit when such
	a page first enters the pagecache.

	This allows these interfaces to be implemented much more
	efficiently.  It allows one to "defer" (perhaps indefinitely)
	the actual flush if there are currently no user processes
	mapping this page.  See sparc64's flush_dcache_page and
	update_mmu_cache implementations for an example of how to go
	about doing this.

	The idea is that, first, at flush_dcache_page() time, if
	page->mapping->i_mmap is an empty tree and ->i_mmap_nonlinear
	an empty list, you just mark the architecture private page
	flag bit.  Later, in update_mmu_cache(), a check is made of
	this flag bit, and if set the flush is done and the flag bit
	is cleared.

	IMPORTANT NOTE: It is often important, if you defer the flush,
			that the actual flush occurs on the same CPU
			that did the stores which made the page dirty.
			Again, see sparc64 for examples of how to deal
			with this; a sketch of the deferral scheme
			appears below.

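Putting the two halves of the deferral together, a sketch; the
mapping_unmapped() predicate stands in for the i_mmap/i_mmap_nonlinear
emptiness test described above, and __flush_dcache_page() for the
port's real flush primitive (see sparc64 for a production version):

	void flush_dcache_page(struct page *page)
	{
		/* No user mappings yet: mark the page and defer. */
		if (page->mapping && mapping_unmapped(page->mapping)) {
			set_bit(PG_arch_1, &page->flags);
			return;
		}
		__flush_dcache_page(page);
	}

	void update_mmu_cache(struct vm_area_struct *vma,
			      unsigned long address, pte_t pte)
	{
		struct page *page = pte_page(pte);

		/* A deferred flush is pending: perform it now.
		 * (Validity checks on the pte/page are omitted.)
		 */
		if (test_and_clear_bit(PG_arch_1, &page->flags))
			__flush_dcache_page(page);
	}
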
  void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
			 unsigned long user_vaddr,
			 void *dst, void *src, int len)
  void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
			   unsigned long user_vaddr,
			   void *dst, void *src, int len)

	When the kernel needs to copy arbitrary data in and out
	of arbitrary user pages (e.g. for ptrace()) it will use
	these two routines.

	Any necessary cache flushing or other coherency operations
	that need to occur should happen here.  If the processor's
	instruction cache does not snoop cpu stores, it is very
	likely that you will need to flush the instruction cache
	for copy_to_user_page(); a sketch follows.

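Here is a sketch for a port whose D-cache needs no aliasing work but
whose instruction cache does not snoop stores; flush_icache_range()
is the interface described next:

	void copy_to_user_page(struct vm_area_struct *vma,
			       struct page *page, unsigned long user_vaddr,
			       void *dst, void *src, int len)
	{
		memcpy(dst, src, len);

		/* New instructions may just have been written. */
		if (vma->vm_flags & VM_EXEC)
			flush_icache_range((unsigned long)dst,
					   (unsigned long)dst + len);
	}

	void copy_from_user_page(struct vm_area_struct *vma,
				 struct page *page, unsigned long user_vaddr,
				 void *dst, void *src, int len)
	{
		memcpy(dst, src, len);
	}
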
  void flush_icache_range(unsigned long start, unsigned long end)

	When the kernel stores into addresses that it will execute
	out of (e.g. when loading modules), this function is called.

	If the icache does not snoop stores then this routine will
	need to flush it, as in the illustrative caller below.

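An illustrative caller, in the spirit of the module loader: after
storing code into memory the kernel will execute, the range is
flushed before the first instruction fetch (code_buf, code_image and
code_len are hypothetical names):

	memcpy(code_buf, code_image, code_len);
	flush_icache_range((unsigned long)code_buf,
			   (unsigned long)code_buf + code_len);
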
  void flush_icache_page(struct vm_area_struct *vma, struct page *page)

	All the functionality of flush_icache_page can be implemented in
	flush_dcache_page and update_mmu_cache.  In 2.7 the hope is to
	remove this interface completely.