path: root/Documentation/cachetlb.txt
author    Benjamin Herrenschmidt <benh@kernel.crashing.org>  2007-10-19 02:39:14 -0400
committer Linus Torvalds <torvalds@woody.linux-foundation.org>  2007-10-19 14:53:34 -0400
commit    1c7037db50ebecf3d5cfbf7082daa5d97d900fef (patch)
tree      1843c417160b79c3f79a54d546ddcf5ccdb1b44b /Documentation/cachetlb.txt
parent    22124c9999f00340b062fff740db30187bf18454 (diff)
remove unused flush_tlb_pgtables
Nobody uses flush_tlb_pgtables anymore, this patch removes all
remaining traces of it from all archs.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'Documentation/cachetlb.txt')
-rw-r--r--  Documentation/cachetlb.txt | 27 +--------------------------
1 file changed, 2 insertions(+), 25 deletions(-)
diff --git a/Documentation/cachetlb.txt b/Documentation/cachetlb.txt
index 552cabac0608..da42ab414c48 100644
--- a/Documentation/cachetlb.txt
+++ b/Documentation/cachetlb.txt
@@ -87,30 +87,7 @@ changes occur:
 
 	This is used primarily during fault processing.
 
-5) void flush_tlb_pgtables(struct mm_struct *mm,
-			   unsigned long start, unsigned long end)
-
-	The software page tables for address space 'mm' for virtual
-	addresses in the range 'start' to 'end-1' are being torn down.
-
-	Some platforms cache the lowest level of the software page tables
-	in a linear virtually mapped array, to make TLB miss processing
-	more efficient.  On such platforms, since the TLB is caching the
-	software page table structure, it needs to be flushed when parts
-	of the software page table tree are unlinked/freed.
-
-	Sparc64 is one example of a platform which does this.
-
-	Usually, when munmap()'ing an area of user virtual address
-	space, the kernel leaves the page table parts around and just
-	marks the individual pte's as invalid.  However, if very large
-	portions of the address space are unmapped, the kernel frees up
-	those portions of the software page tables to prevent potential
-	excessive kernel memory usage caused by erratic mmap/mmunmap
-	sequences.  It is at these times that flush_tlb_pgtables will
-	be invoked.
-
-6) void update_mmu_cache(struct vm_area_struct *vma,
+5) void update_mmu_cache(struct vm_area_struct *vma,
 	unsigned long address, pte_t pte)
 
 	At the end of every page fault, this routine is invoked to
@@ -123,7 +100,7 @@ changes occur:
 	translations for software managed TLB configurations.
 	The sparc64 port currently does this.
 
-7) void tlb_migrate_finish(struct mm_struct *mm)
+6) void tlb_migrate_finish(struct mm_struct *mm)
 
 	This interface is called at the end of an explicit
 	process migration.  This interface provides a hook