author	Hugh Dickins <hugh@veritas.com>	2005-10-29 21:16:02 -0400
committer	Linus Torvalds <torvalds@g5.osdl.org>	2005-10-30 00:40:37 -0400
commit	4d6ddfa9242bc3d27fb0f7248f6fdee0299c731f (patch)
tree	da5b753df64e7163a35487005e50a3b90b0b0b9b /include/asm-generic
parent	15a23ffa2fc91cebdac44d4aee994f59d5c28dc0 (diff)
[PATCH] mm: tlb_is_full_mm was obscure
tlb_is_full_mm? What does that mean? The TLB is full? No, it means that the
mm's last user has gone and the whole mm is being torn down. And it's an
inline function because sparc64 uses a different (slightly better)
"tlb_frozen" name for the flag others call "fullmm".
And now the ptep_get_and_clear_full macro used in zap_pte_range refers
directly to tlb->fullmm, which would be wrong for sparc64. Rather than
correct that, I'd prefer to scrap tlb_is_full_mm altogether, and change
sparc64 to just use the same poor name as everyone else - is that okay?
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'include/asm-generic')
-rw-r--r--	include/asm-generic/tlb.h | 6 ------
1 files changed, 0 insertions, 6 deletions
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index c8232622c8d9..5d352a70f004 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -103,12 +103,6 @@ tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
 	put_cpu_var(mmu_gathers);
 }
 
-static inline unsigned int
-tlb_is_full_mm(struct mmu_gather *tlb)
-{
-	return tlb->fullmm;
-}
-
 /* tlb_remove_page
  * Must perform the equivalent to __free_pte(pte_get_and_clear(ptep)), while
  * handling the additional races in SMP caused by other CPUs caching valid