author		Paul Mackerras <paulus@samba.org>	2006-06-14 20:45:18 -0400
committer	Paul Mackerras <paulus@samba.org>	2006-06-14 20:45:18 -0400
commit		bf72aeba2ffef599d1d386425c9e46b82be657cd (patch)
tree		ead8e5111dbcfa22e156999d1bb8a96e50f06fef /arch/powerpc/mm/tlb_64.c
parent		31925323b1b51bb65db729e029472a8b1f635b7d (diff)
powerpc: Use 64k pages without needing cache-inhibited large pages
Some POWER5+ machines can do 64k hardware pages for normal memory but
not for cache-inhibited pages. This patch lets us use 64k hardware
pages for most user processes on such machines (assuming the kernel
has been configured with CONFIG_PPC_64K_PAGES=y). User processes
start out using 64k pages and get switched to 4k pages if they use any
non-cacheable mappings.
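The switch is driven from the hash fault path rather than at mmap()
time. A minimal sketch of the user-side check, using identifiers from
the kernel of this era (mmu_ci_restrictions, demote_segment_4k(),
user_region); treat the exact shape as illustrative, not the literal
patch:

	/* Sketch: on a hash fault, notice that a 64k process has
	 * touched a cache-inhibited PTE and demote this address
	 * space to 4k hardware pages before hashing the page in.
	 */
	if (mmu_ci_restrictions && psize == MMU_PAGE_64K &&
	    (pte_val(*ptep) & _PAGE_NO_CACHE)) {
		if (user_region) {
			demote_segment_4k(mm, ea);	/* 64k -> 4k */
			psize = MMU_PAGE_4K;
		}
	}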
With this, we use 64k pages for the vmalloc region and 4k pages for
the imalloc region. If anything creates a non-cacheable mapping in
the vmalloc region, the vmalloc region will get switched to 4k pages.
I don't know of any driver other than the DRM that would do this,
though, and these machines don't have AGP.
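The kernel side works the same way, except the page size is tracked
per region rather than per process. A hedged sketch (mmu_vmalloc_psize
is the per-region page-size variable; the surrounding control flow is
condensed):

	/* Sketch: a non-cacheable mapping below VMALLOC_END demotes
	 * the whole vmalloc region to 4k pages.  The imalloc region
	 * uses 4k pages unconditionally, so it needs no check.
	 */
	if (mmu_ci_restrictions && psize == MMU_PAGE_64K &&
	    (pte_val(*ptep) & _PAGE_NO_CACHE) && ea < VMALLOC_END) {
		printk(KERN_ALERT "Reducing vmalloc segment to 4kB "
		       "pages because of non-cacheable mapping\n");
		psize = mmu_vmalloc_psize = MMU_PAGE_4K;
	}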
When a region gets switched from 64k pages to 4k pages, we do not have
to clear out all the 64k HPTEs from the hash table immediately. We
use the _PAGE_COMBO bit in the Linux PTE to indicate whether the page
was hashed in as a 64k page or a set of 4k pages. If hash_page is
trying to insert a 4k page for a Linux PTE and it sees that it has
already been inserted as a 64k page, it first invalidates the 64k HPTE
before inserting the 4k HPTE. The hash invalidation routines also use
the _PAGE_COMBO bit, to determine whether to look for a 64k HPTE or a
set of 4k HPTEs to remove. With those two changes, we can tolerate a
mix of 4k and 64k HPTEs in the hash table, and they will all get
removed when the address space is torn down.
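In code terms, the insertion side of that tolerance looks roughly like
the sketch below. The flag names match this period of the kernel, but
the fragment is condensed and the flush_hash_page() call is an
assumption about the exact eviction interface, not a quote from the
patch:

	/* Sketch: inserting 4k HPTEs for a PTE whose _PAGE_COMBO bit
	 * is clear means the PTE was last hashed as one 64k page, so
	 * evict the stale 64k HPTE first, then mark the PTE as a set
	 * of 4k pieces.
	 */
	if ((old_pte & _PAGE_HASHPTE) && !(old_pte & _PAGE_COMBO))
		flush_hash_page(va, old_pte, MMU_PAGE_64K, local);
	new_pte = old_pte | _PAGE_COMBO;	/* now hashed as 4k */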
Signed-off-by: Paul Mackerras <paulus@samba.org>
Diffstat (limited to 'arch/powerpc/mm/tlb_64.c')
-rw-r--r--	arch/powerpc/mm/tlb_64.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/mm/tlb_64.c b/arch/powerpc/mm/tlb_64.c
index f734b11566c2..e7449b068c82 100644
--- a/arch/powerpc/mm/tlb_64.c
+++ b/arch/powerpc/mm/tlb_64.c
@@ -131,7 +131,7 @@ void hpte_update(struct mm_struct *mm, unsigned long addr,
 {
 	struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch);
 	unsigned long vsid;
-	unsigned int psize = mmu_virtual_psize;
+	unsigned int psize;
 	int i;
 
 	i = batch->index;
@@ -148,7 +148,8 @@ void hpte_update(struct mm_struct *mm, unsigned long addr,
 #else
 		BUG();
 #endif
-	}
+	} else
+		psize = pte_pagesize_index(pte);
 
 	/*
 	 * This can happen when we are in the middle of a TLB batch and
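The change itself is small: hpte_update() queues PTEs into a per-CPU
flush batch, and it used to assume the compile-time mmu_virtual_psize
for every entry. Now the page size is derived per PTE from the
_PAGE_COMBO bit via pte_pagesize_index(). In the 64k-page
configuration that macro reduces to a _PAGE_COMBO test; roughly:

	/* Sketch of pte_pagesize_index() under CONFIG_PPC_64K_PAGES:
	 * a PTE hashed as 4k pieces (_PAGE_COMBO set) flushes as 4k,
	 * otherwise as a single 64k page.  The 4k configuration just
	 * returns MMU_PAGE_4K unconditionally.
	 */
	#define pte_pagesize_index(pte)	\
		(((pte) & _PAGE_COMBO) ? MMU_PAGE_4K : MMU_PAGE_64K)

This is what lets a single flush batch carry a mix of 4k and 64k
entries for the same address space.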