author    Hugh Dickins <hugh@veritas.com>  2005-04-19 16:29:16 -0400
committer Linus Torvalds <torvalds@ppc970.osdl.org.(none)>  2005-04-19 16:29:16 -0400
commit    3bf5ee95648c694bac4d13529563c230cd4fe5f2 (patch)
tree      9430e6e4f4c3d586ecb7375cd780fd17694888c7 /arch/ppc64/mm
parent    ee39b37b23da0b6ec53a8ebe90ff41c016f8ae27 (diff)
[PATCH] freepgt: hugetlb_free_pgd_range
ia64 and ppc64 had hugetlb_free_pgtables functions which were no longer being
called, and it wasn't obvious what to do about them.
The ppc64 case turns out to be easy: the associated page tables are noted
elsewhere and freed later, so it is safe either to skip its hugetlb areas or
to go through the motions of freeing nothing. Since ia64 does need a special
case, restore to ppc64 the special case of skipping them.
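
For ppc64, the restored special case amounts to making the new
hugetlb_free_pgd_range hook a no-op. A minimal sketch of that arrangement,
assuming the hook is a macro in the ppc64 headers (that hunk lies outside
the arch/ppc64/mm diffstat shown below):

    /* sketch: ppc64 notes its hugepage page tables elsewhere and frees
     * them in destroy_context(), so freeing the pgd range for a hugetlb
     * area is safely skipped here.
     */
    #define hugetlb_free_pgd_range(tlb, addr, end, floor, ceiling) \
            do { } while (0)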
The ia64 hugetlb case has been broken since pgd_addr_end went in, though it
probably appeared to work okay if you just had one such area; in fact it's
been broken much longer if you consider a long munmap spanning from another
region into the hugetlb region.
In the ia64 hugetlb region, more virtual address bits are available than in
the other regions, yet the page tables are structured the same way: the page
at the bottom is simply larger. Here we need to scale down each addr before
passing it to the standard free_pgd_range. I was about to write a
hugely_scaled_down macro, but found that htlbpage_to_page already exists for
just this purpose. Also fixed an off-by-one in ia64's is_hugepage_only_range.
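
A sketch of the resulting ia64 hugetlb_free_pgd_range: addr and end are
scaled down with htlbpage_to_page before handing off to free_pgd_range.
(Scaling floor and ceiling only when they also fall inside the hugetlb
region is an assumption here, and the ia64 hunk lies outside the diffstat
shown below.)

    void hugetlb_free_pgd_range(struct mmu_gather **tlb,
                    unsigned long addr, unsigned long end,
                    unsigned long floor, unsigned long ceiling)
    {
            /* scale each address down by HPAGE_SIZE/PAGE_SIZE so that
             * the standard free_pgd_range frees the right page tables */
            addr = htlbpage_to_page(addr);
            end = htlbpage_to_page(end);
            /* assumption: floor and ceiling are scaled only if they
             * too lie in the hugetlb-only region */
            if (is_hugepage_only_range((*tlb)->mm, floor, HPAGE_SIZE))
                    floor = htlbpage_to_page(floor);
            if (is_hugepage_only_range((*tlb)->mm, ceiling, HPAGE_SIZE))
                    ceiling = htlbpage_to_page(ceiling);

            free_pgd_range(tlb, addr, end, floor, ceiling);
    }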
Uninline free_pgd_range to make it available to ia64. Make sure the
vma-gathering loop in free_pgtables cannot join a hugepage_only_range to any
other (safe to join huge areas? probably, but don't bother).
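
The loop change is of roughly this shape (a sketch of free_pgtables in
mm/memory.c, which lies outside the diffstat shown below): a hugepage-only
vma is handed to the per-arch hook on its own, and the gathering step
refuses to pull one in.

    while (vma) {
            struct vm_area_struct *next = vma->vm_next;
            unsigned long addr = vma->vm_start;

            if (is_hugepage_only_range(vma->vm_mm, addr, HPAGE_SIZE)) {
                    /* hand a hugepage-only range to the arch by itself */
                    hugetlb_free_pgd_range(tlb, addr, vma->vm_end,
                            floor, next ? next->vm_start : ceiling);
            } else {
                    /* gather nearby vmas into one call down, but never
                     * across into a hugepage-only range */
                    while (next && next->vm_start <= vma->vm_end + PMD_SIZE &&
                           !is_hugepage_only_range(vma->vm_mm, next->vm_start,
                                                   HPAGE_SIZE)) {
                            vma = next;
                            next = vma->vm_next;
                    }
                    free_pgd_range(*tlb, addr, vma->vm_end,
                            floor, next ? next->vm_start : ceiling);
            }
            vma = next;
    }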
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'arch/ppc64/mm')
-rw-r--r--  arch/ppc64/mm/hugetlbpage.c  10
1 file changed, 0 insertions, 10 deletions
diff --git a/arch/ppc64/mm/hugetlbpage.c b/arch/ppc64/mm/hugetlbpage.c
index c62ddaff0720..8665bb57e42b 100644
--- a/arch/ppc64/mm/hugetlbpage.c
+++ b/arch/ppc64/mm/hugetlbpage.c
@@ -430,16 +430,6 @@ void unmap_hugepage_range(struct vm_area_struct *vma,
 	flush_tlb_pending();
 }
 
-void hugetlb_free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *prev,
-			   unsigned long start, unsigned long end)
-{
-	/* Because the huge pgtables are only 2 level, they can take
-	 * at most around 4M, much less than one hugepage which the
-	 * process is presumably entitled to use. So we don't bother
-	 * freeing up the pagetables on unmap, and wait until
-	 * destroy_context() to clean up the lot. */
-}
-
 int hugetlb_prefault(struct address_space *mapping, struct vm_area_struct *vma)
 {
 	struct mm_struct *mm = current->mm;