author	Zhang, Yanmin <yanmin_zhang@linux.intel.com>	2006-03-22 03:08:50 -0500
committer	Linus Torvalds <torvalds@g5.osdl.org>	2006-03-22 10:54:03 -0500
commit	8f860591ffb29738cf5539b6fbf27f50dcdeb380
tree	4265e45c4a79d86a16cd5175a836e8c531be8117 /mm/hugetlb.c
parent	aed75ff3caafce404d9be7f0c088716375be5279
[PATCH] Enable mprotect on huge pages
2.6.16-rc3 uses hugetlb on-demand paging, but it doesn't support hugetlb
mprotect.
From: David Gibson <david@gibson.dropbear.id.au>
Remove a test from the mprotect() path which checks that the mprotect()ed
range on a hugepage VMA is hugepage aligned (yes, really, the sense of
is_aligned_hugepage_range() is the opposite of what you'd guess :-/).
In fact, we don't need this test. If the given addresses match the
beginning/end of a hugepage VMA they must already be suitably aligned. If
they don't, then mprotect_fixup() will attempt to split the VMA. The very
first test in split_vma() will check for a badly aligned address on a
hugepage VMA and return -EINVAL if necessary.
From: "Chen, Kenneth W" <kenneth.w.chen@intel.com>
On i386 and x86-64, the pte flag _PAGE_PSE collides with _PAGE_PROTNONE. The
identity of a hugetlb pte is lost when changing page protection via mprotect,
and a page fault that occurs later will trigger a bug check in
huge_pte_alloc().  The fix is to always make the new pte a hugetlb pte, and
also to clean up legacy code where _PAGE_PRESENT was forced on in the
pre-demand-paging days.
Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'mm/hugetlb.c')
 mm/hugetlb.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+), 0 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 20117a4b8ab6..783098f6cf8e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -565,3 +565,32 @@ int follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	return i;
 }
+
+void hugetlb_change_protection(struct vm_area_struct *vma,
+		unsigned long address, unsigned long end, pgprot_t newprot)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long start = address;
+	pte_t *ptep;
+	pte_t pte;
+
+	BUG_ON(address >= end);
+	flush_cache_range(vma, address, end);
+
+	spin_lock(&mm->page_table_lock);
+	for (; address < end; address += HPAGE_SIZE) {
+		ptep = huge_pte_offset(mm, address);
+		if (!ptep)
+			continue;
+		if (!pte_none(*ptep)) {
+			pte = huge_ptep_get_and_clear(mm, address, ptep);
+			pte = pte_mkhuge(pte_modify(pte, newprot));
+			set_huge_pte_at(mm, address, ptep, pte);
+			lazy_mmu_prot_update(pte);
+		}
+	}
+	spin_unlock(&mm->page_table_lock);
+
+	flush_tlb_range(vma, start, end);
+}