author		Zhang, Yanmin <yanmin_zhang@linux.intel.com>	2006-03-22 03:08:50 -0500
committer	Linus Torvalds <torvalds@g5.osdl.org>		2006-03-22 10:54:03 -0500
commit		8f860591ffb29738cf5539b6fbf27f50dcdeb380
tree		4265e45c4a79d86a16cd5175a836e8c531be8117	/include/linux
parent		aed75ff3caafce404d9be7f0c088716375be5279
[PATCH] Enable mprotect on huge pages
2.6.16-rc3 uses hugetlb on-demand paging, but it doesn't support hugetlb
mprotect.
From: David Gibson <david@gibson.dropbear.id.au>
Remove a test from the mprotect() path which checks that the mprotect()ed
range on a hugepage VMA is hugepage aligned (yes, really, the sense of
is_aligned_hugepage_range() is the opposite of what you'd guess :-/).
In fact, we don't need this test. If the given addresses match the
beginning/end of a hugepage VMA they must already be suitably aligned. If
they don't, then mprotect_fixup() will attempt to split the VMA. The very
first test in split_vma() will check for a badly aligned address on a
hugepage VMA and return -EINVAL if necessary.
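For reference, the test in question is the first check in split_vma(); a
minimal sketch of mm/mmap.c around 2.6.16, abridged:

	int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
		      unsigned long addr, int new_below)
	{
		/* A split point inside a hugetlb VMA must sit on a
		 * hugepage boundary; a misaligned mprotect() range on a
		 * hugepage VMA therefore fails here with -EINVAL. */
		if (is_vm_hugetlb_page(vma) && (addr & ~HPAGE_MASK))
			return -EINVAL;
		...
	}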
From: "Chen, Kenneth W" <kenneth.w.chen@intel.com>
On i386 and x86-64, the pte flag _PAGE_PSE collides with _PAGE_PROTNONE. The
identity of a hugetlb pte is lost when changing page protection via mprotect,
and a page fault that occurs later will trigger a bug check in
huge_pte_alloc().

The fix is to always make the new pte a hugetlb pte, and to clean up legacy
code where _PAGE_PRESENT was forced on in the pre-demand-faulting days.
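Concretely, the full patch adds hugetlb_change_protection() in mm/hugetlb.c
(the diffstat below is limited to include/linux, so only the declaration
appears here). A lightly abridged sketch of that function; the key step is
pte_mkhuge(), since on i386 _PAGE_PSE and _PAGE_PROTNONE occupy the same pte
bit and pte_modify() alone cannot preserve the huge identity:

	void hugetlb_change_protection(struct vm_area_struct *vma,
			unsigned long address, unsigned long end,
			pgprot_t newprot)
	{
		struct mm_struct *mm = vma->vm_mm;
		unsigned long start = address;
		pte_t *ptep;
		pte_t pte;

		BUG_ON(address >= end);
		flush_cache_range(vma, address, end);

		spin_lock(&mm->page_table_lock);
		for (; address < end; address += HPAGE_SIZE) {
			ptep = huge_pte_offset(mm, address);
			if (!ptep)
				continue;
			if (!pte_none(*ptep)) {
				pte = huge_ptep_get_and_clear(mm, address, ptep);
				/* pte_modify() rebuilds the protection bits
				 * and can drop _PAGE_PSE (which collides with
				 * _PAGE_PROTNONE); pte_mkhuge() re-asserts
				 * the hugetlb identity of the pte. */
				pte = pte_mkhuge(pte_modify(pte, newprot));
				set_huge_pte_at(mm, address, ptep, pte);
			}
		}
		spin_unlock(&mm->page_table_lock);

		flush_tlb_range(vma, start, end);
	}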
Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'include/linux')
-rw-r--r--	include/linux/hugetlb.h	4
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 68d82ad6b17c..fa83836b63d2 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -41,6 +41,8 @@ struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 			pmd_t *pmd, int write);
 int is_aligned_hugepage_range(unsigned long addr, unsigned long len);
 int pmd_huge(pmd_t pmd);
+void hugetlb_change_protection(struct vm_area_struct *vma,
+		unsigned long address, unsigned long end, pgprot_t newprot);
 
 #ifndef ARCH_HAS_HUGEPAGE_ONLY_RANGE
 #define is_hugepage_only_range(mm, addr, len)	0
@@ -101,6 +103,8 @@ static inline unsigned long hugetlb_total_pages(void)
 #define free_huge_page(p)			({ (void)(p); BUG(); })
 #define hugetlb_fault(mm, vma, addr, write)	({ BUG(); 0; })
 
+#define hugetlb_change_protection(vma, address, end, newprot)
+
 #ifndef HPAGE_MASK
 #define HPAGE_MASK	PAGE_MASK	/* Keep the compiler happy */
 #define HPAGE_SIZE	PAGE_SIZE
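On the caller side (also outside this diffstat, in mm/mprotect.c),
mprotect_fixup() dispatches hugetlb VMAs to the new helper; roughly:

	if (is_vm_hugetlb_page(vma))
		hugetlb_change_protection(vma, start, end, vma->vm_page_prot);
	else
		change_protection(vma, start, end, vma->vm_page_prot);

The empty hugetlb_change_protection() macro added to the
!CONFIG_HUGETLB_PAGE half of the header keeps this call site compiling (the
call collapses to an empty statement) when hugetlb support is configured out.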