From 31eedd823c1bf3650c450346a0d0c39431034eb9 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner
Date: Fri, 15 Feb 2008 17:29:12 +0100
Subject: x86: zap invalid and unused pmds in early boot

The early boot code maps KERNEL_TEXT_SIZE (currently 40MB) starting
from __START_KERNEL_map. The kernel itself only needs _text to _end
mapped in the high alias.

On relocatable kernels the ASM setup code adjusts the compile time
created high mappings to the relocation. This creates invalid pmd
entries for negative offsets:

    0xffffffff80000000 -> pmd entry: ffffffffff2001e3

It points outside of the physical address space and is marked present.
This starts at the virtual address __START_KERNEL_map and goes up to
the point where the first valid physical address (0x0) is mapped.

Zap the mappings before _text and after _end right away in early
boot. This also removes the invalid entries.

Furthermore it simplifies the range check for high aliases.

Signed-off-by: Thomas Gleixner
Acked-by: H. Peter Anvin
Signed-off-by: Ingo Molnar
---
 include/asm-x86/pgtable_64.h | 1 +
 1 file changed, 1 insertion(+)

(limited to 'include')

diff --git a/include/asm-x86/pgtable_64.h b/include/asm-x86/pgtable_64.h
index bd4740a60f29..7fd5e0e2361e 100644
--- a/include/asm-x86/pgtable_64.h
+++ b/include/asm-x86/pgtable_64.h
@@ -246,6 +246,7 @@ static inline int pud_large(pud_t pte)
 #define __swp_entry_to_pte(x)		((pte_t) { .pte = (x).val })
 
 extern int kern_addr_valid(unsigned long addr);
+extern void cleanup_highmap(void);
 
 #define io_remap_pfn_range(vma, vaddr, pfn, size, prot)	\
 	remap_pfn_range(vma, vaddr, pfn, size, prot)
-- 
cgit v1.2.2
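
The hunk shown above (limited to 'include') only adds the declaration of
cleanup_highmap(); the zap loop itself lives in the arch side of the
patch, which is not part of this diff. Below is a minimal sketch of what
that loop could look like, assuming the high kernel alias is covered by
the pmd page level2_kernel_pgt as in the x86_64 head code of that era;
treat it as an illustration of the changelog, not the committed code.

/*
 * Sketch: walk the pmd page that backs the __START_KERNEL_map alias and
 * clear every present entry that maps addresses outside _text .. _end.
 * This wipes both the unused tail of the 40MB mapping and the invalid
 * negative-offset entries created by relocation.
 */
void __init cleanup_highmap(void)
{
	unsigned long vaddr = __START_KERNEL_map;
	unsigned long end = round_up((unsigned long)_end, PMD_SIZE) - 1;
	pmd_t *pmd = level2_kernel_pgt;		/* assumed pmd page for the high alias */
	pmd_t *last_pmd = pmd + PTRS_PER_PMD;

	for (; pmd < last_pmd; pmd++, vaddr += PMD_SIZE) {
		if (!pmd_present(*pmd))
			continue;
		if (vaddr < (unsigned long) _text || vaddr > end)
			set_pmd(pmd, __pmd(0));
	}
}

With the alias trimmed to _text .. _end, code that needs to decide
whether an address belongs to the high mapping can check against that
single range instead of the full KERNEL_TEXT_SIZE window, which is the
simplification the changelog refers to.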