author    Linus Torvalds <torvalds@linux-foundation.org>  2017-05-02 02:54:56 -0400
committer Linus Torvalds <torvalds@linux-foundation.org>  2017-05-02 02:54:56 -0400
commit    d3b5d35290d729a2518af00feca867385a1b08fa (patch)
tree      7b56c0863d59bc57f7c7dcf5d5665c56b05f1d1b /Documentation/x86
parent    aa2a4b6569d5b10491b606a86e574dff3852597a (diff)
parent    71389703839ebe9cb426c72d5f0bd549592e583c (diff)
Merge branch 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 mm updates from Ingo Molnar:
 "The main x86 MM changes in this cycle were:

   - continued native kernel PCID support preparation patches to the TLB
     flushing code (Andy Lutomirski)

   - various fixes related to 32-bit compat syscall returning address
     over 4Gb in applications, launched from 64-bit binaries - motivated
     by C/R frameworks such as Virtuozzo (Dmitry Safonov)

   - continued Intel 5-level paging enablement: in particular the
     conversion of x86 GUP to the generic GUP code (Kirill A. Shutemov)

   - x86/mpx ABI corner case fixes/enhancements (Joerg Roedel)

   - ... plus misc updates, fixes and cleanups"

* 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (62 commits)
  mm, zone_device: Replace {get, put}_zone_device_page() with a single reference to fix pmem crash
  x86/mm: Fix flush_tlb_page() on Xen
  x86/mm: Make flush_tlb_mm_range() more predictable
  x86/mm: Remove flush_tlb() and flush_tlb_current_task()
  x86/vm86/32: Switch to flush_tlb_mm_range() in mark_screen_rdonly()
  x86/mm/64: Fix crash in remove_pagetable()
  Revert "x86/mm/gup: Switch GUP to the generic get_user_page_fast() implementation"
  x86/boot/e820: Remove a redundant self assignment
  x86/mm: Fix dump pagetables for 4 levels of page tables
  x86/mpx, selftests: Only check bounds-vs-shadow when we keep shadow
  x86/mpx: Correctly report do_mpx_bt_fault() failures to user-space
  Revert "x86/mm/numa: Remove numa_nodemask_from_meminfo()"
  x86/espfix: Add support for 5-level paging
  x86/kasan: Extend KASAN to support 5-level paging
  x86/mm: Add basic defines/helpers for CONFIG_X86_5LEVEL=y
  x86/paravirt: Add 5-level support to the paravirt code
  x86/mm: Define virtual memory map for 5-level paging
  x86/asm: Remove __VIRTUAL_MASK_SHIFT==47 assert
  x86/boot: Detect 5-level paging support
  x86/mm/numa: Remove numa_nodemask_from_meminfo()
  ...
Diffstat (limited to 'Documentation/x86')
-rw-r--r--  Documentation/x86/x86_64/mm.txt | 36
1 file changed, 33 insertions(+), 3 deletions(-)
diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt
index 5724092db811..b0798e281aa6 100644
--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -4,7 +4,7 @@
 Virtual memory map with 4 level page tables:
 
 0000000000000000 - 00007fffffffffff (=47 bits) user space, different per mm
-hole caused by [48:63] sign extension
+hole caused by [47:63] sign extension
 ffff800000000000 - ffff87ffffffffff (=43 bits) guard hole, reserved for hypervisor
 ffff880000000000 - ffffc7ffffffffff (=64 TB) direct mapping of all phys. memory
 ffffc80000000000 - ffffc8ffffffffff (=40 bits) hole
@@ -19,16 +19,43 @@ ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
 ffffffef00000000 - fffffffeffffffff (=64 GB) EFI region mapping space
 ... unused hole ...
 ffffffff80000000 - ffffffff9fffffff (=512 MB) kernel text mapping, from phys 0
+ffffffffa0000000 - ffffffffff5fffff (=1526 MB) module mapping space (variable)
+ffffffffff600000 - ffffffffffdfffff (=8 MB) vsyscalls
+ffffffffffe00000 - ffffffffffffffff (=2 MB) unused hole
+
+Virtual memory map with 5 level page tables:
+
+0000000000000000 - 00ffffffffffffff (=56 bits) user space, different per mm
+hole caused by [56:63] sign extension
+ff00000000000000 - ff0fffffffffffff (=52 bits) guard hole, reserved for hypervisor
+ff10000000000000 - ff8fffffffffffff (=55 bits) direct mapping of all phys. memory
+ff90000000000000 - ff91ffffffffffff (=49 bits) hole
+ff92000000000000 - ffd1ffffffffffff (=54 bits) vmalloc/ioremap space
+ffd2000000000000 - ffd3ffffffffffff (=49 bits) hole
+ffd4000000000000 - ffd5ffffffffffff (=49 bits) virtual memory map (512TB)
+... unused hole ...
+ffd8000000000000 - fff7ffffffffffff (=53 bits) kasan shadow memory (8PB)
+... unused hole ...
+ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
+... unused hole ...
+ffffffef00000000 - fffffffeffffffff (=64 GB) EFI region mapping space
+... unused hole ...
+ffffffff80000000 - ffffffff9fffffff (=512 MB) kernel text mapping, from phys 0
 ffffffffa0000000 - ffffffffff5fffff (=1526 MB) module mapping space
 ffffffffff600000 - ffffffffffdfffff (=8 MB) vsyscalls
 ffffffffffe00000 - ffffffffffffffff (=2 MB) unused hole
 
+Architecture defines a 64-bit virtual address. Implementations can support
+less. Currently supported are 48- and 57-bit virtual addresses. Bits 63
+through to the most-significant implemented bit are set to either all ones
+or all zero. This causes hole between user space and kernel addresses.
+
 The direct mapping covers all memory in the system up to the highest
 memory address (this means in some cases it can also include PCI memory
 holes).
 
-vmalloc space is lazily synchronized into the different PML4 pages of
-the processes using the page fault handler, with init_level4_pgt as
+vmalloc space is lazily synchronized into the different PML4/PML5 pages of
+the processes using the page fault handler, with init_top_pgt as
 reference.
 
 Current X86-64 implementations support up to 46 bits of address space (64 TB),
@@ -39,6 +66,9 @@ memory window (this size is arbitrary, it can be raised later if needed).
 The mappings are not part of any other kernel PGD and are only available
 during EFI runtime calls.
 
+The module mapping space size changes based on the CONFIG requirements for the
+following fixmap section.
+
 Note that if CONFIG_RANDOMIZE_MEMORY is enabled, the direct mapping of all
 physical memory, vmalloc/ioremap space and virtual memory map are randomized.
 Their order is preserved but their base will be offset early at boot time.