author		Tang Chen <tangchen@cn.fujitsu.com>	2013-11-12 18:07:57 -0500
committer	Linus Torvalds <torvalds@linux-foundation.org>	2013-11-12 22:09:08 -0500
commit		1402899e43fda490f08d2c47a7558931f8b9c60c (patch)
tree		4ed77ee8fb1383069cda18202b02c30861d27a58 /mm/memblock.c
parent		7aba842f08bb9e8485ff40067dc090ada4c7f575 (diff)
mm/memblock.c: factor out of top-down allocation
[Problem]
Currently, Linux cannot migrate pages used by the kernel because of the
kernel direct mapping: in kernel space, va = pa + PAGE_OFFSET. When the
pa is changed, we cannot simply update the page table while keeping the
va unmodified. So kernel pages are not migratable.
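For reference, this relation is what the kernel's __va()/__pa() helpers
implement; simplified from the generic definitions:

	#define __va(x) ((void *)((unsigned long)(x) + PAGE_OFFSET))	/* pa -> va */
	#define __pa(x) ((unsigned long)(x) - PAGE_OFFSET)		/* va -> pa */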
There are also other issues that make kernel pages non-migratable. For
example, the physical address may be cached somewhere and used later,
and it is not easy to update all such caches.
When doing memory hotplug in Linux, we first migrate all the pages of a
memory device elsewhere, and then remove the device. But if pages are
used by the kernel, they cannot be migrated. As a result, memory used by
the kernel cannot be hot-removed.
Modifying the kernel direct mapping mechanism is too difficult, and it
could degrade performance and make the kernel unstable. So we take the
following approach to memory hotplug.
[What we are doing]
In Linux, memory in a NUMA node is divided into several zones. One of
these zones is ZONE_MOVABLE, which the kernel will not use for its own
allocations.
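A rough sketch of the zone layout (abridged from enum zone_type in
include/linux/mmzone.h; the exact set of zones is config-dependent):

	enum zone_type {
		ZONE_DMA,	/* low memory for legacy DMA devices */
		ZONE_NORMAL,	/* regular memory the kernel allocates from */
		ZONE_MOVABLE,	/* only movable allocations live here */
		__MAX_NR_ZONES
	};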
In order to implement memory hotplug in Linux, we are going to place all
hotpluggable memory in ZONE_MOVABLE so that the kernel won't use it. To
do this, we need ACPI's help.
In ACPI, the SRAT (System Resource Affinity Table) contains the NUMA
info. The memory affinity entries in the SRAT record every memory range
in the system, along with flags specifying whether the range is
hotpluggable. (Please refer to ACPI spec 5.0, section 5.2.16.)
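For reference, the kernel's view of one such SRAT entry (abridged from
struct acpi_srat_mem_affinity in include/acpi/actbl1.h):

	struct acpi_srat_mem_affinity {
		struct acpi_subtable_header header;
		u32 proximity_domain;	/* NUMA node of the range */
		u16 reserved;
		u64 base_address;	/* start of the memory range */
		u64 length;		/* size of the memory range */
		u32 reserved1;
		u32 flags;		/* e.g. ACPI_SRAT_MEM_HOT_PLUGGABLE */
		u64 reserved2;
	};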
With the help of the SRAT, we need to do the following two things to
achieve our goal:
1. When doing memory hot-add, allow users to arrange hotpluggable memory
   as ZONE_MOVABLE.
   (This has been done by the MOVABLE_NODE functionality in Linux.)
2. When the system is booting, prevent the bootmem allocator from
   allocating hotpluggable memory for the kernel before memory
   initialization finishes.
Problem 2 is the key problem we are going to solve, but before solving
it, we need some preparation. Please see below.
[Preparation]
The bootloader has to load the kernel image into memory, and that memory
must not be hotpluggable; there is no way around this. So in a memory
hotplug system, we can assume that the node the kernel resides in is not
hotpluggable.
Before SRAT is parsed, we don't know which memory ranges are hotpluggable.
But memblock has already started to work. In the current kernel,
memblock allocates the following memory before SRAT is parsed:
setup_arch()
|->memblock_x86_fill() /* memblock is ready */
|......
|->early_reserve_e820_mpc_new() /* allocate memory under 1MB */
|->reserve_real_mode() /* allocate memory under 1MB */
|->init_mem_mapping() /* allocate page tables, about 2MB to map 1GB memory */
|->dma_contiguous_reserve() /* specified by user, should be low */
|->setup_log_buf() /* specified by user, several mega bytes */
|->relocate_initrd() /* could be large, but will be freed after boot, should reorder */
|->acpi_initrd_override() /* several mega bytes */
|->reserve_crashkernel() /* could be large, should reorder */
|......
|->initmem_init() /* Parse SRAT */
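Several of the allocations above are sized by the user through boot
parameters, for example (illustrative values):

	crashkernel=256M log_buf_len=16M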
According to Tejun's advice, before SRAT is parsed, we should try our
best to allocate memory near the kernel image. Since the whole node the
kernel resides in won't be hotpluggable, and a node in a modern server
typically has at least 16GB of memory, allocating several megabytes near
the kernel image won't cross into hotpluggable memory.
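A rough sketch of that idea, as a hypothetical standalone helper (the
real bottom-up logic is added to memblock later in this series):

	/* Start bottom-up searches at or above the end of the kernel
	 * image, so that early allocations stay on the kernel's
	 * (non-hotpluggable) node. */
	static phys_addr_t bottom_up_start(phys_addr_t start)
	{
		return max(start, (phys_addr_t)__pa_symbol(_end));
	}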
[About this patchset]
So this patchset is the preparation for problem 2 above. It does the
following:
1. Make memblock able to allocate memory bottom-up.
   1) Keep all the memblock APIs' prototypes unmodified.
   2) When the direction is bottom-up, keep the start address greater
      than the end of the kernel image.
2. Improve init_mem_mapping() to support allocating page tables in the
   bottom-up direction.
3. Introduce "movable_node" boot option to enable and disable this
functionality.
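With the full series applied, the behavior is then opted into from the
kernel command line, e.g.:

	movable_node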
This patch (of 6):
Create a new function, __memblock_find_range_top_down(), to factor the
top-down allocation logic out of memblock_find_in_range_node(). This is
preparation for the new bottom-up allocation mode that will be
introduced in a following patch.
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Toshi Kani <toshi.kani@hp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Cc: Thomas Renninger <trenn@suse.de>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Jiang Liu <jiang.liu@huawei.com>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Taku Izumi <izumi.taku@jp.fujitsu.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/memblock.c')
-rw-r--r--	mm/memblock.c	47
1 files changed, 34 insertions(+), 13 deletions(-)
diff --git a/mm/memblock.c b/mm/memblock.c
index 0ac412a0a7ee..accff1087137 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -83,33 +83,25 @@ static long __init_memblock memblock_overlaps_region(struct memblock_type *type,
 }
 
 /**
- * memblock_find_in_range_node - find free area in given range and node
+ * __memblock_find_range_top_down - find free area utility, in top-down
  * @start: start of candidate range
  * @end: end of candidate range, can be %MEMBLOCK_ALLOC_{ANYWHERE|ACCESSIBLE}
  * @size: size of free area to find
  * @align: alignment of free area to find
  * @nid: nid of the free area to find, %MAX_NUMNODES for any node
  *
- * Find @size free area aligned to @align in the specified range and node.
+ * Utility called from memblock_find_in_range_node(), find free area top-down.
  *
  * RETURNS:
  * Found address on success, %0 on failure.
  */
-phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t start,
-					phys_addr_t end, phys_addr_t size,
-					phys_addr_t align, int nid)
+static phys_addr_t __init_memblock
+__memblock_find_range_top_down(phys_addr_t start, phys_addr_t end,
+			       phys_addr_t size, phys_addr_t align, int nid)
 {
 	phys_addr_t this_start, this_end, cand;
 	u64 i;
 
-	/* pump up @end */
-	if (end == MEMBLOCK_ALLOC_ACCESSIBLE)
-		end = memblock.current_limit;
-
-	/* avoid allocating the first page */
-	start = max_t(phys_addr_t, start, PAGE_SIZE);
-	end = max(start, end);
-
 	for_each_free_mem_range_reverse(i, nid, &this_start, &this_end, NULL) {
 		this_start = clamp(this_start, start, end);
 		this_end = clamp(this_end, start, end);
@@ -121,10 +113,39 @@ phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t start,
 		if (cand >= this_start)
 			return cand;
 	}
+
 	return 0;
 }
 
 /**
+ * memblock_find_in_range_node - find free area in given range and node
+ * @start: start of candidate range
+ * @end: end of candidate range, can be %MEMBLOCK_ALLOC_{ANYWHERE|ACCESSIBLE}
+ * @size: size of free area to find
+ * @align: alignment of free area to find
+ * @nid: nid of the free area to find, %MAX_NUMNODES for any node
+ *
+ * Find @size free area aligned to @align in the specified range and node.
+ *
+ * RETURNS:
+ * Found address on success, %0 on failure.
+ */
+phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t start,
+					phys_addr_t end, phys_addr_t size,
+					phys_addr_t align, int nid)
+{
+	/* pump up @end */
+	if (end == MEMBLOCK_ALLOC_ACCESSIBLE)
+		end = memblock.current_limit;
+
+	/* avoid allocating the first page */
+	start = max_t(phys_addr_t, start, PAGE_SIZE);
+	end = max(start, end);
+
+	return __memblock_find_range_top_down(start, end, size, align, nid);
+}
+
+/**
  * memblock_find_in_range - find free area in given range
  * @start: start of candidate range
  * @end: end of candidate range, can be %MEMBLOCK_ALLOC_{ANYWHERE|ACCESSIBLE}