author	Jiang Liu <liuj97@gmail.com>	2012-12-12 16:52:12 -0500
committer	Linus Torvalds <torvalds@linux-foundation.org>	2012-12-12 20:38:34 -0500
commit	9feedc9d831e18ae6d0d15aa562e5e46ba53647b (patch)
tree	cb26ff54b0f02c4905772288b27f99b8b384ad6d /mm/bootmem.c
parent	c2d23f919bafcbc2259f5257d9a7d729802f0e3a (diff)
mm: introduce new field "managed_pages" to struct zone
Currently a zone's present_pages is calculated as below, which is
inaccurate and may cause trouble for memory hotplug:

    present_pages = spanned_pages - absent_pages - memmap_pages - dma_reserve
While fixing bugs caused by the inaccurate zone->present_pages, we found
that zone->present_pages has been abused. The field may have different
meanings in different contexts:
1) pages existing in a zone.
2) pages managed by the buddy system.
For more discussions about the issue, please refer to:
http://lkml.org/lkml/2012/11/5/866
https://patchwork.kernel.org/patch/1346751/
This patchset introduces a new field named "managed_pages" to struct
zone, which counts "pages managed by the buddy system", and reverts
zone->present_pages to counting "physical pages existing in a zone",
which also keeps it consistent with pgdat->node_present_pages.
We will set an initial value for zone->managed_pages in function
free_area_init_core() and will adjust it later if the initial value is
inaccurate.
For DMA/normal zones, the initial value is set to:
(spanned_pages - absent_pages - memmap_pages - dma_reserve)
Later zone->managed_pages will be adjusted to the accurate value when the
bootmem allocator frees all free pages to the buddy system in function
free_all_bootmem_node() and free_all_bootmem().
The bootmem allocator doesn't touch highmem pages, so highmem zones'
managed_pages is set to the accurate value "spanned_pages - absent_pages"
in function free_area_init_core() and won't be updated anymore.
This patch also adds a new field "managed_pages" to /proc/zoneinfo
and sysrq showmem.
[akpm@linux-foundation.org: small comment tweaks]
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Maciej Rutecki <maciej.rutecki@gmail.com>
Tested-by: Chris Clayton <chris2553@googlemail.com>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Minchan Kim <minchan@kernel.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/bootmem.c')
 mm/bootmem.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
diff --git a/mm/bootmem.c b/mm/bootmem.c
index 26d057a8b552..19262ac05dd2 100644
--- a/mm/bootmem.c
+++ b/mm/bootmem.c
@@ -229,6 +229,22 @@ static unsigned long __init free_all_bootmem_core(bootmem_data_t *bdata)
 	return count;
 }
 
+static void reset_node_lowmem_managed_pages(pg_data_t *pgdat)
+{
+	struct zone *z;
+
+	/*
+	 * In free_area_init_core(), highmem zone's managed_pages is set to
+	 * present_pages, and bootmem allocator doesn't allocate from highmem
+	 * zones. So there's no need to recalculate managed_pages because all
+	 * highmem pages will be managed by the buddy system. Here highmem
+	 * zone also includes highmem movable zone.
+	 */
+	for (z = pgdat->node_zones; z < pgdat->node_zones + MAX_NR_ZONES; z++)
+		if (!is_highmem(z))
+			z->managed_pages = 0;
+}
+
 /**
  * free_all_bootmem_node - release a node's free pages to the buddy allocator
  * @pgdat: node to be released
@@ -238,6 +254,7 @@ static unsigned long __init free_all_bootmem_core(bootmem_data_t *bdata)
 unsigned long __init free_all_bootmem_node(pg_data_t *pgdat)
 {
 	register_page_bootmem_info_node(pgdat);
+	reset_node_lowmem_managed_pages(pgdat);
 	return free_all_bootmem_core(pgdat->bdata);
 }
 
@@ -250,6 +267,10 @@ unsigned long __init free_all_bootmem(void)
 {
 	unsigned long total_pages = 0;
 	bootmem_data_t *bdata;
+	struct pglist_data *pgdat;
+
+	for_each_online_pgdat(pgdat)
+		reset_node_lowmem_managed_pages(pgdat);
 
 	list_for_each_entry(bdata, &bdata_list, list)
 		total_pages += free_all_bootmem_core(bdata);