author     Adam Litke <agl@us.ibm.com>  2008-09-02 17:35:38 -0400
committer  Linus Torvalds <torvalds@linux-foundation.org>  2008-09-02 22:21:37 -0400
commit     344c790e3821dac37eb742ddd0b611a300f78b9a (patch)
tree       af7aad0ef87e320c9bfa84c9f14b32c095c0918c /mm
parent     169ccbd44eb20f5bb7e4352451eba25397e29749 (diff)
mm: make setup_zone_migrate_reserve() aware of overlapping nodes
I have gotten to the root cause of the hugetlb badness I reported back on August 15th.  My system has the following memory topology (note the overlapping node):

        Node 0 Memory:  0x8000000-0x44000000
        Node 1 Memory:  0x0-0x8000000  0x44000000-0x80000000

setup_zone_migrate_reserve() scans the address range 0x0-0x8000000 looking for a pageblock to move onto the MIGRATE_RESERVE list.  Finding no candidates, it happily continues the scan into 0x8000000-0x44000000.  When a pageblock is found, the pages are moved to the MIGRATE_RESERVE list on the wrong zone.  Oops.

setup_zone_migrate_reserve() should skip pageblocks in overlapping nodes.

Signed-off-by: Adam Litke <agl@us.ibm.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Cc: Nishanth Aravamudan <nacc@us.ibm.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: <stable@kernel.org>    [2.6.25.x, 2.6.26.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
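To make the failure mode concrete, here is a minimal sketch of the kind of per-zone pageblock walk that setup_zone_migrate_reserve() performs, and where the node check added by this patch short-circuits foreign pageblocks.  This is illustrative only, not the actual mm/page_alloc.c code; the helper name scan_zone_pageblocks and the stripped-down loop body are assumptions for the example.

/*
 * Illustrative sketch -- not the real setup_zone_migrate_reserve().
 * A zone's spanned pfn range can include pageblocks that belong to a
 * different, overlapping node; without the node check, such a block
 * could be picked and its pages moved onto the wrong zone's
 * MIGRATE_RESERVE list.
 */
static void scan_zone_pageblocks(struct zone *zone)
{
        unsigned long pfn, start_pfn, end_pfn;
        struct page *page;

        start_pfn = zone->zone_start_pfn;
        end_pfn = start_pfn + zone->spanned_pages;

        for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
                if (!pfn_valid(pfn))
                        continue;
                page = pfn_to_page(pfn);

                /* The fix: skip pageblocks that live on another node */
                if (page_to_nid(page) != zone_to_nid(zone))
                        continue;

                /* ... MIGRATE_RESERVE candidate selection continues here ... */
        }
}

With the topology above, the walk that starts at 0x0 in node 1's zone eventually reaches node 0's pageblocks at 0x8000000-0x44000000; the page_to_nid() != zone_to_nid() test is what stops those blocks from being reserved against the wrong zone.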
Diffstat (limited to 'mm')
-rw-r--r--  mm/page_alloc.c | 7
1 file changed, 7 insertions, 0 deletions
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index af982f7cdb2a..feb791648a1f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -694,6 +694,9 @@ static int move_freepages(struct zone *zone,
 #endif
 
         for (page = start_page; page <= end_page;) {
+                /* Make sure we are not inadvertently changing nodes */
+                VM_BUG_ON(page_to_nid(page) != zone_to_nid(zone));
+
                 if (!pfn_valid_within(page_to_pfn(page))) {
                         page++;
                         continue;
@@ -2516,6 +2519,10 @@ static void setup_zone_migrate_reserve(struct zone *zone)
                         continue;
                 page = pfn_to_page(pfn);
 
+                /* Watch out for overlapping nodes */
+                if (page_to_nid(page) != zone_to_nid(zone))
+                        continue;
+
                 /* Blocks with reserved pages will never free, skip them. */
                 if (PageReserved(page))
                         continue;