path: root/mm/vmstat.c
author    Mel Gorman <mel@csn.ul.ie>  2008-08-14 06:10:14 -0400
committer Russell King <rmk+kernel@arm.linux.org.uk>  2008-08-27 15:09:28 -0400
commit    e80d6a248298721e0ec2cac150c539d8378577d8 (patch)
tree      0ad2112037cc28e3faab41baf2b6ea1851748019 /mm/vmstat.c
parent    f1bcf7e3e734ea8713e08fbc3409f8bf26ec418f (diff)
[ARM] Skip memory holes in FLATMEM when reading /proc/pagetypeinfo
Ordinarily, memory holes in flatmem still have a valid memmap and are safe
to use. However, an architecture (ARM) frees up the memmap backing memory
holes on the assumption it is never used. /proc/pagetypeinfo reads the whole
range of pages in a zone believing that the memmap is valid and that
pfn_valid will return false if it is not. On ARM, freeing the memmap breaks
the page->zone linkages even though pfn_valid() returns true, and the kernel
can oops shortly afterwards due to accessing a bogus struct zone *.

This patch lets architectures say when FLATMEM can have holes in the memmap.
Rather than an expensive check for valid memory, /proc/pagetypeinfo will
confirm that the page linkages are still valid by checking that page->zone
is still the expected zone. The lookup of page_zone is safe as there is a
limited range of memory that is accessed when calling page_zone. Even if
page_zone happens to return the correct zone for a hole, the impact is only
that the counters in /proc/pagetypeinfo are slightly off, and fragmentation
monitoring is unlikely to be relevant on an embedded system.

Reported-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
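For background on why pfn_valid() cannot catch this case on FLATMEM: with a
single flat memmap it is essentially a range check and knows nothing about
struct pages an architecture has freed. Below is a minimal sketch of that
check, not the kernel's exact per-arch definition (the helper name is
hypothetical; ARCH_PFN_OFFSET and max_mapnr are the usual kernel symbols):

/*
 * Minimal sketch, assuming the generic FLATMEM behaviour: pfn_valid()
 * boils down to a range check against the single memmap. A pfn inside
 * a hole whose struct pages were freed by the architecture still
 * passes this test, which is why the patch adds the
 * page_zone(page) != zone check before touching the page.
 */
static inline int flatmem_pfn_valid_sketch(unsigned long pfn)
{
	return pfn >= ARCH_PFN_OFFSET &&
	       (pfn - ARCH_PFN_OFFSET) < max_mapnr;
}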
Diffstat (limited to 'mm/vmstat.c')
-rw-r--r--   mm/vmstat.c   19
1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/mm/vmstat.c b/mm/vmstat.c
index b0d08e667ece..d7826af2fb07 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -516,9 +516,26 @@ static void pagetypeinfo_showblockcount_print(struct seq_file *m,
 			continue;
 
 		page = pfn_to_page(pfn);
+#ifdef CONFIG_ARCH_FLATMEM_HAS_HOLES
+		/*
+		 * Ordinarily, memory holes in flatmem still have a valid
+		 * memmap for the PFN range. However, an architecture for
+		 * embedded systems (e.g. ARM) can free up the memmap backing
+		 * holes to save memory on the assumption the memmap is
+		 * never used. The page_zone linkages are then broken even
+		 * though pfn_valid() returns true. Skip the page if the
+		 * linkages are broken. Even if this test passed, the impact
+		 * is that the counters for the movable type are off but
+		 * fragmentation monitoring is likely meaningless on small
+		 * systems.
+		 */
+		if (page_zone(page) != zone)
+			continue;
+#endif
 		mtype = get_pageblock_migratetype(page);
 
-		count[mtype]++;
+		if (mtype < MIGRATE_TYPES)
+			count[mtype]++;
 	}
 
 	/* Print counts */