author    Mel Gorman <mel@csn.ul.ie>    2008-08-14 06:10:14 -0400
committer Russell King <rmk+kernel@arm.linux.org.uk>    2008-08-27 15:09:28 -0400
commit    e80d6a248298721e0ec2cac150c539d8378577d8 (patch)
tree      0ad2112037cc28e3faab41baf2b6ea1851748019
parent    f1bcf7e3e734ea8713e08fbc3409f8bf26ec418f (diff)
[ARM] Skip memory holes in FLATMEM when reading /proc/pagetypeinfo
Ordinarily, memory holes in flatmem still have a valid memmap and are safe to use. However, an architecture (ARM) frees up the memmap backing memory holes on the assumption it is never used. /proc/pagetypeinfo reads the whole range of pages in a zone believing that the memmap is valid and that pfn_valid will return false if it is not. On ARM, freeing the memmap breaks the page->zone linkages even though pfn_valid() returns true, and the kernel can oops shortly afterwards due to accessing a bogus struct zone *.

This patch lets architectures say when FLATMEM can have holes in the memmap. Rather than an expensive check for valid memory, /proc/pagetypeinfo will confirm that the page linkages are still valid by checking that page->zone is still the expected zone. The lookup of page_zone is safe as there is a limited range of memory that is accessed when calling page_zone. In the unlikely case that page_zone() happens to return the expected zone for a page whose memmap was freed, the impact is only that the counters in /proc/pagetypeinfo are slightly off, and fragmentation monitoring is unlikely to be relevant on an embedded system anyway.

Reported-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
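For reference, the added check boils down to the pattern below when walking a zone's pageblocks. This is a simplified sketch of the loop in pagetypeinfo_showblockcount_print() in mm/vmstat.c, not the patch itself: the helper name count_pageblocks and the start/end handling are illustrative only, and a kernel build context with <linux/mm.h> and <linux/mmzone.h> is assumed.

#include <linux/mm.h>
#include <linux/mmzone.h>

/*
 * Simplified sketch: walk a zone one pageblock at a time and count
 * migrate types, skipping pfns whose memmap backing was freed
 * (possible with FLATMEM holes on e.g. ARM).
 */
static void count_pageblocks(struct zone *zone, unsigned long *count)
{
	unsigned long start_pfn = zone->zone_start_pfn;
	unsigned long end_pfn = start_pfn + zone->spanned_pages;
	unsigned long pfn;

	for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
		struct page *page;
		int mtype;

		if (!pfn_valid(pfn))
			continue;

		page = pfn_to_page(pfn);
#ifdef CONFIG_ARCH_FLATMEM_HAS_HOLES
		/* Memmap hole: the zone back-pointer is bogus, skip it. */
		if (page_zone(page) != zone)
			continue;
#endif
		mtype = get_pageblock_migratetype(page);
		if (mtype < MIGRATE_TYPES)	/* guard against a bogus index */
			count[mtype]++;
	}
}

The cheap page_zone(page) != zone comparison replaces what would otherwise need an expensive "is this memory really present" check, at the cost of slightly inaccurate counters in the rare case the stale struct page still points at the expected zone.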
-rw-r--r--  arch/arm/Kconfig   5
-rw-r--r--  mm/vmstat.c       19
2 files changed, 23 insertions(+), 1 deletion(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 4b8acd2851f4..70dba1668907 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -810,6 +810,11 @@ config OABI_COMPAT
 	  UNPREDICTABLE (in fact it can be predicted that it won't work
 	  at all). If in doubt say Y.
 
+config ARCH_FLATMEM_HAS_HOLES
+	bool
+	default y
+	depends on FLATMEM
+
 config ARCH_DISCONTIGMEM_ENABLE
 	bool
 	default (ARCH_LH7A40X && !LH7A40X_CONTIGMEM)
diff --git a/mm/vmstat.c b/mm/vmstat.c
index b0d08e667ece..d7826af2fb07 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -516,9 +516,26 @@ static void pagetypeinfo_showblockcount_print(struct seq_file *m,
 			continue;
 
 		page = pfn_to_page(pfn);
+#ifdef CONFIG_ARCH_FLATMEM_HAS_HOLES
+		/*
+		 * Ordinarily, memory holes in flatmem still have a valid
+		 * memmap for the PFN range. However, an architecture for
+		 * embedded systems (e.g. ARM) can free up the memmap backing
+		 * holes to save memory on the assumption the memmap is
+		 * never used. The page_zone linkages are then broken even
+		 * though pfn_valid() returns true. Skip the page if the
+		 * linkages are broken. Even if this test passed, the impact
+		 * is that the counters for the movable type are off but
+		 * fragmentation monitoring is likely meaningless on small
+		 * systems.
+		 */
+		if (page_zone(page) != zone)
+			continue;
+#endif
 		mtype = get_pageblock_migratetype(page);
 
-		count[mtype]++;
+		if (mtype < MIGRATE_TYPES)
+			count[mtype]++;
 	}
 
 	/* Print counts */