path: root/mm
author	Mel Gorman <mel@csn.ul.ie>	2009-01-06 17:38:54 -0500
committer	Linus Torvalds <torvalds@linux-foundation.org>	2009-01-06 18:58:58 -0500
commit	3340289ddf29ca75c3acfb3a6b72f234b2f74d5c (patch)
tree	d5da94eb1cb0146160fcb0e7aa161bfa5b6ac807 /mm
parent	08fba69986e20c1c9e5fe2e6064d146cc4f42480 (diff)
mm: report the MMU pagesize in /proc/pid/smaps
The KernelPageSize entry in /proc/pid/smaps is the pagesize used by the kernel to back a VMA. This matches the size used by the MMU in the majority of cases. However, one counter-example occurs on PPC64 kernels, whereby a kernel using 64K as a base pagesize may still use 4K pages for the MMU on older processors. To distinguish the two, this patch reports MMUPageSize as the pagesize used by the MMU in /proc/pid/smaps.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Cc: "KOSAKI Motohiro" <kosaki.motohiro@jp.fujitsu.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--	mm/hugetlb.c	13
1 file changed, 13 insertions(+), 0 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5cb8bc7c80f7..9595278b5ab4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -236,6 +236,19 @@ unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
 }
 
 /*
+ * Return the page size being used by the MMU to back a VMA. In the majority
+ * of cases, the page size used by the kernel matches the MMU size. On
+ * architectures where it differs, an architecture-specific version of this
+ * function is required.
+ */
+#ifndef vma_mmu_pagesize
+unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
+{
+	return vma_kernel_pagesize(vma);
+}
+#endif
+
+/*
  * Flags for MAP_PRIVATE reservations.  These are stored in the bottom
  * bits of the reservation map pointer, which are always clear due to
  * alignment.