author	Kirill A. Shutemov <kirill.shutemov@linux.intel.com>	2012-12-12 16:51:09 -0500
committer	Linus Torvalds <torvalds@linux-foundation.org>	2012-12-12 20:38:32 -0500
commit	d8a8e1f0da3d29d7268b3300c96a059d63901b76 (patch)
tree	46bd195f1cf4fd66c87f7d16c6bf043ad7fda9ff /mm/huge_memory.c
parent	97ae17497e996ff09bf97b6db3b33f7fd4029092 (diff)
thp, vmstat: implement HZP_ALLOC and HZP_ALLOC_FAILED events
hzp_alloc is incremented every time a huge zero page is successfully
allocated. It includes allocations which were dropped due to a race
with another allocation. Note that it doesn't count every map of the
huge zero page, only its allocation.
hzp_alloc_failed is incremented if the kernel fails to allocate a huge
zero page and falls back to using small pages.
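These events are exposed to userspace as text counters in /proc/vmstat; assuming the usual vmstat convention of lowercasing the event enum (so THP_ZERO_PAGE_ALLOC becomes thp_zero_page_alloc), a minimal sketch of reading them from vmstat-style output:

```python
# Parse "name value" lines in the /proc/vmstat format and read the
# huge-zero-page counters. The sample text and counter names below are
# illustrative assumptions, not output captured from a real system.
sample = """\
thp_fault_alloc 12
thp_zero_page_alloc 3
thp_zero_page_alloc_failed 1
"""

def parse_vmstat(text):
    """Return a dict mapping each vmstat counter name to its integer value."""
    stats = {}
    for line in text.splitlines():
        name, _, value = line.partition(" ")
        if value:
            stats[name] = int(value)
    return stats

stats = parse_vmstat(sample)
print(stats["thp_zero_page_alloc"])         # 3
print(stats["thp_zero_page_alloc_failed"])  # 1
```

On a live kernel with this patch applied, the same parser could be pointed at the contents of /proc/vmstat instead of the sample string.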
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@linux.intel.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/huge_memory.c')
-rw-r--r--	mm/huge_memory.c	5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d89220cb1d9f..9a5d45dfad44 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -184,8 +184,11 @@ retry:
 
 	zero_page = alloc_pages((GFP_TRANSHUGE | __GFP_ZERO) & ~__GFP_MOVABLE,
 			HPAGE_PMD_ORDER);
-	if (!zero_page)
+	if (!zero_page) {
+		count_vm_event(THP_ZERO_PAGE_ALLOC_FAILED);
 		return 0;
+	}
+	count_vm_event(THP_ZERO_PAGE_ALLOC);
 	preempt_disable();
 	if (cmpxchg(&huge_zero_pfn, 0, page_to_pfn(zero_page))) {
 		preempt_enable();