author	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>	2009-09-21 20:01:33 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2009-09-22 10:17:27 -0400
commit	4b02108ac1b3354a22b0d83c684797692efdc395 (patch)
tree	9f65d6e8e35ddce940e7b9da6305cf5a19e5904e /mm/migrate.c
parent	c6a7f5728a1db45d30df55a01adc130b4ab0327c (diff)
mm: oom analysis: add shmem vmstat
Recently we encountered OOM problems due to memory use of the GEM cache.
Generally, a large amount of Shmem/Tmpfs pages tends to create memory
shortage problems.
We often use the following calculation to estimate the number of shmem
pages:

	shmem = NR_ACTIVE_ANON + NR_INACTIVE_ANON - NR_ANON_PAGES

However, this expression does not account for isolated and mlocked pages.
This patch adds explicit accounting for pages used by shmem and tmpfs.
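
[Editor's note, illustrative only and not part of this patch: both the old
estimate and the new counter can be read back from userspace via
/proc/vmstat. The sketch below assumes the usual field names
nr_active_anon, nr_inactive_anon and nr_anon_pages, plus nr_shmem once
this series is applied; error handling is reduced to a missing-counter
check for brevity.]

/* Illustrative sketch, not part of the patch: compare the old shmem
 * estimate (active_anon + inactive_anon - anon_pages) with the
 * nr_shmem counter added by this series, reading /proc/vmstat. */
#include <stdio.h>
#include <string.h>

static long read_vmstat(const char *name)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char key[64];
	long val, ret = -1;

	if (!f)
		return -1;
	while (fscanf(f, "%63s %ld", key, &val) == 2) {
		if (!strcmp(key, name)) {
			ret = val;
			break;
		}
	}
	fclose(f);
	return ret;
}

int main(void)
{
	long estimate = read_vmstat("nr_active_anon")
		      + read_vmstat("nr_inactive_anon")
		      - read_vmstat("nr_anon_pages");
	long nr_shmem = read_vmstat("nr_shmem");

	printf("old estimate: %ld pages\n", estimate);
	if (nr_shmem >= 0)
		printf("nr_shmem:     %ld pages\n", nr_shmem);
	else
		printf("nr_shmem not exported by this kernel\n");
	return 0;
}
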
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Acked-by: Wu Fengguang <fengguang.wu@intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/migrate.c')
-rw-r--r--	mm/migrate.c	5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 0edeac91348d..37143b924484 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -312,7 +312,10 @@ static int migrate_page_move_mapping(struct address_space *mapping,
 	 */
 	__dec_zone_page_state(page, NR_FILE_PAGES);
 	__inc_zone_page_state(newpage, NR_FILE_PAGES);
-
+	if (PageSwapBacked(page)) {
+		__dec_zone_page_state(page, NR_SHMEM);
+		__inc_zone_page_state(newpage, NR_SHMEM);
+	}
 	spin_unlock_irq(&mapping->tree_lock);
 
 	return 0;