author    KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>    2010-05-24 17:31:48 -0400
committer Linus Torvalds <torvalds@linux-foundation.org>      2010-05-25 11:06:56 -0400
commit    e9d6c157385e4efa61cb8293e425c9d8beba70d3 (patch)
tree      fca2452b46328c9005b8a4043a22b7b1b4d47d0c /mm/filemap.c
parent    1f0a738868cbfe20ae53a00b7c302c04ef7ab8fc (diff)
tmpfs: insert tmpfs cache pages to inactive list at first
Shaohua Li reported that parallel file copy on tmpfs can lead to the OOM killer. This is a regression caused by commit 9ff473b9a7 ("vmscan: evict streaming IO first"). Wow, it is a 2-year-old patch!

Currently, tmpfs file cache is inserted into the active list at first. This means the insertion not only increases the number of pages on the anon LRU, but also reduces the anon scanning ratio. Therefore, vmscan gets totally confused: it scans almost only the file LRU even though the system has plenty of unused tmpfs pages.

Historically, lru_cache_add_active_anon() was used for two reasons:

1) to prioritize shmem pages over regular file cache;
2) to avoid reclaim priority inversion of used-once pages.

But we have lost both motivations:

(1) Now we have separate anon and file LRU lists, so inserting into the active list no longer provides such prioritization.

(2) In the past, one pte access bit would cause page activation, so inserting into the inactive list with the pte access bit set meant higher priority than inserting into the active list. That priority inversion could lead to unintended LRU churn, but it was already solved by commit 645747462 ("vmscan: detect mapped file pages used only once"). (Thanks Hannes, you are great!)

Thus, we can now use lru_cache_add_anon() instead.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reported-by: Shaohua Li <shaohua.li@intel.com>
Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
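For context, the two helpers differ only in which LRU list they target. In kernels of this era they were thin wrappers around __lru_cache_add() in include/linux/swap.h, roughly as sketched below (a minimal sketch of the contemporaneous definitions, not quoted verbatim from the tree):

    static inline void lru_cache_add_anon(struct page *page)
    {
            /* queue the page for insertion on the inactive anon LRU */
            __lru_cache_add(page, LRU_INACTIVE_ANON);
    }

    static inline void lru_cache_add_active_anon(struct page *page)
    {
            /* queue the page for insertion on the active anon LRU */
            __lru_cache_add(page, LRU_ACTIVE_ANON);
    }

So the one-line change in the diff below simply redirects freshly added tmpfs cache pages from the active anon list to the inactive anon list.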
Diffstat (limited to 'mm/filemap.c')
-rw-r--r--  mm/filemap.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 140ebda9640f..d6f4f073836e 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -441,7 +441,7 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 	/*
 	 * Splice_read and readahead add shmem/tmpfs pages into the page cache
 	 * before shmem_readpage has a chance to mark them as SwapBacked: they
-	 * need to go on the active_anon lru below, and mem_cgroup_cache_charge
+	 * need to go on the anon lru below, and mem_cgroup_cache_charge
 	 * (called in add_to_page_cache) needs to know where they're going too.
 	 */
 	if (mapping_cap_swap_backed(mapping))
@@ -452,7 +452,7 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 		if (page_is_file_cache(page))
 			lru_cache_add_file(page);
 		else
-			lru_cache_add_active_anon(page);
+			lru_cache_add_anon(page);
 	}
 	return ret;
 }