path: root/mm/memory.c
author     Hugh Dickins <hugh@veritas.com>  2008-02-05 01:28:42 -0500
committer  Linus Torvalds <torvalds@woody.linux-foundation.org>  2008-02-05 12:44:14 -0500
commit     02098feaa42b2e0087fbbe6c6ab9a23e4653b16a (patch)
tree       494eaf13f204c9384d4316202fd76cd1b5d960ad /mm/memory.c
parent     46017e954826ac59e91df76341a3f76b45467847 (diff)
swapin needs gfp_mask for loop on tmpfs
Building in a filesystem on a loop device on a tmpfs file can hang when swapping, the loop thread caught in that infamous throttle_vm_writeout.

In theory this is a long-standing problem, which I've either never seen in practice, or long ago suppressed the recollection, after discounting my load and my tmpfs size as unrealistically high. But now, with the new aops, it has become easy to hang on one machine.

Loop used to grab_cache_page before the old prepare_write to tmpfs, which seems to have been enough to free up some memory for any swapin needed; but the new write_begin lets tmpfs find or allocate the page (much nicer, since grab_cache_page missed tmpfs pages in swapcache).

When allocating a fresh page, tmpfs respects loop's mapping_gfp_mask, which has __GFP_IO|__GFP_FS stripped off, and throttle_vm_writeout is designed to break out when __GFP_IO or __GFP_FS is unset; but when tmpfs swaps in, read_swap_cache_async allocates with GFP_HIGHUSER_MOVABLE regardless of the mapping_gfp_mask - hence the hang.

So, pass gfp_mask down the line from shmem_getpage to shmem_swapin to swapin_readahead to read_swap_cache_async to add_to_swap_cache.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
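[Editor's illustration, not part of the commit.] The hang boils down to the gfp_mask being dropped on the swapin path: loop's mapping clears __GFP_IO|__GFP_FS, but the swap-cache allocation used a hardcoded GFP_HIGHUSER_MOVABLE, so the throttle never saw the restricted mask. The following minimal userspace C sketch shows only that masking logic, under simplified bit values; the demo_* names are hypothetical stand-ins and do not mirror the real signatures of shmem_swapin, swapin_readahead, read_swap_cache_async or throttle_vm_writeout beyond the gfp_t argument visible in the diff below.

    /* Illustrative sketch only -- not kernel code from this patch. */
    #include <stdio.h>

    typedef unsigned int gfp_t;

    #define __GFP_IO 0x1u
    #define __GFP_FS 0x2u
    /* Simplified: the real GFP_HIGHUSER_MOVABLE also carries highmem/movable bits. */
    #define GFP_HIGHUSER_MOVABLE (__GFP_IO | __GFP_FS)

    /* Stand-in for throttle_vm_writeout(): a caller that may not issue
     * I/O (no __GFP_IO/__GFP_FS) must break out rather than wait for
     * writeback it would itself have to perform. */
    static void demo_throttle(gfp_t gfp_mask)
    {
            if (!(gfp_mask & (__GFP_IO | __GFP_FS))) {
                    printf("  throttle: no IO/FS allowed, breaking out\n");
                    return;
            }
            printf("  throttle: waiting for writeout progress\n");
    }

    /* Shape of the fix: the caller's gfp_mask is passed down the swapin
     * path instead of being replaced by GFP_HIGHUSER_MOVABLE. */
    static void demo_swapin(gfp_t gfp_mask)
    {
            demo_throttle(gfp_mask);
    }

    int main(void)
    {
            gfp_t loop_mapping_gfp = GFP_HIGHUSER_MOVABLE & ~(__GFP_IO | __GFP_FS);

            printf("before the patch (mask hardcoded):\n");
            demo_swapin(GFP_HIGHUSER_MOVABLE);  /* can stall the loop thread */

            printf("after the patch (caller's mask honoured):\n");
            demo_swapin(loop_mapping_gfp);      /* breaks out, no hang */
            return 0;
    }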
Diffstat (limited to 'mm/memory.c')
-rw-r--r--  mm/memory.c  3
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index ccc9403d5352..bc137751da7f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2007,7 +2007,8 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	page = lookup_swap_cache(entry);
 	if (!page) {
 		grab_swap_token(); /* Contend for token _before_ read-in */
-		page = swapin_readahead(entry, vma, address);
+		page = swapin_readahead(entry,
+					GFP_HIGHUSER_MOVABLE, vma, address);
 		if (!page) {
 			/*
 			 * Back out if somebody else faulted in this pte