author    Lee Schermerhorn <Lee.Schermerhorn@hp.com>    2008-10-18 23:26:42 -0400
committer Linus Torvalds <torvalds@linux-foundation.org>    2008-10-20 11:50:26 -0400
commit    ba9ddf49391645e6bb93219131a40446538a5e76 (patch)
tree      2202525ca36c6f629685f5fea60b5b3ba335f546 /mm/vmscan.c
parent    7b854121eb3e5ba0241882ff939e2c485228c9c5 (diff)
Ramfs and Ram Disk pages are unevictable
Christoph Lameter pointed out that ram disk pages also clutter the LRU lists. When vmscan finds them dirty and tries to clean them, the ram disk writeback function just redirties the page so that it goes back onto the active list. Round and round she goes...

With the ram disk driver [rd.c] replaced by the newer 'brd.c', this is no longer the case, as ram disk pages are no longer maintained on the lru. [This makes them unmigratable for defrag or memory hot remove, but that can be addressed by a separate patch series.] However, the ramfs pages behave like ram disk pages used to, so:

Define new address_space flag [shares address_space flags member with mapping's gfp mask] to indicate that the address space contains all unevictable pages. This will provide for efficient testing of ramfs pages in page_evictable().

Also provide wrapper functions to set/test the unevictable state to minimize #ifdefs in ramfs driver and any other users of this facility.

Set the unevictable state on address_space structures for new ramfs inodes. Test the unevictable state in page_evictable() to cull unevictable pages.

These changes depend on [CONFIG_]UNEVICTABLE_LRU.

[riel@redhat.com: undo the brd.c part]
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Debugged-by: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
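The new flag and its set/test wrappers described above are not part of this mm/vmscan.c diff; they are added to include/linux/pagemap.h elsewhere in the series. A minimal sketch of what such wrappers could look like, assuming the bit (called AS_UNEVICTABLE here; the name and exact bit position are assumptions) is stacked above the gfp-mask bits that share address_space->flags and is guarded by CONFIG_UNEVICTABLE_LRU as the message states:

/*
 * Sketch only -- the AS_UNEVICTABLE name and bit position are assumptions;
 * the real definitions live in include/linux/pagemap.h in this series.
 */
#ifdef CONFIG_UNEVICTABLE_LRU
#define AS_UNEVICTABLE	(__GFP_BITS_SHIFT + 2)	/* all pages in mapping are unevictable */

static inline void mapping_set_unevictable(struct address_space *mapping)
{
	set_bit(AS_UNEVICTABLE, &mapping->flags);
}

static inline int mapping_unevictable(struct address_space *mapping)
{
	/* page_mapping() may return NULL, e.g. for anonymous pages */
	return mapping && test_bit(AS_UNEVICTABLE, &mapping->flags);
}
#else
static inline void mapping_set_unevictable(struct address_space *mapping) { }

static inline int mapping_unevictable(struct address_space *mapping)
{
	return 0;
}
#endif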
Diffstat (limited to 'mm/vmscan.c')
-rw-r--r--  mm/vmscan.c | 5
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2804d23e2da7..9babfbc1ddc8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2332,11 +2332,16 @@ int zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
  * lists vs unevictable list.
  *
  * Reasons page might not be evictable:
+ * (1) page's mapping marked unevictable
+ *
  * TODO - later patches
  */
 int page_evictable(struct page *page, struct vm_area_struct *vma)
 {
 
+	if (mapping_unevictable(page_mapping(page)))
+		return 0;
+
 	/* TODO: test page [!]evictable conditions */
 
 	return 1;
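For completeness, the ramfs side of the series only needs the set-side wrapper: the commit message says the unevictable state is set on the address_space of each new ramfs inode. A hypothetical helper showing the idea (the function name below is illustrative, not taken from the patch; ramfs calls mapping_set_unevictable() on inode->i_mapping at inode creation):

/*
 * Illustrative sketch -- ramfs marks the mapping of every newly created
 * inode unevictable, so page_evictable() above culls all of its pages.
 */
static void ramfs_mark_unevictable(struct inode *inode)
{
	mapping_set_unevictable(inode->i_mapping);
}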