author    Hugh Dickins <hugh@veritas.com>    2009-03-31 18:23:33 -0400
committer Linus Torvalds <torvalds@linux-foundation.org>    2009-04-01 11:59:15 -0400
commit    9fab5619bdd7f84cdd22cc760778f759f9819a33
tree      4a7010f3ba43896c266032f8b287d6bd55b72c85
parent    327c0e968645f2601a43f5ea7c19c7b3a5fa0a34
shmem: writepage directly to swap
Synopsis: if shmem_writepage calls swap_writepage directly, most shmem swap loads benefit, and a catastrophic interaction between SLUB and some flash storage is avoided.

shmem_writepage() has always been peculiar in making no attempt to write: it has just transferred a shmem page from file cache to swap cache, then let that page make its way around the LRU again before being written and freed.

The idea was that people use tmpfs because they want those pages to stay in RAM; so although we give it an overflow to swap, we should resist writing too soon, giving those pages a second chance before they can be reclaimed.

That was always questionable, and I've toyed with this patch for years; but never had a clear justification to depart from the original design.

It became more questionable in 2.6.28, when the split LRU patches classed shmem and tmpfs pages as SwapBacked rather than as file_cache: that in itself gives them more resistance to reclaim than normal file pages.

I prepared this patch for 2.6.29, but the merge window arrived before I'd completed gathering statistics to justify sending it in. Then while comparing SLQB against SLUB, running SLUB on a laptop I'd habitually used with SLAB, I found SLUB to run my tmpfs kbuild swapping tests five times slower than SLAB or SLQB - other machines slower too, but nowhere near so bad. Simpler "cp -a" swapping tests showed the same.

slub_max_order=0 brings sanity to all, but heavy swapping is too far from normal to justify such a tuning. The crucial factor on that laptop turns out to be that I'm using an SD card for swap. What happens is this:

By default, SLUB uses order-2 pages for shmem_inode_cache (and many other fs inodes), so creating tmpfs files under memory pressure brings lumpy reclaim into play. One subpage of the order is chosen from the bottom of the LRU as usual, then the other three picked out from their random positions on the LRUs.

In a tmpfs load, many of these pages will be ones which already passed through shmem_writepage, so already have swap allocated. And though their offsets on swap were probably allocated sequentially, now that the pages are picked off at random, their swap offsets are scattered.

But the flash storage on the SD card is very sensitive to having its writes merged: once swap is written at scattered offsets, performance falls apart. Rotating disk seeks increase too, but less disastrously.

So: stop giving shmem/tmpfs pages a second pass around the LRU, write them out to swap as soon as their swap has been allocated.

It's surely possible to devise an artificial load which runs faster the old way, one whose sizing is such that the tmpfs pages on their second pass are the ones that are wanted again, and other pages not. But I've not yet found such a load: on all machines, under the loads I've tried, immediate swap_writepage speeds up shmem swapping: especially when using the SLUB allocator (and more effectively than slub_max_order=0), but also with the others; and it also reduces the variance between runs. How much faster varies widely: a factor of five is rare, 5% is common.

One load which might have suffered: imagine a swapping shmem load in a limited mem_cgroup on a machine with plenty of memory. Before 2.6.29 the swapcache was not charged, and such a load would have run quickest with the shmem swapcache never written to swap. But now swapcache is charged, so even this load benefits from shmem_writepage directly to swap.
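A minimal sketch of what that amounts to at the tail of shmem_writepage(), once the page sits in swap cache with a swap slot allocated (helper names here are illustrative only, not verbatim kernel source; it assumes, as the 2.6.29-era mm/page_io.c implementation does, that swap_writepage() sets writeback, unlocks the page and submits the bio itself, which is why the explicit unlock_page() also drops out of the diff below):

    #include <linux/mm.h>
    #include <linux/pagemap.h>
    #include <linux/swap.h>
    #include <linux/writeback.h>

    /* Before: mark the swapcache page dirty and let it circulate the
     * LRU again; it is only written when reclaim finds it a second time. */
    static void shmem_writepage_tail_before(struct page *page)
    {
            set_page_dirty(page);
            unlock_page(page);
    }

    /* After: write to the already-allocated swap slot immediately, so
     * that offsets allocated sequentially are also written sequentially. */
    static void shmem_writepage_tail_after(struct page *page,
                                           struct writeback_control *wbc)
    {
            swap_writepage(page, wbc);
    }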
Apologies for the #ifndef CONFIG_SWAP swap_writepage() stub in swap.h: it's silly because that will never get called; but refactoring shmem.c sensibly according to CONFIG_SWAP will be a separate task.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
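On that stub, a minimal sketch, assuming the usual shape of include/linux/swap.h (real declaration under #ifdef CONFIG_SWAP, inline stubs in the #else branch), of why it is harmless: with CONFIG_SWAP disabled, shmem_writepage() still compiles its call to swap_writepage(), but that path is unreachable because no swap slot can ever be allocated.

    #ifdef CONFIG_SWAP
    /* real implementation lives in mm/page_io.c */
    extern int swap_writepage(struct page *page, struct writeback_control *wbc);
    #else
    /* never reached when swap is configured out; exists only so that
     * shmem.c builds without sprouting its own #ifdefs */
    static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
    {
            return 0;
    }
    #endif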
 include/linux/swap.h | 5 +++++
 mm/shmem.c           | 3 +--
 2 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index b8b0c4ce83e6..62d81435347a 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -382,6 +382,11 @@ static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
 	return NULL;
 }
 
+static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
+{
+	return 0;
+}
+
 static inline struct page *lookup_swap_cache(swp_entry_t swp)
 {
 	return NULL;
diff --git a/mm/shmem.c b/mm/shmem.c
index 7ec78e24a30d..d94d2e9146bc 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1068,8 +1068,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 		swap_duplicate(swap);
 		BUG_ON(page_mapped(page));
 		page_cache_release(page);	/* pagecache ref */
-		set_page_dirty(page);
-		unlock_page(page);
+		swap_writepage(page, wbc);
 		if (inode) {
 			mutex_lock(&shmem_swaplist_mutex);
 			/* move instead of add in case we're racing */