author:    KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>  2010-10-26 17:21:42 -0400
committer: Linus Torvalds <torvalds@linux-foundation.org>    2010-10-26 19:52:07 -0400
commit:    7d3579e8e61937cbba268ea9b218d006b6d64221 (patch)
tree:      4fa1863641343eee551681d60a823a84a2611289 /include/trace
parent:    bc57e00f5e0b2480ef222c775c49552d3a930db7 (diff)
vmscan: narrow the scenarios in which lumpy reclaim uses synchronous reclaim
shrink_page_list() can decide to give up reclaiming a page under a
number of conditions such as
1. trylock_page() failure
2. page is unevictable
3. zone reclaim and page is mapped
4. PageWriteback() is true
5. page is swapbacked and swap is full
6. add_to_swap() failure
7. page is dirty and the gfp mask does not include GFP_IO, GFP_FS
8. page is pinned
9. IO queue is congested
10. pageout() starts IO, but it has not finished
With lumpy reclaim, any of these failures results in entering synchronous lumpy reclaim, but that is often unnecessary: in cases (2), (3), (5), (6), (7) and (8) there is no point in retrying. This patch makes lumpy reclaim abort as soon as it is known that the retry will fail (a simplified sketch follows).
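The mm/vmscan.c hunks are outside this diffstat, so the following is only a minimal, self-contained sketch of the idea, not the patch itself; the enum, the scan_control layout and the helper names are assumptions used purely for illustration.

```c
/*
 * Sketch: when shrink_page_list() hits a failure that a synchronous retry
 * cannot fix -- cases (2), (3), (5), (6), (7) and (8) above -- lumpy reclaim
 * is switched off for this scan instead of escalating to a sync pass.
 * All names and the struct layout here are illustrative assumptions.
 */
#include <stdbool.h>

enum lumpy_mode {
	LUMPY_MODE_NONE,	/* lumpy reclaim disabled */
	LUMPY_MODE_ASYNC,	/* first, asynchronous pass */
	LUMPY_MODE_SYNC,	/* retry pass that waits on writeback */
};

struct scan_control {
	enum lumpy_mode lumpy_reclaim_mode;
};

/* A permanent failure: give up on lumpy reclaim for this scan entirely. */
static void disable_lumpy_reclaim_mode(struct scan_control *sc)
{
	sc->lumpy_reclaim_mode = LUMPY_MODE_NONE;
}

/* Only failures that writeback completion can fix justify a sync retry. */
static bool should_retry_sync(struct scan_control *sc, bool retriable_failure)
{
	if (!retriable_failure) {
		disable_lumpy_reclaim_mode(sc);
		return false;
	}
	return sc->lumpy_reclaim_mode == LUMPY_MODE_ASYNC;
}
```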
Case (9) is more interesting. The current behavior is:
1. start shrink_page_list(async)
2. found queue_congested()
3. skip pageout write
4. still start shrink_page_list(sync)
5. wait on a lot of pages
6. again, found queue_congested()
7. give up pageout write again
So the synchronous retry is pure wasted time. However, simply skipping the page write is also not good: allocating a huge page on x86 needs 512 contiguous pages, for example, which can include more dirty pages than the queue congestion threshold (~128).
After this patch, pageout() behaves as follows (see the sketch after this list):
- If order > PAGE_ALLOC_COSTLY_ORDER
  Always ignore queue congestion.
- If order <= PAGE_ALLOC_COSTLY_ORDER
  Skip the page write and disable lumpy reclaim.
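As a hedged illustration rather than the real mm/vmscan.c hunk (which this diffstat does not include), the decision described above could look roughly like the sketch below. PAGE_ALLOC_COSTLY_ORDER is 3 in the kernel; the other names and the scan_control members are stand-ins chosen for this example.

```c
/*
 * Sketch of the pageout() congestion policy described above. The helper name
 * echoes a real vmscan.c helper of similar purpose, but the body is only an
 * illustration; queue_is_congested() stands in for the bdi congestion check.
 */
#include <stdbool.h>

#define PAGE_ALLOC_COSTLY_ORDER 3	/* kernel's threshold for "costly" orders */

struct scan_control {
	int order;		/* allocation order being reclaimed for */
	int lumpy_reclaim_mode;	/* 0 = lumpy reclaim disabled */
};

static bool queue_is_congested(void)
{
	return true;		/* stand-in for the backing device congestion test */
}

/* Decide whether pageout() may submit writeback for this page. */
static bool may_write_to_queue(struct scan_control *sc)
{
	if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
		return true;			/* huge allocation: ignore congestion */
	if (queue_is_congested()) {
		sc->lumpy_reclaim_mode = 0;	/* skip the write, drop lumpy reclaim */
		return false;
	}
	return true;
}
```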
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include/trace')
 -rw-r--r--  include/trace/events/vmscan.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
```diff
diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index ecf952192a93..c255fcc587bf 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -25,13 +25,13 @@
 
 #define trace_reclaim_flags(page, sync) ( \
 	(page_is_file_cache(page) ? RECLAIM_WB_FILE : RECLAIM_WB_ANON) | \
-	(sync == PAGEOUT_IO_SYNC ? RECLAIM_WB_SYNC : RECLAIM_WB_ASYNC) \
+	(sync == LUMPY_MODE_SYNC ? RECLAIM_WB_SYNC : RECLAIM_WB_ASYNC) \
 	)
 
 #define trace_shrink_flags(file, sync) ( \
-	(sync == PAGEOUT_IO_SYNC ? RECLAIM_WB_MIXED : \
+	(sync == LUMPY_MODE_SYNC ? RECLAIM_WB_MIXED : \
 		(file ? RECLAIM_WB_FILE : RECLAIM_WB_ANON)) | \
-	(sync == PAGEOUT_IO_SYNC ? RECLAIM_WB_SYNC : RECLAIM_WB_ASYNC) \
+	(sync == LUMPY_MODE_SYNC ? RECLAIM_WB_SYNC : RECLAIM_WB_ASYNC) \
 	)
 
 TRACE_EVENT(mm_vmscan_kswapd_sleep,
```
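For context, the two macros touched above only change which mode value they compare against (PAGEOUT_IO_SYNC becomes LUMPY_MODE_SYNC); their callers in mm/vmscan.c are not part of this diff. Below is a small, self-contained mirror of trace_shrink_flags() showing which flag bits result; the RECLAIM_WB_* values are assumed stand-ins rather than quoted from the header.

```c
/*
 * Userspace mirror of trace_shrink_flags() after the rename, for illustration
 * only. The flag values are assumptions; only the bitwise shape follows the
 * macro shown in the diff above.
 */
#include <stdio.h>

#define RECLAIM_WB_ANON   0x0001u
#define RECLAIM_WB_FILE   0x0002u
#define RECLAIM_WB_MIXED  0x0010u
#define RECLAIM_WB_SYNC   0x0004u
#define RECLAIM_WB_ASYNC  0x0008u

enum lumpy_mode { LUMPY_MODE_NONE, LUMPY_MODE_ASYNC, LUMPY_MODE_SYNC };

/* Mirrors trace_shrink_flags(file, sync) after the rename. */
static unsigned int shrink_flags(int file, enum lumpy_mode sync)
{
	return (sync == LUMPY_MODE_SYNC ? RECLAIM_WB_MIXED :
		(file ? RECLAIM_WB_FILE : RECLAIM_WB_ANON)) |
	       (sync == LUMPY_MODE_SYNC ? RECLAIM_WB_SYNC : RECLAIM_WB_ASYNC);
}

int main(void)
{
	/* Async reclaim of a file page: FILE|ASYNC; sync lumpy pass: MIXED|SYNC. */
	printf("%#x\n", shrink_flags(1, LUMPY_MODE_ASYNC));
	printf("%#x\n", shrink_flags(1, LUMPY_MODE_SYNC));
	return 0;
}
```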