author		Mel Gorman <mel@csn.ul.ie>	2011-01-13 18:45:56 -0500
committer	Linus Torvalds <torvalds@linux-foundation.org>	2011-01-13 20:32:33 -0500
commit		3e7d344970673c5334cf7b5bb27c8c0942b06126 (patch)
tree		832ecb4da5fd27efa5a503df5b96bfdee2a52ffd /mm/migrate.c
parent		ee64fc9354e515a79c7232cfde65c88ec627308b (diff)
mm: vmscan: reclaim order-0 and use compaction instead of lumpy reclaim
Lumpy reclaim is disruptive. It reclaims a large number of pages and
ignores the age of the pages it reclaims. This can incur significant
stalls and potentially increase the number of major faults.
Compaction has reached the point where it is considered reasonably stable
(meaning it has passed a lot of testing) and is a potential candidate for
displacing lumpy reclaim. This patch introduces an alternative to lumpy
reclaim, called reclaim/compaction, that is used when compaction is
available. The basic operation is very simple - instead of selecting a
contiguous range of pages to reclaim, a number of order-0 pages are
reclaimed and compaction is triggered later, either by kswapd
(compact_zone_order()) or by direct compaction
(__alloc_pages_direct_compact()).
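The shape of that flow can be sketched as follows. This is an
illustrative sketch only, not the patch itself: shrink_zone_order0() is
a hypothetical stand-in for ordinary order-0 LRU reclaim, and the
compact_zone_order() and alloc_pages() signatures are simplified.

static struct page *reclaim_compact(struct zone *zone, gfp_t gfp_mask,
				    unsigned int order)
{
	struct page *page;

	/*
	 * Step 1: reclaim order-0 pages by age via normal LRU scanning
	 * (hypothetical helper), rather than evicting an arbitrary
	 * contiguous range of pages the way lumpy reclaim did.
	 */
	shrink_zone_order0(zone);

	/*
	 * Step 2: compact the zone so the freed order-0 pages can be
	 * migrated together into a free block of the requested order
	 * (simplified signature and return-value handling).
	 */
	if (compact_zone_order(zone, order, gfp_mask) != COMPACT_SKIPPED) {
		page = alloc_pages(gfp_mask, order);
		if (page)
			return page;
	}

	return NULL;
}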
[akpm@linux-foundation.org: fix build]
[akpm@linux-foundation.org: use conventional task_struct naming]
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Rik van Riel <riel@redhat.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/migrate.c')
 mm/migrate.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+), 0 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 6ae8a66a7045..94875b265928 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -639,6 +639,23 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
 	if (!trylock_page(page)) {
 		if (!force)
 			goto move_newpage;
+
+		/*
+		 * It's not safe for direct compaction to call lock_page.
+		 * For example, during page readahead pages are added locked
+		 * to the LRU. Later, when the IO completes the pages are
+		 * marked uptodate and unlocked. However, the queueing
+		 * could be merging multiple pages for one bio (e.g.
+		 * mpage_readpages). If an allocation happens for the
+		 * second or third page, the process can end up locking
+		 * the same page twice and deadlocking. Rather than
+		 * trying to be clever about what pages can be locked,
+		 * avoid the use of lock_page for direct compaction
+		 * altogether.
+		 */
+		if (current->flags & PF_MEMALLOC)
+			goto move_newpage;
+
 		lock_page(page);
 	}
 
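The deadlock that the new PF_MEMALLOC check avoids can be modelled as
follows. This is a simplified, illustrative model, not kernel code
as-is: add_page_to_bio() is a hypothetical helper standing in for the
bio merging done by mpage_readpages(), and error handling is omitted.

static void readahead_model(struct page **pages, int nr)
{
	struct bio *bio = NULL;
	int i;

	for (i = 0; i < nr; i++) {
		/*
		 * Readahead pages go onto the LRU locked; only IO
		 * completion unlocks them.
		 */
		lock_page(pages[i]);
		bio = add_page_to_bio(bio, pages[i]);	/* hypothetical */
		/*
		 * add_page_to_bio() may need to allocate memory. If
		 * that allocation enters direct compaction, and
		 * compaction selects pages[0] for migration and calls
		 * lock_page() on it, it sleeps on a lock this task
		 * already holds. The bio has not been submitted yet,
		 * so IO completion can never unlock the page: deadlock.
		 * With the PF_MEMALLOC check above, compaction skips
		 * such pages instead of sleeping.
		 */
	}
	submit_bio(READ, bio);	/* never reached in the deadlock case */
}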