author     Mel Gorman <mel@csn.ul.ie>                        2010-05-24 17:32:17 -0400
committer  Linus Torvalds <torvalds@linux-foundation.org>    2010-05-25 11:06:58 -0400
commit     3f6c82728f4e31a97c3a1b32abccb512fed0b573 (patch)
tree       4b577e789a5daef91e40d10bc71c8134b3874ae8
parent     e325c90ffc13b698fa2814102e05275b21c26bec (diff)
mm: migration: take a reference to the anon_vma before migrating
This patchset is a memory compaction mechanism that reduces external memory fragmentation by moving GFP_MOVABLE pages into a smaller number of pageblocks. The term "compaction" was chosen because there are a number of mechanisms, not mutually exclusive, that can be used to defragment memory. For example, lumpy reclaim is a form of defragmentation, as was slub "defragmentation" (really a form of targeted reclaim). Hence, this is called "compaction" to distinguish it from other forms of defragmentation.

In this implementation, a full compaction run involves two scanners operating within a zone - a migration scanner and a free scanner. The migration scanner starts at the beginning of a zone, finds all movable pages within one pageblock_nr_pages-sized area and isolates them on a migratepages list. The free scanner begins at the end of the zone and searches on a per-area basis for enough free pages to migrate all the pages on the migratepages list. As each area is respectively migrated or exhausted of free pages, the scanners are advanced one area. A compaction run completes within a zone when the two scanners meet.

This method is a bit primitive, but it is easy to understand, and greater sophistication would require maintaining counters on a per-pageblock basis. That would have a big impact on allocator fast paths just to improve compaction, which is a poor trade-off. It also makes no attempt to relocate virtually contiguous pages so that they become physically contiguous. However, assuming transparent hugepages were in use, a hypothetical khugepaged might reuse the compaction code to isolate free pages, split them and relocate userspace pages for promotion.
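To make the scanner interaction above concrete, here is a minimal userspace model in C. It is a sketch only: every name in it (compact_zone_model, isolate_movable_in_area, AREA_PAGES and so on) is invented for illustration and does not correspond to the kernel's functions, and the stub helpers simply pretend each area yields a fixed number of pages.

#include <stddef.h>

#define AREA_PAGES 512UL        /* stand-in for pageblock_nr_pages */

/* Stub helpers so the model runs; a real implementation would walk
 * struct page ranges.  Here every area yields 32 movable pages and
 * 32 free pages. */
static size_t isolate_movable_in_area(unsigned long pfn) { (void)pfn; return 32; }
static size_t isolate_free_in_area(unsigned long pfn) { (void)pfn; return 32; }
static void migrate_isolated(void) { }

/* One compaction run over a zone spanning [start_pfn, end_pfn);
 * assumes the zone is at least one area long. */
static void compact_zone_model(unsigned long start_pfn, unsigned long end_pfn)
{
        unsigned long migrate_pfn = start_pfn;          /* scans forward */
        unsigned long free_pfn = end_pfn - AREA_PAGES;  /* scans backward */

        while (migrate_pfn < free_pfn) {
                /* Isolate movable pages in one area onto a migrate list. */
                size_t nr_isolated = isolate_movable_in_area(migrate_pfn);
                size_t nr_free = 0;

                /* Pull the free scanner back until enough free pages have
                 * been isolated to receive the whole migrate list. */
                while (nr_free < nr_isolated && free_pfn > migrate_pfn) {
                        nr_free += isolate_free_in_area(free_pfn);
                        free_pfn -= AREA_PAGES;
                }

                migrate_isolated();
                migrate_pfn += AREA_PAGES;      /* advance the migration scanner */
        }
        /* The run completes when the two scanners meet. */
}

int main(void)
{
        compact_zone_model(0, 1UL << 18);       /* model 1GB of 4K pages */
        return 0;
}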
Memory compaction can be triggered in one of three ways. It may be triggered explicitly by writing any value to /proc/sys/vm/compact_memory, which compacts all of memory. It can be triggered on a per-node basis by writing any value to /sys/devices/system/node/nodeN/compact, where N is the node ID to be compacted. When a process fails to allocate a high-order page, it may compact memory in an attempt to satisfy the allocation instead of entering direct reclaim. Explicit compaction does not finish until the two scanners meet, and direct compaction ends if a suitable page becomes available that would meet watermarks. (A minimal userspace example of the explicit trigger appears after the patch summaries below.)

The series is in 14 patches. The first three are not "core" to the series but are important pre-requisites.

Patch 1 reference counts anon_vma for rmap_walk_anon(). Without this patch, it's possible to use an anon_vma after free if the caller is not holding a VMA or mmap_sem for the pages in question. While there should be no existing user that causes this problem, it's a requirement for memory compaction to be stable. The patch is at the start of the series for bisection reasons.
Patch 2 merges the KSM and migrate counts. It could be merged with patch 1 but would be slightly harder to review.
Patch 3 skips over unmapped anon pages during migration as there are no guarantees about the anon_vma existing. There is a window between when a page is isolated and when migration starts during which the anon_vma could disappear.
Patch 4 notes that PageSwapCache pages can still be migrated even if they are unmapped.
Patch 5 allows CONFIG_MIGRATION to be set without CONFIG_NUMA.
Patch 6 exports an "unusable free space index" via debugfs. It's a measure of external fragmentation that takes the size of the allocation request into account. It can also be calculated from userspace, so it can be dropped if requested.
Patch 7 exports a "fragmentation index" which only has meaning when an allocation request fails. It determines whether an allocation failure is due to a lack of memory or to external fragmentation.
Patch 8 moves the definition of the LRU isolation modes for use by compaction.
Patch 9 is the compaction mechanism, although it's unreachable at this point.
Patch 10 adds a means of compacting all of memory with a proc trigger.
Patch 11 adds a means of compacting a specific node with a sysfs trigger.
Patch 12 adds "direct compaction" before "direct reclaim" if it is determined there is a good chance of success.
Patch 13 adds a sysctl that allows tuning of the threshold at which the kernel will compact or direct reclaim.
Patch 14 temporarily disables compaction if an allocation failure occurs after compaction.
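As the concrete example promised above, the following small userspace program (my sketch, not part of the series) starts a system-wide compaction run; writing to /sys/devices/system/node/nodeN/compact instead would compact a single node. Root privileges are required.

#include <stdio.h>

int main(void)
{
        /* Writing any value to this file compacts all of memory. */
        FILE *f = fopen("/proc/sys/vm/compact_memory", "w");

        if (!f) {
                perror("compact_memory");
                return 1;
        }
        fputs("1\n", f);
        fclose(f);
        return 0;
}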
Testing of compaction was done in three stages. For the tests, debugging, preempt, the sleep watchdog and lockdep were all enabled, but nothing nasty popped out. min_free_kbytes was tuned as recommended by hugeadm to help fragmentation avoidance and high-order allocations. It was tested on X86, X86-64 and PPC64.

The first test represents one of the easiest cases that lumpy reclaim or memory compaction can face.

1. Machine freshly booted and configured for hugepage usage with
   a) hugeadm --create-global-mounts
   b) hugeadm --pool-pages-max DEFAULT:8G
   c) hugeadm --set-recommended-min_free_kbytes
   d) hugeadm --set-recommended-shmmax

   The min_free_kbytes here is important. Anti-fragmentation works best when pageblocks don't mix. hugeadm knows how to calculate a value that will significantly reduce the worst of external-fragmentation-related events as reported by the mm_page_alloc_extfrag tracepoint.

2. Load up memory
   a) Start updatedb.
   b) In parallel, create X files of pagesize*128 in size. Wait until the files are created. By parallel, I mean that 4096 instances of dd were launched, one after the other, using &. The crude objective is to mix filesystem metadata allocations with the buffer cache.
   c) Delete every second file so that pageblocks are likely to have holes.
   d) Kill updatedb if it's still running.

   At this point, the system is quiet and memory is full, but it's full of clean filesystem metadata and clean buffer cache that is unmapped. This is readily migrated or discarded, so you'd expect lumpy reclaim to have no significant advantage over compaction, but this is at the POC stage.

3. In increments, attempt to allocate 5% of memory as hugepages. Measure how long it took, how successful it was, how many direct reclaims took place and how many compactions. Note that the compaction figures might not fully add up, as compactions can take place for orders other than the hugepage size.

   X86                          vanilla   compaction
   Final page count                 913          916 (attempted 1002)
   Pages reclaimed                68296         9791

   X86-64                       vanilla   compaction
   Final page count:                901          902 (attempted 1002)
   Total pages reclaimed:        112599        53234

   PPC64                        vanilla   compaction
   Final page count:                 93           94 (attempted 110)
   Total pages reclaimed:        103216        61838

There was not a dramatic improvement in success rates, but that wouldn't be expected in this case either. What is important is that fewer pages were reclaimed in all cases, reducing the amount of IO required to satisfy a huge page allocation.

The second tests were all performance related - kernbench, netperf, iozone and sysbench. None showed anything too remarkable.

The last test was a high-order allocation stress test. Many kernel compiles are started to fill memory with a pressured mix of unmovable and movable allocations. During this, an attempt is made to allocate 90% of memory as huge pages - one at a time, with small delays between attempts to avoid flooding the IO queue.

                                           vanilla   compaction
   Percentage of request allocated X86          98           99
   Percentage of request allocated X86-64       95           98
   Percentage of request allocated PPC64        55           70

This patch:

rmap_walk_anon() does not use page_lock_anon_vma() for looking up and locking an anon_vma, and it does not appear to have sufficient locking to ensure the anon_vma does not disappear from under it.

This patch copies an approach used by KSM to take a reference on the anon_vma while pages are being migrated. This should prevent rmap_walk() from running into nasty surprises later because the anon_vma has been freed.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
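For orientation before the diff itself: condensed from the mm/migrate.c hunks below (an excerpt, not standalone code), the reference is taken under the RCU read lock while the page is known to be anonymous, and dropped once migration is complete, garbage collecting the anon_vma if the last reference went away while it was empty.

        if (PageAnon(page)) {
                rcu_read_lock();
                rcu_locked = 1;
                anon_vma = page_anon_vma(page);
                atomic_inc(&anon_vma->migrate_refcount);
        }

        /* ... unmap, move the page, fix up migration ptes ... */

        /* Drop an anon_vma reference if we took one */
        if (anon_vma && atomic_dec_and_lock(&anon_vma->migrate_refcount,
                                            &anon_vma->lock)) {
                int empty = list_empty(&anon_vma->head);
                spin_unlock(&anon_vma->lock);
                if (empty)
                        anon_vma_free(anon_vma);
        }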
-rw-r--r--  include/linux/rmap.h   23
-rw-r--r--  mm/migrate.c           12
-rw-r--r--  mm/rmap.c              10
3 files changed, 40 insertions, 5 deletions
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index d25bd224d370..567d43f29a10 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -29,6 +29,9 @@ struct anon_vma {
 #ifdef CONFIG_KSM
         atomic_t ksm_refcount;
 #endif
+#ifdef CONFIG_MIGRATION
+        atomic_t migrate_refcount;
+#endif
         /*
          * NOTE: the LSB of the head.next is set by
          * mm_take_all_locks() _after_ taking the above lock. So the
@@ -81,6 +84,26 @@ static inline int ksm_refcount(struct anon_vma *anon_vma)
         return 0;
 }
 #endif /* CONFIG_KSM */
+#ifdef CONFIG_MIGRATION
+static inline void migrate_refcount_init(struct anon_vma *anon_vma)
+{
+        atomic_set(&anon_vma->migrate_refcount, 0);
+}
+
+static inline int migrate_refcount(struct anon_vma *anon_vma)
+{
+        return atomic_read(&anon_vma->migrate_refcount);
+}
+#else
+static inline void migrate_refcount_init(struct anon_vma *anon_vma)
+{
+}
+
+static inline int migrate_refcount(struct anon_vma *anon_vma)
+{
+        return 0;
+}
+#endif /* CONFIG_MIGRATION */
 
 static inline struct anon_vma *page_anon_vma(struct page *page)
 {
diff --git a/mm/migrate.c b/mm/migrate.c
index 5938db54e1d7..b768a1d4fa43 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -543,6 +543,7 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
         int rcu_locked = 0;
         int charge = 0;
         struct mem_cgroup *mem = NULL;
+        struct anon_vma *anon_vma = NULL;
 
         if (!newpage)
                 return -ENOMEM;
@@ -599,6 +600,8 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
         if (PageAnon(page)) {
                 rcu_read_lock();
                 rcu_locked = 1;
+                anon_vma = page_anon_vma(page);
+                atomic_inc(&anon_vma->migrate_refcount);
         }
 
         /*
@@ -638,6 +641,15 @@ skip_unmap:
         if (rc)
                 remove_migration_ptes(page, page);
 rcu_unlock:
+
+        /* Drop an anon_vma reference if we took one */
+        if (anon_vma && atomic_dec_and_lock(&anon_vma->migrate_refcount, &anon_vma->lock)) {
+                int empty = list_empty(&anon_vma->head);
+                spin_unlock(&anon_vma->lock);
+                if (empty)
+                        anon_vma_free(anon_vma);
+        }
+
         if (rcu_locked)
                 rcu_read_unlock();
 uncharge:
diff --git a/mm/rmap.c b/mm/rmap.c
index 0feeef860a8f..f522cb008646 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -250,7 +250,8 @@ static void anon_vma_unlink(struct anon_vma_chain *anon_vma_chain)
         list_del(&anon_vma_chain->same_anon_vma);
 
         /* We must garbage collect the anon_vma if it's empty */
-        empty = list_empty(&anon_vma->head) && !ksm_refcount(anon_vma);
+        empty = list_empty(&anon_vma->head) && !ksm_refcount(anon_vma) &&
+                                        !migrate_refcount(anon_vma);
         spin_unlock(&anon_vma->lock);
 
         if (empty)
@@ -275,6 +276,7 @@ static void anon_vma_ctor(void *data)
 
         spin_lock_init(&anon_vma->lock);
         ksm_refcount_init(anon_vma);
+        migrate_refcount_init(anon_vma);
         INIT_LIST_HEAD(&anon_vma->head);
 }
 
@@ -1355,10 +1357,8 @@ static int rmap_walk_anon(struct page *page, int (*rmap_one)(struct page *,
         /*
          * Note: remove_migration_ptes() cannot use page_lock_anon_vma()
          * because that depends on page_mapped(); but not all its usages
-         * are holding mmap_sem, which also gave the necessary guarantee
-         * (that this anon_vma's slab has not already been destroyed).
-         * This needs to be reviewed later: avoiding page_lock_anon_vma()
-         * is risky, and currently limits the usefulness of rmap_walk().
+         * are holding mmap_sem. Users without mmap_sem are required to
+         * take a reference count to prevent the anon_vma disappearing
          */
         anon_vma = page_anon_vma(page);
         if (!anon_vma)