author    KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>    2007-07-26 13:41:07 -0400
committer Linus Torvalds <torvalds@woody.linux-foundation.org>  2007-07-26 14:35:17 -0400
commit    dc386d4d1e98bb39fb967ee156cd456c802fc692 (patch)
tree      ddd26eb0f08611a84157e4f8e1537a5127b96ea0 /mm/migrate.c
parent    098284020c47c1212d211e39ae2b41c21182e056 (diff)
memory unplug: migration by kernel
Usually, migrate_pages(page,,) is called while holding mm->sem via a system
call.  (mm here is the mm_struct that maps the migration target page.)
This semaphore helps avoid some race conditions.

But if we want to migrate a page from kernel code, we have to avoid those
races ourselves.  This patch adds checks for the following race conditions:

1. A page with page->mapping == NULL can be a migration target.  So we
   have to check page->mapping before calling try_to_unmap().

2. An anon_vma can be freed while the page is unmapped, even though
   page->mapping still points to it: try_to_unmap() drops page->mapcount
   to 0, after which page->mapping can no longer be trusted.  So use
   rcu_read_lock() to keep the anon_vma pointed to by page->mapping from
   being freed during migration.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/migrate.c')
-rw-r--r--    mm/migrate.c    21
1 file changed, 19 insertions, 2 deletions
diff --git a/mm/migrate.c b/mm/migrate.c
index 34d8ada053e4..c8d87221f368 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -632,18 +632,35 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
 			goto unlock;
 		wait_on_page_writeback(page);
 	}
-
 	/*
-	 * Establish migration ptes or remove ptes
+	 * By try_to_unmap(), page->mapcount goes down to 0 here. In this case,
+	 * we cannot notice that anon_vma is freed while we migrates a page.
+	 * This rcu_read_lock() delays freeing anon_vma pointer until the end
+	 * of migration. File cache pages are no problem because of page_lock()
+	 */
+	rcu_read_lock();
+	/*
+	 * This is a corner case handling.
+	 * When a new swap-cache is read into, it is linked to LRU
+	 * and treated as swapcache but has no rmap yet.
+	 * Calling try_to_unmap() against a page->mapping==NULL page is
+	 * BUG. So handle it here.
 	 */
+	if (!page->mapping)
+		goto rcu_unlock;
+	/* Establish migration ptes or remove ptes */
 	try_to_unmap(page, 1);
+
 	if (!page_mapped(page))
 		rc = move_to_new_page(newpage, page);

 	if (rc)
 		remove_migration_ptes(page, page);
+rcu_unlock:
+	rcu_read_unlock();

 unlock:
+
 	unlock_page(page);

 	if (rc != -EAGAIN) {