author    | Hugh Dickins <hugh@veritas.com>           | 2005-06-21 20:15:12 -0400
committer | Linus Torvalds <torvalds@ppc970.osdl.org> | 2005-06-21 21:46:21 -0400
commit    | c475a8ab625d567eacf5e30ec35d6d8704558062
tree      | 0971bef7b876f1b3eb160621fc2b61cb5313827b /mm/rmap.c
parent    | d296e9cd02c92e576ecce5344026a4df4353cdb2
[PATCH] can_share_swap_page: use page_mapcount
Remember that ironic get_user_pages race?  When the raised page_count on a
swapped-out page led do_wp_page to decide that it had to copy on write, it
substituted a different page into userspace.  2.6.7 onwards have Andrea's
solution, where try_to_unmap_one backs out if it finds page_count raised.
That works, but is unsatisfying (rmap.c has no other page_count heuristics),
and it was found a few months ago to hang an intensive page migration test.  A
year ago I was hesitant to engage page_mapcount; now it seems the right fix.
So remove the page_count hack from try_to_unmap_one, and use activate_page in
unuse_mm when dropping the page lock, to replace the hack's secondary effect of
helping swapoff to make progress in that case.
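The unuse_mm change itself is in mm/swapfile.c, which this diffstat (limited
to mm/rmap.c) does not show; the following is only a rough sketch of its
shape, not the patch text, assuming the 2.6-era helpers (mmap_sem, unuse_vma):

	static int unuse_mm(struct mm_struct *mm,
				swp_entry_t entry, struct page *page)
	{
		if (!down_read_trylock(&mm->mmap_sem)) {
			/*
			 * Activate the swapcache page so that reclaim is
			 * unlikely to unmap its remaining ptes while the page
			 * lock is dropped: this, rather than the removed
			 * page_count test in rmap.c, is what lets swapoff
			 * keep making progress here.
			 */
			activate_page(page);
			unlock_page(page);
			down_read(&mm->mmap_sem);
			lock_page(page);
		}
		/* ... walk mm->mmap with unuse_vma() as before ... */
		up_read(&mm->mmap_sem);
		return 0;
	}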
Simplify can_share_swap_page (now called only on anonymous pages) to check
page_mapcount + page_swapcount == 1: it still needs the page lock to stabilize
their (pessimistic) sum, but does not need swapper_space.tree_lock for that.
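can_share_swap_page lives in mm/swapfile.c, so it is also outside this
diffstat; as a rough sketch of the simplified check (not the patch text), with
the page lock providing the only stabilization needed:

	int can_share_swap_page(struct page *page)
	{
		int count;

		BUG_ON(!PageLocked(page));	/* page lock stabilizes the sum */
		count = page_mapcount(page);
		if (count <= 1 && PageSwapCache(page))
			count += page_swapcount(page);	/* swap references still held */
		return count == 1;
	}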
In do_swap_page, move swap_free and unlock_page below page_add_anon_rmap, to
keep the sum on the high side, and correct when can_share_swap_page is called.
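Again only a sketch (do_swap_page is in mm/memory.c, outside this diffstat,
and the exact pte helpers vary by kernel version), the resulting order is
roughly:

	pte = mk_pte(page, vma->vm_page_prot);
	if (write_access && can_share_swap_page(page)) {
		/* the sum is still on the high side, and exact, here */
		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
		write_access = 0;
	}

	flush_icache_page(vma, page);
	set_pte_at(mm, address, page_table, pte);
	page_add_anon_rmap(page, vma, address);

	/* only now give up the swap reference and the page lock */
	swap_free(entry);
	unlock_page(page);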
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'mm/rmap.c')
-rw-r--r--  mm/rmap.c | 21
1 file changed, 0 insertions, 21 deletions
@@ -539,27 +539,6 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma)
 		goto out_unmap;
 	}
 
-	/*
-	 * Don't pull an anonymous page out from under get_user_pages.
-	 * GUP carefully breaks COW and raises page count (while holding
-	 * page_table_lock, as we have here) to make sure that the page
-	 * cannot be freed. If we unmap that page here, a user write
-	 * access to the virtual address will bring back the page, but
-	 * its raised count will (ironically) be taken to mean it's not
-	 * an exclusive swap page, do_wp_page will replace it by a copy
-	 * page, and the user never get to see the data GUP was holding
-	 * the original page for.
-	 *
-	 * This test is also useful for when swapoff (unuse_process) has
-	 * to drop page lock: its reference to the page stops existing
-	 * ptes from being unmapped, so swapoff can make progress.
-	 */
-	if (PageSwapCache(page) &&
-	    page_count(page) != page_mapcount(page) + 2) {
-		ret = SWAP_FAIL;
-		goto out_unmap;
-	}
-
 	/* Nuke the page table entry. */
 	flush_cache_page(vma, address, page_to_pfn(page));
 	pteval = ptep_clear_flush(vma, address, pte);