author		Peter Zijlstra <a.p.zijlstra@chello.nl>	2008-07-04 12:59:24 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2008-07-04 13:40:04 -0400
commit		251b97f552b1ad414cc5a9ccc8e4e94503edd5fc
tree		5d7559154edb8eb2069f39b6be99ffc2484580cd /mm
parent		cde53535991fbb5c34a1566f25955297c1487b8d
mm: dirty page accounting vs VM_MIXEDMAP
Dirty page accounting accurately measures the amount of dirty pages in
writable shared mappings by mapping the pages RO (as indicated by
vma_wants_writenotify). We then trap on first write and call
set_page_dirty() on the page, after which we map the page RW and
continue execution.
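In rough sketch form (illustrative helper name, not the literal
mm/memory.c code), the first-write trap amounts to:

	/* Sketch: the first store to a write-notify page faults into here. */
	static void notify_first_write(struct vm_area_struct *vma,
				       unsigned long address, pte_t *ptep,
				       struct page *page)
	{
		set_page_dirty(page);	/* account the newly dirtied page */
		flush_cache_page(vma, address, pte_pfn(*ptep));
		/* remap RW so further stores proceed without faulting */
		set_pte_at(vma->vm_mm, address, ptep,
			   pte_mkwrite(pte_mkdirty(*ptep)));
	}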
When we launder dirty pages, we call clear_page_dirty_for_io() which
clears both the dirty flag, and maps the page RO again before we start
writeout so that the story can repeat itself.
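Sketched the same way (launder_one_page is a hypothetical wrapper;
the real callers live in the writeback code):

	static int launder_one_page(struct page *page,
				    struct writeback_control *wbc)
	{
		struct address_space *mapping = page_mapping(page);

		/*
		 * clear_page_dirty_for_io() clears PageDirty and
		 * write-protects the page's PTEs, so the next store
		 * faults and restarts the accounting cycle.
		 */
		if (clear_page_dirty_for_io(page))
			return mapping->a_ops->writepage(page, wbc);
		return 0;
	}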
vma_wants_writenotify() excludes VM_PFNMAP on the basis that we cannot
do the regular dirty page stuff on raw PFNs and the memory isn't going
anywhere anyway.
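Abridged (a couple of the real tests are elided here), the check
reads roughly:

	int vma_wants_writenotify(struct vm_area_struct *vma)
	{
		unsigned int vm_flags = vma->vm_flags;

		/* Only shared writable mappings are interesting. */
		if ((vm_flags & (VM_WRITE|VM_SHARED)) != (VM_WRITE|VM_SHARED))
			return 0;

		/* The backer wants to hear about first writes. */
		if (vma->vm_ops && vma->vm_ops->page_mkwrite)
			return 1;

		/* Raw PFN maps: no struct page, nothing to account. */
		if (vm_flags & VM_PFNMAP)
			return 0;

		/* Can the backing mapping account dirty pages? */
		return vma->vm_file && vma->vm_file->f_mapping &&
			mapping_cap_account_dirty(vma->vm_file->f_mapping);
	}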
The recently introduced VM_MIXEDMAP mixes both !pfn_valid() and
pfn_valid() pages in a single mapping.
We can't do dirty page accounting on !pfn_valid() pages as stated
above, and mapping them RO causes them to be COW'ed on write, which
breaks VM_SHARED semantics.
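Concretely, vm_normal_page() returns NULL for such a pte, and before
this patch do_wp_page() sent every NULL return straight down the COW
path:

	old_page = vm_normal_page(vma, address, orig_pte);
	if (!old_page)
		goto gotten;	/* COW: wrong for shared !pfn_valid() pages */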
Excluding VM_MIXEDMAP in vma_wants_writenotify() would mean we don't do
the regular dirty page accounting for the pfn_valid() pages, which
would bring back all the headaches from inaccurate dirty page
accounting.
So instead, we let the !pfn_valid() pages get mapped RO, but fix them
up unconditionally in the fault path.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Acked-by: Hugh Dickins <hugh@veritas.com>
Cc: "Jared Hulbert" <jaredeh@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
 mm/memory.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index d14b251a25a6..350e646032f5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1697,8 +1697,19 @@ static int do_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	struct page *dirty_page = NULL;
 
 	old_page = vm_normal_page(vma, address, orig_pte);
-	if (!old_page)
+	if (!old_page) {
+		/*
+		 * VM_MIXEDMAP !pfn_valid() case
+		 *
+		 * We should not cow pages in a shared writeable mapping.
+		 * Just mark the pages writable as we can't do any dirty
+		 * accounting on raw pfn maps.
+		 */
+		if ((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
+				     (VM_WRITE|VM_SHARED))
+			goto reuse;
 		goto gotten;
+	}
 
 	/*
 	 * Take out anonymous pages first, anonymous shared vmas are
@@ -1751,6 +1762,7 @@ static int do_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	}
 
 	if (reuse) {
+reuse:
 		flush_cache_page(vma, address, pte_pfn(orig_pte));
 		entry = pte_mkyoung(orig_pte);
 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);