author	Ken Chen <kenchen@google.com>	2007-02-08 17:20:27 -0500
committer	Linus Torvalds <torvalds@woody.linux-foundation.org>	2007-02-09 12:25:46 -0500
commit	6649a3863232eb2e2f15ea6c622bd8ceacf96d76 (patch)
tree	c3b77d20afd1e7215186244375f6cdcaad94d4b2 /mm/hugetlb.c
parent	f336953bfdee8d5e7f69cb8e080704546541f04b (diff)
[PATCH] hugetlb: preserve hugetlb pte dirty state
__unmap_hugepage_range() is buggy in that it does not preserve the dirty state of the huge_pte when unmapping a hugepage range. This causes data corruption when drop_caches is used by a sys admin. For example, an application creates a hugetlb file, modifies its pages, then unmaps it. While the hugetlb file is still alive, a sys admin comes along and does "echo 3 > /proc/sys/vm/drop_caches". drop_pagecache_sb() will happily free all pages that aren't marked dirty if there are no active mappings. Later, when the application remaps the hugetlb file, all of the data is gone, which is catastrophic for the application. Not only that, the internal resv_huge_pages count also gets messed up.

Fix it up by marking the page dirty appropriately.

Signed-off-by: Ken Chen <kenchen@google.com>
Cc: "Nish Aravamudan" <nish.aravamudan@gmail.com>
Cc: Adam Litke <agl@us.ibm.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: <stable@kernel.org>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
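For illustration, a minimal userspace sketch of the scenario described above. The /mnt/huge mount point, file name, and 2 MB huge page size are assumptions for this sketch; actual values depend on the system, and the drop_caches step must be run separately as root between the two mappings.

/* Hypothetical reproducer: dirty a hugetlb mapping, unmap it while
 * keeping the file alive, then remap after the admin has run
 * "echo 3 > /proc/sys/vm/drop_caches" and check whether the data
 * survived.  Assumes hugetlbfs mounted at /mnt/huge with 2 MB pages.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)

int main(void)
{
	int fd = open("/mnt/huge/testfile", O_CREAT | O_RDWR, 0600);
	if (fd < 0) { perror("open"); return 1; }

	char *p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) { perror("mmap"); return 1; }

	memset(p, 0xaa, HPAGE_SIZE);	/* dirty the huge page */
	munmap(p, HPAGE_SIZE);		/* pte dirty bit lost by the bug */

	/* ... sys admin runs "echo 3 > /proc/sys/vm/drop_caches" here;
	 * without the fix, the clean-looking page gets freed ... */

	p = mmap(NULL, HPAGE_SIZE, PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) { perror("mmap"); return 1; }
	printf("first byte after remap: 0x%02x (expect 0xaa)\n",
	       (unsigned char)p[0]);

	munmap(p, HPAGE_SIZE);
	close(fd);
	return 0;
}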
Diffstat (limited to 'mm/hugetlb.c')
-rw-r--r--	mm/hugetlb.c	2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cb362f761f1..36db012b38d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -389,6 +389,8 @@ void __unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 			continue;
 
 		page = pte_page(pte);
+		if (pte_dirty(pte))
+			set_page_dirty(page);
 		list_add(&page->lru, &page_list);
 	}
 	spin_unlock(&mm->page_table_lock);
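Why the two added lines are enough: the drop_caches invalidation path skips dirty pages, so once __unmap_hugepage_range() transfers the pte's dirty bit to the page with set_page_dirty(), drop_pagecache_sb() leaves it alone. A standalone distillation of that decision follows; the names page_state and can_drop_page are made up for this sketch and do not exist in the kernel.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative model of the drop_caches decision, not kernel source. */
struct page_state {
	bool dirty;	/* PG_dirty: data newer than backing store */
	bool mapped;	/* still mapped into some address space */
};

static bool can_drop_page(const struct page_state *pg)
{
	/* drop_caches only frees clean, unmapped pages; once
	 * set_page_dirty() has run, the page is skipped here and
	 * the application's data survives. */
	return !pg->dirty && !pg->mapped;
}

int main(void)
{
	struct page_state before_fix = { .dirty = false, .mapped = false };
	struct page_state after_fix  = { .dirty = true,  .mapped = false };

	printf("without fix, droppable: %d\n", can_drop_page(&before_fix));
	printf("with fix, droppable:    %d\n", can_drop_page(&after_fix));
	return 0;
}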