author    Nitin Gupta <ngupta@vflare.org>  2009-10-12 04:50:23 -0400
committer Russell King <rmk+kernel@arm.linux.org.uk>  2009-10-12 12:52:26 -0400
commit    787b2faadc4356b6c2c71feb42fb944fece9a12f
tree      e3acab624bb2de248a2e4f1e6293024200c8dc8c  /arch/arm/mm/fault-armv.c
parent    edc72786d208e77db94f84dcb0d166c0d23d82f7
ARM: force dcache flush if dcache_dirty bit set
On ARM, update_mmu_cache() performs a dcache flush for a page only if the page
has a kernel mapping (page_mapping(page) != NULL). The correct behaviour is to
force the flush based on the dcache_dirty bit alone. One case where the present
logic is a problem is when a RAM-based block device [1] is used as a swap disk.
In that case we get in-memory data corruption, as shown in the steps below:

do_swap_page()
{
    - Allocate a new page (if not already in the swap cache)
    - Issue a read from the swap disk
      - The block driver issues flush_dcache_page()
      - flush_dcache_page() simply sets the PG_dcache_dirty bit and does not
        actually issue a flush, since this page has no user space mapping yet.
    - Now, if the swap disk is almost full, this newly read page is removed
      from the swap cache and the corresponding swap slot is freed.
    - Map this page anonymously in user space.
    - update_mmu_cache()
      - Since this page does not have a kernel mapping (it is not in the
        page/swap cache and is mapped anonymously), it does not issue a dcache
        flush even if the dcache_dirty bit was set by flush_dcache_page()
        above.

    <the user now gets stale data since the dcache was never flushed>
}

The same problem exists on MIPS too.

[1] examples:
    - brd (RAM-based block device)
    - ramzswap (RAM-based compressed swap device)

Signed-off-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
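For clarity, here is a minimal sketch of the check the commit message describes,
before and after the change. This is an illustrative simplification, not the
kernel's actual code: old_flush_check() and new_flush_check() are hypothetical
helpers standing in for the relevant part of update_mmu_cache(); the identifiers
they call (PG_dcache_dirty, test_and_clear_bit, __flush_dcache_page) are the
real ones appearing in the diff below.

/*
 * Illustrative sketch only; old_flush_check/new_flush_check are hypothetical
 * helpers, not real kernel functions.
 */

/* Old behaviour: the deferred flush is honoured only when the page has a
 * kernel mapping, so an anonymous page freshly read from a RAM-backed swap
 * device keeps stale dcache lines. */
static void old_flush_check(struct address_space *mapping, struct page *page)
{
	if (mapping) {
		if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
			__flush_dcache_page(mapping, page);
	}
}

/* New behaviour: honour PG_dcache_dirty regardless of page_mapping(), so the
 * flush deferred by flush_dcache_page() is never lost. */
static void new_flush_check(struct address_space *mapping, struct page *page)
{
	if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
		__flush_dcache_page(mapping, page);
}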
Diffstat (limited to 'arch/arm/mm/fault-armv.c')
 arch/arm/mm/fault-armv.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c
index bc0099d5ae85..d0d17b6a3703 100644
--- a/arch/arm/mm/fault-armv.c
+++ b/arch/arm/mm/fault-armv.c
@@ -153,14 +153,11 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
 
 	page = pfn_to_page(pfn);
 	mapping = page_mapping(page);
-	if (mapping) {
 #ifndef CONFIG_SMP
-		int dirty = test_and_clear_bit(PG_dcache_dirty, &page->flags);
-
-		if (dirty)
-			__flush_dcache_page(mapping, page);
+	if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
+		__flush_dcache_page(mapping, page);
 #endif
-
+	if (mapping) {
 		if (cache_is_vivt())
 			make_coherent(mapping, vma, addr, pfn);
 		else if (vma->vm_flags & VM_EXEC)
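Reconstructed from the context and '+' lines of the hunk above, the resulting
fragment of update_mmu_cache() reads as follows; the hunk, and therefore this
sketch, is truncated after the VM_EXEC test:

	page = pfn_to_page(pfn);
	mapping = page_mapping(page);
#ifndef CONFIG_SMP
	/* Flush a deferred dirty dcache even for pages with no kernel mapping. */
	if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
		__flush_dcache_page(mapping, page);
#endif
	if (mapping) {
		if (cache_is_vivt())
			make_coherent(mapping, vma, addr, pfn);
		else if (vma->vm_flags & VM_EXEC)
			/* ... hunk ends here ... */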