path: root/arch/xtensa
author	Russell King <rmk+kernel@arm.linux.org.uk>	2009-12-18 11:40:18 -0500
committer	Russell King <rmk+kernel@arm.linux.org.uk>	2010-02-20 11:41:46 -0500
commit	4b3073e1c53a256275f1079c0fbfbe85883d9275 (patch)
tree	a0fa98cb75edbbc58c43bbe38ac4c6da0913ae6d /arch/xtensa
parent	ed42acaef1a9d51631a31b55e9ed52d400430492 (diff)
MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself
On VIVT ARM, when we have multiple shared mappings of the same file
in the same MM, we need to ensure that we have coherency across all
copies. We do this via make_coherent() by making the pages
uncacheable.

This used to work fine, until we allowed highmem with highpte - we now
have a page table which is mapped as required, and is not available for
modification via update_mmu_cache().

Ralf Baechle suggested getting rid of the PTE value passed to
update_mmu_cache():

  On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
  to construct a pointer to the pte again. Passing a pte_t * is much
  more elegant. Maybe we might even replace the pte argument with the
  pte_t?

Ben Herrenschmidt would also like the pte pointer for PowerPC:

  Passing the ptep in there is exactly what I want. I want that
  -instead- of the PTE value, because I have issue on some ppc cases,
  for I$/D$ coherency, where set_pte_at() may decide to mask out the
  _PAGE_EXEC.

So, pass in the mapped page table pointer into update_mmu_cache(), and
remove the PTE value, updating all implementations and call sites to
suit.

Includes a fix from Stephen Rothwell:

  sparc: fix fallout from update_mmu_cache API change

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
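For illustration only (this sketch is not part of the patch, and the
function body is hypothetical), the new calling convention lets an
architecture both read the entry and, where needed, rewrite it through
the same mapped page-table slot:

#include <linux/mm.h>
#include <asm/pgtable.h>

/*
 * Minimal sketch of an implementation under the new API. The core MM
 * now passes the mapped page-table pointer, so the entry can be read
 * here (the old API received it by value) and, on architectures such
 * as VIVT ARM, modified in place through the same pointer.
 */
void update_mmu_cache(struct vm_area_struct *vma,
		      unsigned long addr, pte_t *ptep)
{
	pte_t entry = *ptep;		/* old API: this arrived by value */
	unsigned long pfn = pte_pfn(entry);

	if (!pfn_valid(pfn))
		return;

	/* ... architecture-specific cache/TLB maintenance for pfn ... */
}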
Diffstat (limited to 'arch/xtensa')
-rw-r--r--  arch/xtensa/include/asm/pgtable.h  2
-rw-r--r--  arch/xtensa/mm/cache.c             4
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index a138770c358e..76bf35554117 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -394,7 +394,7 @@ ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 #define kern_addr_valid(addr)	(1)
 
 extern void update_mmu_cache(struct vm_area_struct * vma,
-		unsigned long address, pte_t pte);
+		unsigned long address, pte_t *ptep);
 
 /*
  * remap a physical page `pfn' of size `size' with page protection `prot'
diff --git a/arch/xtensa/mm/cache.c b/arch/xtensa/mm/cache.c
index 3ba990c67676..85df4655d326 100644
--- a/arch/xtensa/mm/cache.c
+++ b/arch/xtensa/mm/cache.c
@@ -147,9 +147,9 @@ void flush_cache_page(struct vm_area_struct* vma, unsigned long address,
 #endif
 
 void
-update_mmu_cache(struct vm_area_struct * vma, unsigned long addr, pte_t pte)
+update_mmu_cache(struct vm_area_struct * vma, unsigned long addr, pte_t *ptep)
 {
-	unsigned long pfn = pte_pfn(pte);
+	unsigned long pfn = pte_pfn(*ptep);
 	struct page *page;
 
 	if (!pfn_valid(pfn))
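A typical call site after this change pairs set_pte_at() with
update_mmu_cache() on the same mapped pointer, which is exactly the
property PowerPC asked for: if set_pte_at() masks out _PAGE_EXEC, the
architecture sees the entry as actually written rather than a stale
value. A simplified, hypothetical sketch of that pattern (not a
verbatim excerpt from core MM):

/*
 * Hypothetical helper showing the fault-handler pattern: write the
 * entry through the mapped slot, then hand the same pointer (not the
 * stale value) to update_mmu_cache().
 */
static void example_set_and_update(struct vm_area_struct *vma,
				   unsigned long address,
				   pte_t *ptep, pte_t entry)
{
	set_pte_at(vma->vm_mm, address, ptep, entry);
	update_mmu_cache(vma, address, ptep);	/* old API: (vma, address, entry) */
}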