path: root/arch/sh/mm/cache-sh7705.c
* sh: Mass ctrl_in/outX to __raw_read/writeX conversion. (Paul Mundt, 2010-01-25)

    The old ctrl in/out routines are non-portable and unsuitable for
    cross-platform use. While drivers/sh has already been sanitized, there
    is still quite a lot of code that is not. This converts the arch/sh/
    bits over, which permits us to flag the routines as deprecated whilst
    still building with -Werror for the architecture code, and to ensure
    that future users are not added.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
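    A minimal sketch of what the conversion looks like at a call site; the
    CCR register macro and the CCR_CACHE_ENABLE flag are assumptions used
    purely for illustration, not taken from this commit:

        #include <asm/io.h>

        /* Before: SH-only accessors, now deprecated */
        static inline void cache_enable_old(void)
        {
                ctrl_outl(ctrl_inl(CCR) | CCR_CACHE_ENABLE, CCR);
        }

        /* After: generic raw MMIO accessors shared with other ports */
        static inline void cache_enable_new(void)
        {
                __raw_writel(__raw_readl(CCR) | CCR_CACHE_ENABLE, CCR);
        }
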
* sh: Kill off the special uncached section and fixmap. (Paul Mundt, 2010-01-21)

    Now that cached_to_uncached works as advertised in 32-bit mode and
    we're never going to be able to map < 16MB anyway, there's no need for
    the special uncached section. Kill it off.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Obliterate the P1 area macros (Matt Fleming, 2009-10-10)

    Replace the use of PHYSADDR() with __pa(). PHYSADDR() is based on the
    idea that all addresses in P1SEG are untranslated, so we can access an
    address's physical page as an offset from P1SEG. This doesn't work for
    CONFIG_PMB/CONFIG_PMB_FIXED because pages in P1SEG and P2SEG are used
    for PMB mappings and so can be translated to any physical address.
    Likewise, replace a P1SEGADDR() use with virt_to_phys().

    Signed-off-by: Matt Fleming <matt@console-pimps.org>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
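    A hedged sketch of the substitution, where vaddr stands in for an
    arbitrary kernel virtual address:

        /* Before: only valid if vaddr sits in the identity-mapped P1 segment */
        unsigned long phys_old = PHYSADDR(vaddr);

        /* After: correct even when the address is covered by a PMB mapping */
        unsigned long phys_new = __pa(vaddr);

        /* and the P1SEGADDR() use becomes: */
        unsigned long phys_alt = virt_to_phys((void *)vaddr);
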
* sh: Sprinkle __uses_jump_to_uncached (Matt Fleming, 2009-10-08)

    Fix some callers of jump_to_uncached() and back_to_cached() that were
    not annotated with __uses_jump_to_uncached.

    Signed-off-by: Matt Fleming <matt@console-pimps.org>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
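    The pattern being annotated, sketched in the style of cache-sh7705.c
    (the function body is abbreviated and only illustrative):

        static void __uses_jump_to_uncached cache_wback_all(void)
        {
                jump_to_uncached();     /* run from the uncached (P2) mapping */

                /* ... walk the cache address array, write back dirty lines ... */

                back_to_cached();       /* return to the cached mapping */
        }
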
* sh: Fix up sh7705 flush_dcache_page() build. (Paul Mundt, 2009-09-14)

    A type mismatch caused the page dereference to blow up; fix it up as
    per the sh4 change.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* Revert "sh: Kill off now redundant local irq disabling." (Paul Mundt, 2009-09-01)

    This reverts commit 64a6d72213dd810dd55bd0a503c36150af41c3c3.

    Unfortunately we can't use on_each_cpu() for all of the cache ops, as
    some of them only require preempt disabling. This seems to be the same
    issue that impacts the MIPS r4k caches, which this code was based on.
    This fixes up a deadlock that showed up in some IRQ context cases.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Kill off now redundant local irq disabling. (Paul Mundt, 2009-08-21)

    on_each_cpu() takes care of IRQ and preempt handling, so the localized
    handling in each of the called functions can be killed off.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Make cache flushers SMP-aware. (Paul Mundt, 2009-08-21)

    This does a bit of rework to make the cache flushers SMP-aware. The
    function pointer-based flushers are renamed to local variants, with
    the exported interface being commonly implemented and wrapping them as
    necessary.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
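    A rough sketch of the resulting split; the broadcast helper name is an
    assumption here, and the bodies are elided:

        /* CPU-specific file: the flusher becomes a local, per-CPU variant
         * taking a void * so it can double as an IPI callback. */
        static void sh7705_flush_dcache_page(void *arg)
        {
                struct page *page = arg;
                /* ... write back/invalidate the lines backing this page ... */
        }

        /* Common code: the exported interface wraps the local variant
         * (local_flush_dcache_page being a function pointer set to the
         * CPU-specific implementation at boot) and broadcasts to the
         * other CPUs when required. */
        void flush_dcache_page(struct page *page)
        {
                cacheop_on_each_cpu(local_flush_dcache_page, page, 1);
        }
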
* sh: Convert SH7705 extended mode to new cacheflush interface. (Paul Mundt, 2009-08-14)

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Migrate from PG_mapped to PG_dcache_dirty. (Paul Mundt, 2009-07-22)

    This inverts the delayed dcache flush a bit to be more in line with
    other platforms. At the same time this also gives us the ability to do
    some more optimizations and cleanup.

    Now that the update_mmu_cache() callsite only tests for the bit, the
    implementation can gradually be split out and made generic, rather
    than relying on special implementations for each of the peculiar CPU
    types. SH7705 in 32kB mode and SH-4 still need slightly different
    handling, but this is something that can remain isolated in the
    varying page copy/clear routines.

    On top of that, SH-X3 is dcache coherent, so there is no need to
    bother with any of these tests in the PTEAEX version of
    update_mmu_cache(), so we kill that off too.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
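    A hedged sketch of the inverted scheme: flush_dcache_page() defers the
    writeback by flagging the page, and update_mmu_cache() only tests (and
    clears) the flag before flushing. The local flusher name is
    illustrative:

        void flush_dcache_page(struct page *page)
        {
                struct address_space *mapping = page_mapping(page);

                if (mapping && !mapping_mapped(mapping))
                        set_bit(PG_dcache_dirty, &page->flags);
                else
                        __flush_dcache_page(page);  /* illustrative local flusher */
        }

        /* In update_mmu_cache(), perform the deferred flush if flagged: */
        if (pfn_valid(pfn)) {
                struct page *page = pfn_to_page(pfn);

                if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
                        __flush_dcache_page(page);
        }
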
* sh: Preparation for uncached jumps through PMB. (Stuart Menefy, 2008-01-27)

    Presently most of the 29-bit physical parts do P1/P2 segmentation with
    a 1:1 cached/uncached mapping, jumping between the two to control the
    caching behaviour. This provides the basic infrastructure to maintain
    this behaviour on 32-bit physical parts that don't map P1/P2 at all,
    using a shiny new linker section and corresponding fixmap entry.

    Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Revert lazy dcache writeback changes. (Paul Mundt, 2007-03-05)

    These ended up causing too many problems on older parts; revert for
    now.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Fixup cpu_data references for the non-boot CPUs. (Paul Mundt, 2007-02-12)

    There are a lot of bogus cpu_data-> references that only end up
    working for the boot CPU; convert these to current_cpu_data to fix up
    SMP.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
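    The shape of the fix, using the dcache geometry as an assumed,
    illustrative example of such a reference:

        /* Before: always reads the boot CPU's probe data, which is wrong
         * when run on a secondary CPU */
        ways = cpu_data->dcache.ways;

        /* After: resolves to the data of the CPU actually executing */
        ways = current_cpu_data.dcache.ways;
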
* sh: Lazy dcache writeback optimizations. (Paul Mundt, 2007-02-12)

    This converts the lazy dcache handling to the model described in
    Documentation/cachetlb.txt and drops the ptep_get_and_clear() hacks
    used for the aliasing dcaches on SH-4 and SH7705 in 32kB mode. As a
    bonus, this slightly cuts down on the cache flushing frequency.

    With that and the PTEA handling out of the way, the update_mmu_cache()
    implementations can be consolidated, and we no longer have to worry
    about which configuration the cache is in for the SH7705 case.

    And finally, explicitly disable the lazy writeback on SMP (SH-4A).

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: More cosmetic cleanups and trivial fixes. (Paul Mundt, 2006-09-27)

    Nothing exciting here, just trivial fixes.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* [PATCH] Standardize pxx_page macros (Dave McCracken, 2006-09-26)

    One of the changes necessary for shared page tables is to standardize
    the pxx_page macros. pte_page and pmd_page have always returned the
    struct page associated with their entry, while pte_page_kernel and
    pmd_page_kernel have returned the kernel virtual address. pud_page and
    pgd_page, on the other hand, return the kernel virtual address.

    Shared page tables need pud_page and pgd_page to return the actual
    page structures. There are very few actual users of these functions,
    so it is simple to standardize their usage.

    Since this is basic cleanup, I am submitting these changes as a
    standalone patch. Per Hugh Dickins' comments about it, I am also
    changing the pxx_page_kernel macros to pxx_page_vaddr to clarify their
    meaning.

    Signed-off-by: Dave McCracken <dmccr@us.ibm.com>
    Cc: Hugh Dickins <hugh@veritas.com>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>
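    In terms of usage, the distinction after the rename looks roughly like
    this (the pte_index()/pmd_page_vaddr() combination is an illustrative
    assumption, not code from the patch):

        /* pxx_page() returns the struct page backing the entry: */
        struct page *pg = pmd_page(*pmd);

        /* pxx_page_vaddr() (formerly pxx_page_kernel) returns the kernel
         * virtual address of the page table the entry points to: */
        pte_t *pte = (pte_t *)pmd_page_vaddr(*pmd) + pte_index(addr);
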
* Linux-2.6.12-rc2 (tag: v2.6.12-rc2) (Linus Torvalds, 2005-04-16)
    Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!