path: root/arch/sparc64/mm
Commit message (Author, Age)
* [PATCH] sparc64: fix set_page_count merge clash (Nick Piggin, 2006-03-23)
    A merge clash broke sparc64. Sync up its online_page implementation with powerpc, which was identical until the set_page_count removal.
    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>
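    For reference, a hedged sketch of the powerpc-style online_page() being synced to, now using init_page_count() (a reconstruction, not a verbatim copy of either tree):

        void online_page(struct page *page)
        {
            ClearPageReserved(page);    /* hand the hot-added page to the allocator */
            init_page_count(page);      /* was set_page_count(page, 1) before the removal */
            __free_page(page);
            totalram_pages++;
            num_physpages++;
        }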
* Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6 (Linus Torvalds, 2006-03-22)
    * master.kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6:
      [SPARC64]: Add a secondary TSB for hugepage mappings.
      [SPARC]: Respect vm_page_prot in io_remap_page_range().
| * [SPARC64]: Add a secondary TSB for hugepage mappings. (David S. Miller, 2006-03-22)
    Signed-off-by: David S. Miller <davem@davemloft.net>
| * [SPARC]: Respect vm_page_prot in io_remap_page_range(). (David S. Miller, 2006-03-22)
    Make sure the callers do a pgprot_noncached() on vma->vm_page_prot.
    Pointed out by Hugh Dickins.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | [PATCH] hugepage: is_aligned_hugepage_range() cleanup (David Gibson, 2006-03-22)
    Quite a long time back, prepare_hugepage_range() replaced is_aligned_hugepage_range() as the callback from mm/mmap.c to arch code to verify if an address range is suitable for a hugepage mapping. is_aligned_hugepage_range() stuck around, but only to implement prepare_hugepage_range() on archs which didn't implement their own.

    Most archs (everything except ia64 and powerpc) used the same implementation of is_aligned_hugepage_range(). On powerpc, which implements its own prepare_hugepage_range(), the custom version was never used.

    In addition, "is_aligned_hugepage_range()" was a bad name, because it suggests it returns true iff the given range is a good hugepage range, whereas in fact it returns 0-or-error (so the sense is reversed).

    This patch cleans up by abolishing is_aligned_hugepage_range(). Instead prepare_hugepage_range() is defined directly. Most archs use the default version, which simply checks the given region is aligned to the size of a hugepage. ia64 and powerpc define custom versions. The ia64 one simply checks that the range is in the correct address space region in addition to being suitably aligned. The powerpc version (just as previously) checks for suitable addresses, and if necessary performs low-level MMU frobbing to set up new areas for use by hugepages.

    No libhugetlbfs testsuite regressions on ppc64 (POWER5 LPAR).

    Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
    Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: William Lee Irwin III <wli@holomorphy.com>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>
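    The default version described above is tiny; a sketch (this matches the generic shape, assuming the usual HPAGE_MASK definition):

        /* Returns 0 if the range can hold hugepages, -EINVAL otherwise
         * (note the 0-or-error convention the old name obscured). */
        static inline int prepare_hugepage_range(unsigned long addr,
                                                 unsigned long len)
        {
            if (len & ~HPAGE_MASK)      /* length: hugepage multiple */
                return -EINVAL;
            if (addr & ~HPAGE_MASK)     /* start: hugepage aligned  */
                return -EINVAL;
            return 0;
        }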
* | [PATCH] remove set_page_count() outside mm/ (Nick Piggin, 2006-03-22)
    set_page_count() usage outside mm/ is limited to setting the refcount to 1. Remove set_page_count() from outside mm/, and replace those users with init_page_count() and set_page_refcounted(). This allows more debug checking, and tighter control on how code is allowed to play around with page->_count.
    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>
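    A sketch of the two replacement helpers named above, as they look in mm/ of this era (modulo exact debug hooks):

        static inline void init_page_count(struct page *page)
        {
            atomic_set(&page->_count, 1);       /* fresh page: refcount one */
        }

        static inline void set_page_refcounted(struct page *page)
        {
            BUG_ON(atomic_read(&page->_count)); /* must not be in use yet */
            set_page_count(page, 1);            /* only mm/ may call this now */
        }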
* [SPARC64]: Fix 2 bugs in huge page support. (David S. Miller, 2006-03-20)
    1) huge_pte_offset() did not check the page table hierarchy elements as being empty correctly, resulting in an OOPS.
    2) Need platform specific hugetlb_get_unmapped_area() to handle the top-down vs. bottom-up address space allocation strategies.
    Signed-off-by: David S. Miller <davem@davemloft.net>
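    A hedged reconstruction of the corrected lookup for bug 1, checking each page table level for emptiness before descending (generic page-table API names):

        pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
        {
            pgd_t *pgd;
            pud_t *pud;
            pmd_t *pmd;
            pte_t *pte = NULL;

            addr &= HPAGE_MASK;

            pgd = pgd_offset(mm, addr);
            if (!pgd_none(*pgd)) {      /* test every level, not just the last */
                pud = pud_offset(pgd, addr);
                if (!pud_none(*pud)) {
                    pmd = pmd_offset(pud, addr);
                    if (!pmd_none(*pmd))
                        pte = pte_offset_map(pmd, addr);
                }
            }
            return pte;
        }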
* [SPARC64]: Optimized TSB table initialization. (David S. Miller, 2006-03-20)
    We only need to write an invalid tag every 16 bytes, so taking advantage of this can save many instructions compared to the simple memset() call we make now. A prefetching implementation is provided for sun4u and a block-init store version is implemented for Niagara.
    The next trick is to be able to perform an init and a copy_tsb() in parallel when growing a TSB table.
    Signed-off-by: David S. Miller <davem@davemloft.net>
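    A minimal C sketch of the trick, assuming 16-byte TSB entries of {tag, pte} where only the tag word needs an invalid value (the real routines are hand-written assembler using prefetch on sun4u and block-init stores on Niagara):

        static void tsb_init_sketch(unsigned long *tsb, unsigned long nbytes,
                                    unsigned long invalid_tag)
        {
            unsigned long i, nwords = nbytes / sizeof(unsigned long);

            /* Touch only every other 64-bit word (the tag); the pte
             * word may hold garbage as long as the tag never matches. */
            for (i = 0; i < nwords; i += 2)
                tsb[i] = invalid_tag;
        }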
* [SPARC64]: Allow CONFIG_MEMORY_HOTPLUG to build. (David S. Miller, 2006-03-20)
    online_page() is straightforward, and then add a dummy remove_memory() that returns -EINVAL just like i386. There is no point in implementing remove_memory() since __remove_pages() has no implementation either.
    Signed-off-by: David S. Miller <davem@davemloft.net>
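    The dummy remove_memory() is a one-liner; a sketch matching the described i386 behavior:

        int remove_memory(u64 start, u64 size)
        {
            /* Offlining is unsupported; __remove_pages() is a stub too. */
            return -EINVAL;
        }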
* [SPARC64]: Use SLAB caches for TSB tables. (David S. Miller, 2006-03-20)
    Signed-off-by: David S. Miller <davem@davemloft.net>
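    A hedged sketch of per-size TSB caches with the 2.6.16-era SLAB API (the cache count and naming are assumptions):

        static kmem_cache_t *tsb_caches[8];     /* 8K ... 1M, one cache per size */

        void __init tsb_cache_init(void)
        {
            static char names[8][16];   /* SLAB keeps the name pointer */
            unsigned long i;

            for (i = 0; i < 8; i++) {
                unsigned long size = 8192UL << i;

                snprintf(names[i], sizeof(names[i]), "tsb_%luK", size >> 10);
                tsb_caches[i] = kmem_cache_create(names[i], size, size,
                                                  0, NULL, NULL);
                if (!tsb_caches[i])
                    prom_halt();        /* fatal during early boot */
            }
        }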
* [SPARC64]: Don't kill the page allocator when growing a TSB. (David S. Miller, 2006-03-20)
    Try only lightly on > 1 order allocations. If a grow fails, we are under memory pressure, so do not try to grow the TSB for this address space any more.
    If a > 0 order TSB allocation fails on a new fork, retry using a 0 order allocation.
    Signed-off-by: David S. Miller <davem@davemloft.net>
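    A sketch of the "try lightly" policy (flag choice per the description above; tsb_cache_for() is a hypothetical helper picking the right SLAB cache):

        static unsigned long *tsb_alloc_sketch(unsigned long size)
        {
            gfp_t gfp_flags = GFP_KERNEL;

            /* For multi-page TSBs, fail fast and quietly rather than
             * pushing the page allocator into heavy reclaim. */
            if (size > PAGE_SIZE)
                gfp_flags = __GFP_NOWARN | __GFP_NORETRY;

            return kmem_cache_alloc(tsb_cache_for(size), gfp_flags);
        }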
* [SPARC64]: Fix and re-enable dynamic TSB sizing. (David S. Miller, 2006-03-20)
    This is good for up to a 50% performance improvement on some test cases. The problem has been the race conditions, and hopefully I've plugged them all up here.

    1) There was a serious race in switch_mm() wrt. lazy TLB switching to and from kernel threads. We could erroneously skip a tsb_context_switch() and thus use a stale TSB across a TSB grow event. There is a big comment now in that function describing exactly how it can happen.

    2) All code paths that do something with the TSB need to be guarded with the mm->context.lock spinlock. This makes page table flushing paths properly synchronize with both TSB growing and TLB context changes.

    3) TSB growing events are moved to the end of successful fault processing. Previously it was in update_mmu_cache() but that is deadlock prone. At the end of do_sparc64_fault() we hold no spinlocks that could deadlock the TSB grow sequence. We also have dropped the address space semaphore.

    While we're here, add prefetching to the copy_tsb() routine and put it in assembler into the tsb.S file. This piece of code is quite time critical.

    There are some small negative side effects to this code which can be improved upon. In particular, we grab the mm->context.lock even for the TSB insert done by update_mmu_cache() now, and that's a bit excessive. We can get rid of that locking, and the same lock taking in flush_tsb_user(), by disabling PSTATE_IE around the whole operation, including the capturing of the tsb pointer and tsb_nentries value. That would work because anyone growing the TSB won't free up the old TSB until all cpus respond to the TSB change cross call. I'm not quite confident enough in that optimization to put it in right now, but eventually we might be able to, and the description is here for reference.

    This code seems very solid now. It passes several parallel GCC bootstrap builds, and our favorite "nut cruncher" stress test, which is a full "make -j8192" build of a "make allmodconfig" kernel. That puts about 256 processes on each cpu's run queue, makes lots of process cpu migrations occur, causes lots of page table and TLB flushing activity, incurs many context version number changes, and swaps the machine real far out to disk even though there is 16GB of ram on this test system. :-)

    Signed-off-by: David S. Miller <davem@davemloft.net>
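    A sketch of point 3, growing the TSB at the tail of do_sparc64_fault() once every lock is dropped (structure per the text above; surrounding fault code elided):

        /* ...tail of do_sparc64_fault(), sketch... */
        unsigned long mm_rss;

        up_read(&mm->mmap_sem);         /* address space semaphore dropped */

        mm_rss = get_mm_rss(mm);
        if (unlikely(mm_rss >= mm->context.tsb_rss_limit))
            tsb_grow(mm, mm_rss);       /* safe: no spinlocks held here */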
* [SPARC64]: Fix 32-bit truncation which broke sparsemem. (David S. Miller, 2006-03-20)
    The page->flags manipulations done by the D-cache dirty state tracking were broken because the constants were not marked with "UL" to make them 64-bit, which means we were clobbering the upper 32 bits of page->flags all the time. This doesn't jive well with sparsemem, which stores the section and indexing information in the top 32 bits of page->flags. This is yet another sparc64 bug which has been with us forever.

    While we're here, tidy up some things in bootmem_init() and paging_init():

    1) Pass min_low_pfn to init_bootmem_node(); it's identical to (phys_base >> PAGE_SHIFT), but we should be consistent with the variable names we print in CONFIG_BOOTMEM_DEBUG.

    2) max_mapnr, although no longer used, was being set inaccurately; we shouldn't subtract pfn_base any more.

    3) All the games with phys_base in the zones_*[] arrays we pass to free_area_init_node() are no longer necessary.

    Thanks to Josh Grebe and Fabbione for the bug reports and testing. The fix was also verified locally on an SB2500 which had a memory layout that triggered the same problem.

    Signed-off-by: David S. Miller <davem@davemloft.net>
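    The truncation at the heart of the bug, in miniature (generic C, not the actual sparc64 macros):

        unsigned long clear_dcache_bits(unsigned long flags)
        {
            /* Buggy: ~0xff000000 is computed in 32 bits and then
             * zero-extended, so the AND also wipes bits 32-63 of
             * page->flags, where sparsemem keeps section info. */
            flags &= ~0xff000000;

            /* Correct: UL keeps the complement 64 bits wide, leaving
             * the upper half of flags intact. */
            flags &= ~0xff000000UL;

            return flags;
        }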
* [SPARC64]: Move over to sparsemem. (David S. Miller, 2006-03-20)
    This has been pending for a long time, and the fact that we waste a ton of ram on some configurations kind of pushed things over the edge.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix new context version SMP handling. (David S. Miller, 2006-03-20)
    Don't piggyback on the SMP receive-signal code to do the context version change handling. Instead allocate another fixed PIL number for this asynchronous cross-call. We can't use smp_call_function() because this thing is invoked with interrupts disabled and a few spinlocks held.
    Also, fix smp_call_function_mask() to count "cpus" correctly. There is no guarantee that the local cpu is in the mask, yet that is exactly what this code was assuming.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Bulletproof MMU context locking. (David S. Miller, 2006-03-20)
    1) Always spin_lock_init() in init_context(). The caller essentially clears it out, or copies the mm info from the parent. In both cases we need to explicitly initialize the spinlock.
    2) Always do explicit IRQ disabling while taking mm->context.lock and ctx_alloc_lock. (See the sketch below.)
    Signed-off-by: David S. Miller <davem@davemloft.net>
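    The pattern from point 2, sketched with the generic spinlock API (the call site is simplified):

        static void context_op_sketch(struct mm_struct *mm)
        {
            unsigned long flags;

            /* IRQs must be off: the version-change cross call can take
             * the same lock from interrupt context. */
            spin_lock_irqsave(&mm->context.lock, flags);
            /* ... TSB/context manipulation ... */
            spin_unlock_irqrestore(&mm->context.lock, flags);
        }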
* [SPARC64]: Fix loop termination in mark_kpte_bitmap(). (David S. Miller, 2006-03-20)
    If we were aligned, but didn't have at least 256MB left to process, we would loop forever. Thanks to fabbione for the report and testing the fix.
    Signed-off-by: David S. Miller <davem@davemloft.net>
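    A reconstruction of the corrected loop shape from the description: test the remaining size before the alignment skip, so an aligned tail shorter than 256MB terminates (details such as the bitmap indexing are simplified):

        static void __init mark_kpte_bitmap(unsigned long start, unsigned long end)
        {
            const unsigned long size = 256UL * 1024 * 1024;

            while (start < end) {
                unsigned long remains = end - start;

                if (remains < size)
                    break;              /* the missing termination case */

                if (start & (size - 1UL)) {
                    /* Skip forward to the next 256MB boundary. */
                    start = (start + size) & ~(size - 1UL);
                    continue;
                }

                set_bit(start >> 28, kpte_linear_bitmap);   /* one bit per 256MB */
                start += size;
            }
        }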
* [SPARC64]: Simplify TSB insert checks. (David S. Miller, 2006-03-20)
    Don't try to avoid putting non-base page sized entries into the user TSB. It actually costs us more to check this than it helps.
    Eventually we'll have a multiple TSB scheme for user processes. Once a process starts using larger pages, we'll allocate and use such a TSB.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Avoid dcache-dirty page state management on sun4v. (David S. Miller, 2006-03-20)
    It is totally wasted work, since we have no D-cache aliasing issues on sun4v.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Bulletproof hypervisor TLB flushing. (David S. Miller, 2006-03-20)
    Check TLB flush hypervisor calls for errors and report them. Pass HV_MMU_ALL always for now; we can add back the optimization to avoid the I-TLB flush later.
    Always explicitly page align the virtual address arguments.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: destroy_context() needs to disable interrupts. (David S. Miller, 2006-03-20)
    get_new_mmu_context() can be invoked from interrupt context now for the new SMP version wrap handling. So disable interrupts while taking ctx_alloc_lock in destroy_context() so we don't deadlock.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix TLB context allocation with SMT style shared TLBs. (David S. Miller, 2006-03-20)
    The context allocation scheme we use depends upon there being a 1<-->1 mapping from cpu to physical TLB for correctness. Chips like Niagara break this assumption.
    So what we do is notify all cpus with a cross call when the context version number changes, and if necessary this makes them allocate a valid context for the address space they are running at the time.
    Stress tested with make -j1024, make -j2048, and make -j4096 kernel builds on a 32-strand, 8 core, T2000 with 16GB of ram.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Export _PAGE_E and _PAGE_CACHE to modules. (David S. Miller, 2006-03-20)
    SBUS flash driver needs it. Noticed by Fabbione.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Create a separate kernel TSB for 4MB/256MB mappings. (David S. Miller, 2006-03-20)
    It can map all of the linear kernel mappings with zero TSB hash conflicts for systems with 16GB or less ram. In such cases, on SUN4V, once we load up this TSB the first time with all the mappings, we never take a linear kernel mapping TLB miss ever again; the hypervisor handles them all.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Make use of Niagara 256MB PTEs for kernel mappings. (David S. Miller, 2006-03-20)
    We use a bitmap, one bit for every 256MB of memory. If the bit is set we can use a 256MB PTE for linear mappings, else we have to use a 4MB PTE. (See the sketch below.)
    SUN4V support is there, and we can very easily add support for Panther cpu 256MB PTEs in the future.
    Signed-off-by: David S. Miller <davem@davemloft.net>
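    The bitmap test sketched (kpte_linear_bitmap is the name used in the tree; the helper is illustrative):

        extern unsigned long kpte_linear_bitmap[];

        /* One bit per 256MB (1 << 28 bytes) chunk of the linear area:
         * set means the chunk can be mapped with a single 256MB PTE,
         * clear means fall back to 4MB PTEs. */
        static inline int use_256mb_pte(unsigned long paddr)
        {
            return test_bit(paddr >> 28, kpte_linear_bitmap);
        }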
* [SPARC64]: Export a PAGE_SHARED symbol. (David S. Miller, 2006-03-20)
    For drivers/media/*, noticed by Fabbione.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: More TLB/TSB handling fixes. (David S. Miller, 2006-03-20)
    The SUN4V convention with non-shared TSBs is that the context bit of the TAG is clear. So we have to choose an "invalid" bit and initialize new TSBs appropriately. Otherwise a zero TAG looks "valid".
    Make sure, for the window fixup cases, that we use the right global registers and that we don't potentially trample on the live global registers in etrap/rtrap handling (%g2 and %g6) and that we put the missing virtual address properly in %g5.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Check for errors in hypervisor_tlb_lock(). (David S. Miller, 2006-03-20)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Set associativity of kernel TSB descriptor correctly. (David S. Miller, 2006-03-20)
    It should be 1, not 0.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Use phys tsb address in tsb_insert() in SUN4V. (David S. Miller, 2006-03-20)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix flush_tsb_user() on SUN4V. (David S. Miller, 2006-03-20)
    Needs to use physical addressing just like cheetah_plus.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix comment typo in __flush_tlb_kernel_range. (David S. Miller, 2006-03-20)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Log faulting vaddr when bogus kernel PC detected. (David S. Miller, 2006-03-20)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Use inline patching for critical PTE operations. (David S. Miller, 2006-03-20)
    This handles the SUN4U vs SUN4V PTE layout differences with near zero performance cost.
    Signed-off-by: David S. Miller <davem@davemloft.net>
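    A hedged sketch of the mechanism: each critical PTE operation records its address and a replacement instruction sequence in a special section, and early boot overwrites the sun4u code with the sun4v version (struct names follow the sparc64 headers of this era; the loop is simplified):

        struct sun4v_2insn_patch_entry {
            unsigned int addr;          /* address of the code to patch */
            unsigned int insns[2];      /* replacement sun4v instructions */
        };

        extern struct sun4v_2insn_patch_entry __sun4v_2insn_patch,
                                              __sun4v_2insn_patch_end;

        static void __init sun4v_patch_2insn(void)
        {
            struct sun4v_2insn_patch_entry *p = &__sun4v_2insn_patch;

            while (p < &__sun4v_2insn_patch_end) {
                unsigned int *insn = (unsigned int *) (unsigned long) p->addr;

                insn[0] = p->insns[0];  /* replace the sun4u sequence */
                insn[1] = p->insns[1];
                flushi(insn);           /* keep the I-cache coherent */
                p++;
            }
        }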
* [SPARC64]: Move PTE field definitions back into asm/pgtable.h (David S. Miller, 2006-03-20)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Deal with PTE layout differences in SUN4V. (David S. Miller, 2006-03-20)
    Yes, you heard it right, they changed the PTE layout for SUN4V. Ho hum...
    This is the simple and inefficient way to support this. It'll get optimized, don't worry.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Register kernel TSB with hypervisor. (David S. Miller, 2006-03-20)
    We do this right after we take over the trap table from OBP.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Handle hypervisor case correctly in copy_tsb(). (David S. Miller, 2006-03-20)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Use ASI_SCRATCHPAD address 0x0 properly. (David S. Miller, 2006-03-20)
    This is where the virtual address of the fault status area belongs.
    To set it up we don't make a hypervisor call; instead we call OBP's SUNW,set-trap-table with the real address of the fault status area as the second argument. And right before that call we write the virtual address into ASI_SCRATCHPAD vaddr 0x0.
    Signed-off-by: David S. Miller <davem@davemloft.net>
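    The scratchpad write is a single alternate-space store; sketched (ASI_SCRATCHPAD per asm/asi.h; the variable name is illustrative):

        /* Store the fault status area's VA into per-cpu scratchpad
         * register 0 (offset 0x0 in ASI_SCRATCHPAD). */
        __asm__ __volatile__("stxa	%0, [%1] %2"
                             : /* no outputs */
                             : "r" (fault_status_vaddr),   /* assumed VA */
                               "r" (0x0UL),
                               "i" (ASI_SCRATCHPAD));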
* [SPARC64]: Fix hypervisor call arg passing. (David S. Miller, 2006-03-20)
    Function goes in %o5, args go in %o0 --> %o4.
    Signed-off-by: David S. Miller <davem@davemloft.net>
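    The convention sketched in inline assembly (0x80 is the sun4v fast-trap number, HV_FAST_TRAP; the single-argument shape is illustrative):

        static unsigned long hv_fast_call_sketch(unsigned long func,
                                                 unsigned long arg0)
        {
            register unsigned long o5 __asm__("o5") = func; /* function number */
            register unsigned long o0 __asm__("o0") = arg0; /* first argument  */

            __asm__ __volatile__("ta	0x80"  /* HV_FAST_TRAP */
                                 : "=r" (o0)
                                 : "0" (o0), "r" (o5)
                                 : "o1", "o2", "o3", "o4", "cc", "memory");
            return o0;  /* status returns in %o0 */
        }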
* [SPARC64]: Hypervisor TSB context switching. (David S. Miller, 2006-03-20)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Detect sun4v early in boot process. (David S. Miller, 2006-03-20)
    We look for "SUNW,sun4v" in the 'compatible' property of the root OBP device tree node.
    Protect every %ver register access, to make sure it is not touched on sun4v, as %ver is hyperprivileged there.
    Lock kernel TLB entries using hypervisor calls instead of calls into OBP.
    Signed-off-by: David S. Miller <davem@davemloft.net>
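    A sketch of the detection step with the PROM helpers available at the time (property handling simplified; 'compatible' can be a list, with the sun4v entry first):

        static void __init detect_sun4v_sketch(void)
        {
            char compatible[64];
            int len;

            len = prom_getproperty(prom_root_node, "compatible",
                                   compatible, sizeof(compatible));
            if (len > 0 && !strcmp(compatible, "SUNW,sun4v"))
                tlb_type = hypervisor;  /* sun4v conventions from here on */
        }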
* [SPARC64]: Patch up mmu context register writes for sun4v. (David S. Miller, 2006-03-20)
    sun4v uses ASI_MMU instead of ASI_DMMU.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Register per-cpu fault status area with sun4v hypervisor. (David S. Miller, 2006-03-20)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Rename gl_{1,2}insn_patch --> sun4v_{1,2}insn_patch (David S. Miller, 2006-03-20)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Initial sun4v TLB miss handling infrastructure. (David S. Miller, 2006-03-20)
    Things are a little tricky because, unlike sun4u, we have to:
    1) do a hypervisor trap to do the TLB load.
    2) do the TSB lookup calculations by hand. (See the sketch below.)
    Signed-off-by: David S. Miller <davem@davemloft.net>
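    The "by hand" lookup is an index-and-compare; a C rendering of the hash the miss handler computes (the real handler is assembler; this mirrors the C-side helper):

        /* Hash a virtual address into a TSB slot; nentries is a power
         * of two, so the mask implements vaddr mod table-size. */
        static inline unsigned long tsb_hash(unsigned long vaddr,
                                             unsigned long nentries)
        {
            vaddr >>= PAGE_SHIFT;
            return vaddr & (nentries - 1UL);
        }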
* [SPARC64]: Sanitize %pstate writes for sun4v. (David S. Miller, 2006-03-20)
    If we're just switching between different alternate global sets, nop it out on sun4v.
    Also, get rid of all of the alternate global save/restore in the OBP CIF trampoline code.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Add some hypervisor tlb_type checks. (David S. Miller, 2006-03-20)
    And more consistently check cheetah{,_plus} instead of assuming anything not spitfire is cheetah{,_plus}.
    Signed-off-by: David S. Miller <davem@davemloft.net>
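    The dispatch style this moves to, sketched against the tlb_type enum (enum values per this era's asm/spitfire.h; the flush helpers are hypothetical stand-ins):

        enum ultra_tlb_layout {
            spitfire = 0,
            cheetah = 1,
            cheetah_plus = 2,
            hypervisor = 3,
        };
        extern enum ultra_tlb_layout tlb_type;

        static void flush_dispatch_sketch(void)
        {
            /* Explicit per-type checks instead of assuming that
             * "not spitfire" implies cheetah{,_plus}. */
            if (tlb_type == hypervisor)
                sun4v_flush();
            else if (tlb_type == cheetah || tlb_type == cheetah_plus)
                cheetah_flush();
            else
                spitfire_flush();
        }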
* [SPARC64]: SUN4V hypervisor TLB flush support code. (David S. Miller, 2006-03-20)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Turn off TSB growing for now. (David S. Miller, 2006-03-20)
    There are several tricky races involved with growing the TSB. So just use base-size TSBs for user contexts and we can revisit enabling this later.

    One part of the SMP problems is that tsb_context_switch() can see partially updated TSB configuration state if tsb_grow() is running in parallel. That's easily solved with a seqlock taken as a writer by tsb_grow() and taken as a reader to capture all the TSB config state in tsb_context_switch(). (Sketched below.)

    Then there is flush_tsb_user() running in parallel with a tsb_grow(). In theory we could take the seqlock as a reader there too, and just resample the TSB pointer and reflush, but that looks really ugly.

    Lastly, I believe there is a case with threads that results in a TSB entry lock bit being set spuriously, which will cause the next access to that TSB entry to wedge the cpu (since the TSB entry lock bit will never clear). It's either copy_tsb() or some bug elsewhere in the TSB assembly.

    Signed-off-by: David S. Miller <davem@davemloft.net>
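    The seqlock idea floated above, sketched with the 2.6-era seqlock API (this was proposed, not merged; tsb_cfg_seq and the helper names are illustrative):

        static seqlock_t tsb_cfg_seq = SEQLOCK_UNLOCKED;

        /* Writer, in tsb_grow(): publish the new TSB configuration. */
        static void tsb_publish(struct mm_struct *mm, struct tsb *new_tsb,
                                unsigned long new_nentries)
        {
            write_seqlock(&tsb_cfg_seq);
            mm->context.tsb = new_tsb;
            mm->context.tsb_nentries = new_nentries;
            write_sequnlock(&tsb_cfg_seq);
        }

        /* Reader, in tsb_context_switch(): retry until a consistent
         * snapshot of the configuration is captured. */
        static void tsb_capture(struct mm_struct *mm, struct tsb **tsbp,
                                unsigned long *nentriesp)
        {
            unsigned int seq;

            do {
                seq = read_seqbegin(&tsb_cfg_seq);
                *tsbp = mm->context.tsb;
                *nentriesp = mm->context.tsb_nentries;
            } while (read_seqretry(&tsb_cfg_seq, seq));
        }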