path: root/include/asm-sparc64/tsb.h
* [SPARC64]: Get DEBUG_PAGEALLOC working again. (David S. Miller, 2007-03-16)
  We have to make sure to use base-pagesize TLB entries even during the early transition period where we need TLB miss handling but don't have the kernel page tables set up yet for the linear region. It is therefore also necessary not to use the 4MB TSB for these translations, and to use the normal kernel TSB instead. This also allows us to get rid of the 4MB TSB for debug builds, which shrinks the kernel a little bit.
  Signed-off-by: David S. Miller <davem@davemloft.net>
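  How the 4MB TSB drops out of a debug build can be sketched as a compile-time exclusion. A minimal sketch in C, assuming the entry layout and the KERNEL_TSB4M_NENTRIES/swapper_4m_tsb names (the declarations here are illustrative, not the header's actual contents):

      /* Each TSB entry is a tag/PTE pair (assumed layout). */
      struct tsb {
              unsigned long tag;
              unsigned long pte;
      };

      /* Sketch: with CONFIG_DEBUG_PAGEALLOC the linear region is mapped
       * with base-pagesize (8KB) entries served by the normal kernel TSB,
       * so the 4MB-page TSB is never consulted and can be compiled out. */
      #ifndef CONFIG_DEBUG_PAGEALLOC
      #define KERNEL_TSB4M_NENTRIES   4096
      struct tsb swapper_4m_tsb[KERNEL_TSB4M_NENTRIES];
      #endif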
* [SPARC64]: Create a separate kernel TSB for 4MB/256MB mappings. (David S. Miller, 2006-03-20)
  It can map all of the linear kernel mappings with zero TSB hash conflicts for systems with 16GB or less RAM. In such cases, on SUN4V, once we load up this TSB the first time with all the mappings, we never take a linear kernel mapping TLB miss ever again; the hypervisor handles them all.
  Signed-off-by: David S. Miller <davem@davemloft.net>
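  The zero-conflict property is direct-indexing arithmetic: with one slot per 4MB region, a 4096-entry TSB covers 4096 * 4MB = 16GB of linear mappings before any two addresses can collide. A minimal sketch of the index computation, assuming those sizes (the names are illustrative):

      #define TSB4M_NENTRIES  4096    /* assumed: one entry per 4MB region */
      #define SHIFT_4MB       22      /* 4MB == 1 << 22 */

      /* Direct-indexed hash: consecutive 4MB regions of the linear mapping
       * land in consecutive slots, so up to 16GB fits with zero conflicts. */
      static unsigned long tsb4m_index(unsigned long vaddr)
      {
              return (vaddr >> SHIFT_4MB) & (TSB4M_NENTRIES - 1);
      }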
* [SPARC64]: More TLB/TSB handling fixes. (David S. Miller, 2006-03-20)
  The SUN4V convention with non-shared TSBs is that the context bit of the TAG is clear. So we have to choose an "invalid" bit and initialize new TSBs appropriately; otherwise a zero TAG looks "valid". For the window fixup cases, make sure that we use the right global registers, that we don't potentially trample on the live global registers used in etrap/rtrap handling (%g2 and %g6), and that we put the missing virtual address properly in %g5.
  Signed-off-by: David S. Miller <davem@davemloft.net>
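  A minimal sketch of the initialization this implies, assuming the invalid marker lives in a high tag bit (bit 46 here is illustrative): since an all-zero tag would match context 0 at virtual address 0, a fresh TSB must be stamped with the invalid pattern rather than zeroed.

      struct tsb {
              unsigned long tag;
              unsigned long pte;
      };

      #define TSB_TAG_INVALID_BIT     46
      #define TSB_TAG_INVALID         (1UL << TSB_TAG_INVALID_BIT)

      /* A zeroed tag would look like a valid translation, so every entry
       * of a new TSB gets the invalid bit set in its tag instead. */
      static void tsb_init(struct tsb *tsb, unsigned long nentries)
      {
              unsigned long i;

              for (i = 0; i < nentries; i++) {
                      tsb[i].tag = TSB_TAG_INVALID;
                      tsb[i].pte = 0UL;
              }
      }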
* [SPARC64]: Initial sun4v TLB miss handling infrastructure. (David S. Miller, 2006-03-20)
  Things are a little tricky because, unlike sun4u, we have to:
  1) do a hypervisor trap to do the TLB load, and
  2) do the TSB lookup calculations by hand (sketched below).
  Signed-off-by: David S. Miller <davem@davemloft.net>
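  A minimal sketch of that by-hand lookup, assuming 8KB base pages, a power-of-two TSB, and an illustrative tag format (the function and field names are hypothetical): the miss handler indexes by virtual page number, checks the stored tag, and only on a hit asks the hypervisor to load the TLB.

      #define PAGE_SHIFT      13      /* 8KB base pages */

      struct tsb {
              unsigned long tag;
              unsigned long pte;
      };

      /* Index by virtual page number, then verify the tag before trusting
       * the cached PTE; a miss falls back to the page table walk. */
      static struct tsb *tsb_lookup(struct tsb *tsb, unsigned long nentries,
                                    unsigned long vaddr)
      {
              unsigned long hash = (vaddr >> PAGE_SHIFT) & (nentries - 1);
              struct tsb *ent = &tsb[hash];

              if (ent->tag == (vaddr >> 22))  /* assumed: tag holds va bits 63:22 */
                      return ent;     /* hit: caller does the hypervisor TLB load */
              return NULL;            /* miss */
      }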
* [SPARC64]: Access TSB with physical addresses when possible. (David S. Miller, 2006-03-20)
  This way we don't need to lock the TSB into the TLB. The trick is that every TSB load/store is registered into a special instruction patch section. The default uses virtual addresses, and the patch instructions use physical address load/stores. We can't do this on all chips because only cheetah+ and later have the physical variant of the atomic quad load.
  Signed-off-by: David S. Miller <davem@davemloft.net>
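  A minimal sketch of such a patch section, with an assumed record layout: each TSB access notes where its virtual-address instruction sits and what the physical-address opcode would be, and a boot-time pass rewrites the recorded locations on CPUs that have the physical atomic quad load (cheetah+ and later).

      /* Assumed layout: location of the instruction to patch, plus the
       * replacement opcode that uses physical addressing. */
      struct tsb_phys_patch_entry {
              unsigned int    addr;
              unsigned int    insn;
      };

      /* Boot-time pass over the patch section. A real implementation must
       * also flush the instruction cache after rewriting each location. */
      static void tsb_phys_patch(struct tsb_phys_patch_entry *start,
                                 struct tsb_phys_patch_entry *end)
      {
              struct tsb_phys_patch_entry *p;

              for (p = start; p < end; p++)
                      *(unsigned int *)(unsigned long)p->addr = p->insn;
      }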
* [SPARC64]: Kill out-of-date commentary in asm-sparc64/tsb.h. (David S. Miller, 2006-03-20)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Increase swapper_tsb size to 32K. (David S. Miller, 2006-03-20)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix incorrect TSB lock bit handling. (David S. Miller, 2006-03-20)
  The TSB_LOCK_BIT define is actually a special value shifted down by 32 bits for the assembler code macros. In C code, this isn't what we want.
  Signed-off-by: David S. Miller <davem@davemloft.net>
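  A minimal sketch of the distinction, assuming the lock lives in tag bit 47 (illustrative): the assembler macros manipulate only the upper 32-bit half of the tag word, so their define is pre-shifted down by 32, while C code needs the full 64-bit mask.

      #define TSB_TAG_LOCK_BIT        47

      /* For assembler macros that touch only the high 32-bit word of the
       * tag, the bit position is relative to that half-word... */
      #define TSB_TAG_LOCK_HIGH       (1 << (TSB_TAG_LOCK_BIT - 32))

      /* ...but C code operates on the whole 64-bit tag and needs the
       * unshifted mask; using the _HIGH value would set bit 15 instead. */
      #define TSB_TAG_LOCK            (1UL << TSB_TAG_LOCK_BIT)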
* [SPARC64]: Add infrastructure for dynamic TSB sizing. (David S. Miller, 2006-03-20)
  This also cleans up tsb_context_switch(). The assembler routine is now __tsb_context_switch(), and the former is an inline function that picks out the bits from the mm_struct and passes them into the assembler code as arguments (see the sketch below). setup_tsb_parms() computes the locked TLB entry to map the TSB. Later, when we support using the physical address quad load instructions of Cheetah+ and later, we'll simply use the physical address for the TSB register value and set the map virtual address and PTE both to zero.
  Signed-off-by: David S. Miller <davem@davemloft.net>
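  A minimal sketch of the split, with hypothetical field names for the per-mm TSB parameters (the real mm_context_t layout is not shown here): the C inline just unpacks the mm_struct and forwards the pieces to the assembler entry point.

      /* Hypothetical, pared-down shapes for illustration only. */
      typedef struct {
              unsigned long tsb_reg_val;      /* value for the TSB base register */
              unsigned long tsb_map_vaddr;    /* where the locked entry maps the TSB */
              unsigned long tsb_map_pte;      /* PTE for that locked TLB entry */
      } mm_context_t;

      struct mm_struct {
              void *pgd;
              mm_context_t context;
      };

      /* __pa() (virtual-to-physical) and the assembler routine are provided
       * by the kernel proper; declared here only so the sketch stands alone. */
      extern unsigned long __pa(void *vaddr);
      extern void __tsb_context_switch(unsigned long pgd_pa, unsigned long tsb_reg,
                                       unsigned long tsb_vaddr, unsigned long tsb_pte);

      static inline void tsb_context_switch(struct mm_struct *mm)
      {
              __tsb_context_switch(__pa(mm->pgd),
                                   mm->context.tsb_reg_val,
                                   mm->context.tsb_map_vaddr,
                                   mm->context.tsb_map_pte);
      }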
* [SPARC64]: Move away from virtual page tables, part 1. (David S. Miller, 2006-03-20)
  We now use the TSB hardware assist features of the UltraSPARC MMUs.

  SMP is currently knowingly broken; we need to find another place to store the per-cpu base pointers. We hid them away in the TSB base register, and that obviously will not work any more :-) Another known broken case is non-8KB base page size.

  Also noticed that flush_tlb_all() is not referenced anywhere; only the internal __flush_tlb_all() (local cpu only) is used by the sparc64 port, so we can get rid of flush_tlb_all().

  The kernel gets its own 8KB TSB (swapper_tsb) and each address space gets its own private 8KB TSB. Later we can add code to dynamically increase the size of the per-process TSB as the RSS grows. An 8KB TSB is good enough for up to about a 4MB RSS, after which the TSB starts to incur many capacity and conflict misses (see the arithmetic sketched below).

  We even accumulate OBP translations into the kernel TSB.

  Another area for refinement is large page size support. We could use a secondary address space TSB to handle those.

  Signed-off-by: David S. Miller <davem@davemloft.net>
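  The 4MB figure is capacity arithmetic, sketched here under the assumed 16-byte entry size: an 8KB TSB holds 512 tag/PTE pairs, and with 8KB base pages those 512 entries can cache translations for 512 * 8KB = 4MB of resident memory before capacity misses dominate.

      #include <stdio.h>

      int main(void)
      {
              unsigned long tsb_bytes   = 8 * 1024;  /* one 8KB TSB */
              unsigned long entry_bytes = 16;        /* assumed: 8-byte tag + 8-byte PTE */
              unsigned long page_bytes  = 8 * 1024;  /* 8KB base pages */

              unsigned long entries = tsb_bytes / entry_bytes;  /* 512 */
              unsigned long reach   = entries * page_bytes;     /* 4MB */

              printf("entries=%lu, reach=%lu KB\n", entries, reach / 1024);
              return 0;
      }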