path: root/include
Commit message (Author, Age)
* [PATCH] mm: pte_offset_map_lock loops (Hugh Dickins, 2005-10-30)

Convert those common loops using page_table_lock on the outside and pte_offset_map within to use just pte_offset_map_lock within instead.

These all hold mmap_sem (some exclusively, some not), so at no level can a page table be whipped away from beneath them. But whereas pte_alloc loops tested with the "atomic" pmd_present, these loops are testing with pmd_none, which on i386 PAE tests both lower and upper halves. That's now unsafe, so add a cast into pmd_none to test only the vital lower half: we lose a little sensitivity to a corrupt middle directory, but not enough to worry about. It appears that i386 and UML were the only architectures vulnerable in this way, and pgd and pud no problem.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

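A rough sketch of the converted loop shape (the per-PTE work is a placeholder; this is an illustration of the pattern described above, not a quote of the patch):

    static int example_pte_range(struct mm_struct *mm, pmd_t *pmd,
                                 unsigned long addr, unsigned long end)
    {
            pte_t *pte;
            spinlock_t *ptl;

            pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
            do {
                    /* examine or update *pte for this address */
            } while (pte++, addr += PAGE_SIZE, addr != end);
            pte_unmap_unlock(pte - 1, ptl);
            return 0;
    }
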
* [PATCH] mm: ptd_alloc take ptlock (Hugh Dickins, 2005-10-30)

Second step in pushing down the page_table_lock. Remove the temporary bridging hack from __pud_alloc, __pmd_alloc, __pte_alloc: expect callers not to hold page_table_lock, whether it's on init_mm or a user mm; take page_table_lock internally to check if a racing task already allocated.

Convert their callers from common code. But avoid coming back to change them again later: instead of moving the spin_lock(&mm->page_table_lock) down, switch over to new macros pte_alloc_map_lock and pte_unmap_unlock, which encapsulate the mapping+locking and unlocking+unmapping together, and in the end may use alternatives to the mm page_table_lock itself.

These callers all hold mmap_sem (some exclusively, some not), so at no level can a page table be whipped away from beneath them; and pte_alloc uses the "atomic" pmd_present to test whether it needs to allocate. It appears that on all arches we can safely descend without page_table_lock.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

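A minimal sketch of a caller using the new pair, assuming a hypothetical helper that installs one entry (the helper name and set_pte_at usage are illustrative, not taken from the patch):

    static int example_insert_pte(struct mm_struct *mm, pmd_t *pmd,
                                  unsigned long addr, pte_t entry)
    {
            spinlock_t *ptl;
            pte_t *pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);

            if (!pte)
                    return -ENOMEM;
            set_pte_at(mm, addr, pte, entry);       /* work done under ptl */
            pte_unmap_unlock(pte, ptl);
            return 0;
    }
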
* [PATCH] mm: ptd_alloc inline and out (Hugh Dickins, 2005-10-30)

It seems odd to me that, whereas pud_alloc and pmd_alloc test inline, only calling out-of-line __pud_alloc __pmd_alloc if allocation needed, pte_alloc_map and pte_alloc_kernel are entirely out-of-line. Though it does add a little to kernel size, change them to macros testing inline, calling __pte_alloc or __pte_alloc_kernel to allocate out-of-line. Mark none of them as fastcalls, leave that to CONFIG_REGPARM or not.

It also seems more natural for the out-of-line functions to leave the offset calculation and map to the inline, which has to do it anyway for the common case. At least mremap move wants __pte_alloc without _map.

Macros rather than inline functions, certainly to avoid the header file issues which arise from CONFIG_HIGHPTE needing kmap_types.h, but also in case any architectures I haven't built would have other such problems.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] mm: init_mm without ptlock (Hugh Dickins, 2005-10-30)

First step in pushing down the page_table_lock. init_mm.page_table_lock has been used throughout the architectures (usually for ioremap): not to serialize kernel address space allocation (that's usually vmlist_lock), but because pud_alloc,pmd_alloc,pte_alloc_kernel expect caller holds it.

Reverse that: don't lock or unlock init_mm.page_table_lock in any of the architectures; instead rely on pud_alloc,pmd_alloc,pte_alloc_kernel to take and drop it when allocating a new one, to check lest a racing task already did. Similarly no page_table_lock in vmalloc's map_vm_area.

Some temporary ugliness in __pud_alloc and __pmd_alloc: since they also handle user mms, which are converted only by a later patch, for now they have to lock differently according to whether or not it's init_mm.

If sources get muddled, there's a danger that an arch source taking init_mm.page_table_lock will be mixed with common source also taking it (or neither take it). So break the rules and make another change, which should break the build for such a mismatch: remove the redundant mm arg from pte_alloc_kernel (ppc64 scrapped its distinct ioremap_mm in 2.6.13).

Exceptions: arm26 used pte_alloc_kernel on user mm, now pte_alloc_map; ia64 used pte_alloc_map on init_mm, now pte_alloc_kernel; parisc had bad args to pmd_alloc and pte_alloc_kernel in unused USE_HPPA_IOREMAP code; ppc64 map_io_page forgot to unlock on failure; ppc mmu_mapin_ram and ppc64 im_free took page_table_lock for no good reason.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] mm: ia64 use expand_upwards (Hugh Dickins, 2005-10-30)

ia64 has expand_backing_store function for growing its Register Backing Store vma upwards. But more complete code for this purpose is found in the CONFIG_STACK_GROWSUP part of mm/mmap.c. Uglify its #ifdefs further to provide expand_upwards for ia64 as well as expand_stack for parisc.

The Register Backing Store vma should be marked VM_ACCOUNT. Implement the intention of growing it only a page at a time, instead of passing an address outside of the vma to handle_mm_fault, with unknown consequences.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] mm: mm_struct hiwaters moved (Hugh Dickins, 2005-10-30)

Slight and timid rearrangement of mm_struct: hiwater_rss and hiwater_vm were tacked on the end, but it seems better to keep them near _file_rss, _anon_rss and total_vm, in the same cacheline on those arches verified.

There are likely to be more profitable rearrangements, but less obvious (is it good or bad that saved_auxv[AT_VECTOR_SIZE] isolates cpu_vm_mask and context from many others?), needing serious instrumentation.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] mm: update_hiwaters just in time (Hugh Dickins, 2005-10-30)

update_mem_hiwater has attracted various criticisms, in particular from those concerned with mm scalability. Originally it was called whenever rss or total_vm got raised. Then many of those callsites were replaced by a timer tick call from account_system_time. Now Frank van Maarseveen reports that to be found inadequate. How about this? Works for Frank.

Replace update_mem_hiwater, a poor combination of two unrelated ops, by macros update_hiwater_rss and update_hiwater_vm. Don't attempt to keep mm->hiwater_rss up to date at timer tick, nor every time we raise rss (usually by 1): those are hot paths. Do the opposite, update only when about to lower rss (usually by many), or just before final accounting in do_exit. Handle mm->hiwater_vm in the same way, though it's much less of an issue. Demand that whoever collects these hiwater statistics do the work of taking the maximum with rss or total_vm.

And there has been no collector of these hiwater statistics in the tree. The new convention needs an example, so match Frank's usage by adding a VmPeak line above VmSize to /proc/<pid>/status, and also a VmHWM line above VmRSS (High-Water-Mark or High-Water-Memory).

There was a particular anomaly during mremap move, that hiwater_vm might be captured too high. A fleeting such anomaly remains, but it's quickly corrected now, whereas before it would stick.

What locking? None: if the app is racy then these statistics will be racy, it's not worth any overhead to make them exact. But whenever it suits, hiwater_vm is updated under exclusive mmap_sem, and hiwater_rss under page_table_lock (for now) or with preemption disabled (later on): without going to any trouble, minimize the time between reading current values and updating, to minimize those occasions when a racing thread bumps a count up and back down in between.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

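A sketch of what the new convention amounts to (the macro body is inferred from the description above, not quoted from the patch): the caller samples the current value just before a big decrease, and the macro only ever raises the high-water mark.

    #define update_hiwater_rss(mm) do {                     \
            unsigned long _rss = get_mm_rss(mm);            \
            if ((mm)->hiwater_rss < _rss)                   \
                    (mm)->hiwater_rss = _rss;               \
    } while (0)

            /* typical call site: capture the peak just before a large
             * unmap is about to lower rss */
            update_hiwater_rss(mm);
            /* ... then zap pages, decrementing file_rss/anon_rss ... */
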
* [PATCH] core remove PageReserved (Nick Piggin, 2005-10-30)

Remove PageReserved() calls from core code by tightening VM_RESERVED handling in mm/ to cover PageReserved functionality. PageReserved special casing is removed from get_page and put_page.

All setting and clearing of PageReserved is retained, and it is now flagged in the page_alloc checks to help ensure we don't introduce any refcount based freeing of Reserved pages.

MAP_PRIVATE, PROT_WRITE of VM_RESERVED regions is tentatively being deprecated. We never completely handled it correctly anyway, and it can be reintroduced in future if required (Hugh has a proof of concept).

Once PageReserved() calls are removed from kernel/power/swsusp.c, and all arch/ and driver code, the Set and Clear calls, and the PG_reserved bit can be trivially removed.

Last real user of PageReserved is swsusp, which uses PageReserved to determine whether a struct page points to valid memory or not. This still needs to be addressed (a generic page_is_ram() should work).

A last caveat: the ZERO_PAGE is now refcounted and managed with rmap (and thus mapcounted and counted towards shared rss). These writes to the struct page could cause excessive cacheline bouncing on big systems. There are a number of ways this could be addressed if it is an issue.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Refcount bug fix for filemap_xip.c
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] mm: rss = file_rss + anon_rss (Hugh Dickins, 2005-10-30)

I was lazy when we added anon_rss, and chose to change as few places as possible. So currently each anonymous page has to be counted twice, in rss and in anon_rss. Which won't be so good if those are atomic counts in some configurations.

Change that around: keep file_rss and anon_rss separately, and add them together (with get_mm_rss macro) when the total is needed - reading two atomics is much cheaper than updating two atomics. And update anon_rss upfront, typically in memory.c, not tucked away in page_add_anon_rmap.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

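The combining macro presumably looks along these lines, built on the mm_counter accessors the changelog mentions (a sketch, not a verbatim quote of the patch):

    #define get_mm_rss(mm) \
            (get_mm_counter(mm, file_rss) + get_mm_counter(mm, anon_rss))
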
* [PATCH] mm: tlb_finish_mmu forget rss (Hugh Dickins, 2005-10-30)

zap_pte_range has been counting the pages it frees in tlb->freed, then tlb_finish_mmu has used that to update the mm's rss. That got stranger when I added anon_rss, yet updated it by a different route; and stranger when rss and anon_rss became mm_counters with special access macros. And it would no longer be viable if we're relying on page_table_lock to stabilize the mm_counter, but calling tlb_finish_mmu outside that lock.

Remove the mmu_gather's freed field, let tlb_finish_mmu stick to its own business, just decrement the rss mm_counter in zap_pte_range (yes, there was some point to batching the update, and a subsequent patch restores that). And forget the anal paranoia of first reading the counter to avoid going negative - if rss does go negative, just fix that bug.

Remove the mmu_gather's flushes and avoided_flushes from arm and arm26: no use was being made of them. But arm26 alone was actually using the freed, in the way some others use need_flush: give it a need_flush. arm26 seems to prefer spaces to tabs here: respect that.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] mm: tlb_is_full_mm was obscure (Hugh Dickins, 2005-10-30)

tlb_is_full_mm? What does that mean? The TLB is full? No, it means that the mm's last user has gone and the whole mm is being torn down. And it's an inline function because sparc64 uses a different (slightly better) "tlb_frozen" name for the flag others call "fullmm".

And now the ptep_get_and_clear_full macro used in zap_pte_range refers directly to tlb->fullmm, which would be wrong for sparc64. Rather than correct that, I'd prefer to scrap tlb_is_full_mm altogether, and change sparc64 to just use the same poor name as everyone else - is that okay?

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] mm: tlb_gather_mmu get_cpu_var (Hugh Dickins, 2005-10-30)

tlb_gather_mmu dates from before kernel preemption was allowed, and uses smp_processor_id or __get_cpu_var to find its per-cpu mmu_gather. That works because it's currently only called after getting page_table_lock, which is not dropped until after the matching tlb_finish_mmu. But don't rely on that, it will soon change: now disable preemption internally by proper get_cpu_var in tlb_gather_mmu, put_cpu_var in tlb_finish_mmu.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

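A minimal sketch of the shape this gives the generic helpers (heavily simplified: flush bookkeeping and page freeing are omitted, and the struct layout is assumed from context rather than quoted):

    static inline struct mmu_gather *
    tlb_gather_mmu(struct mm_struct *mm, unsigned int full_mm_flush)
    {
            /* get_cpu_var disables preemption until the matching put_cpu_var */
            struct mmu_gather *tlb = &get_cpu_var(mmu_gathers);

            tlb->mm = mm;
            tlb->fullmm = full_mm_flush;
            return tlb;
    }

    static inline void
    tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
    {
            tlb_flush_mmu(tlb, start, end);
            put_cpu_var(mmu_gathers);       /* re-enables preemption */
    }
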
* [PATCH] mm: page fault handlers tidyup (Hugh Dickins, 2005-10-30)

Impose a little more consistency on the page fault handlers do_wp_page, do_swap_page, do_anonymous_page, do_no_page, do_file_page: why not pass their arguments in the same order, called the same names?

break_cow is all very well, but what it did was inlined elsewhere: easier to compare if it's brought back into do_wp_page.

do_file_page's fallback to do_no_page dates from a time when we were testing pte_file by using it wherever possible: currently it's peculiar to nonlinear vmas, so just check that. BUG_ON if not? Better not, it's probably page table corruption, so just show the pte: hmm, there's a pte_ERROR macro, let's use that for do_wp_page's invalid pfn too.

Hah! Someone in the ppc64 world noticed pte_ERROR was unused so removed it: restored (and say "pud" not "pmd" in its pud_ERROR).

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] mm: unlink_file_vma, remove_vma (Hugh Dickins, 2005-10-30)

Divide remove_vm_struct into two parts: first anon_vma_unlink plus unlink_file_vma, to unlink the vma from the list and tree by which rmap or vmtruncate might find it; then remove_vma to close, fput and free.

The intention here is to do the anon_vma_unlink and unlink_file_vma earlier, in free_pgtables before freeing any page tables: so we can be sure that any page tables traversed by rmap and vmtruncate are stable (and other, ordinary cases are stabilized by holding mmap_sem). This will be crucial to traversing pgd,pud,pmd without page_table_lock. But testing the split-out patch showed that lifting the page_table_lock is symbiotically necessary to make this change - the lock ordering is wrong to move those unlinks into free_pgtables while it's under ptlock.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] mm: vm_stat_account unshackled (Hugh Dickins, 2005-10-30)

The original vm_stat_account has fallen into disuse, with only one user, and only one user of vm_stat_unaccount. It's easier to keep track if we convert them all to __vm_stat_account, then free it from its __shackles.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] Convert mempolicies to nodemask_t (Andi Kleen, 2005-10-30)

The NUMA policy code predated nodemask_t so it used open coded bitmaps. Convert everything to nodemask_t. Big patch, but shouldn't have any actual behaviour changes (except I removed one unnecessary check against node_online_map and one unnecessary BUG_ON).

Signed-off-by: "Andi Kleen" <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] add sem_is_read/write_locked() (Rik van Riel, 2005-10-30)

Add sem_is_read/write_locked functions to the read/write semaphores, along the same lines as the *_is_locked spinlock functions. The swap token tuning patch uses sem_is_read_locked; sem_is_write_locked is added for completeness.

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

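A hypothetical caller, just to illustrate the intended use (the particular check on mmap_sem is an assumption made for illustration, not taken from the patch):

            /* sanity check: only walk the VMA list with mmap_sem held
             * for read (or stronger) */
            BUG_ON(!sem_is_read_locked(&mm->mmap_sem));
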
* [PATCH] fix alpha breakage (Ivan Kokshaysky, 2005-10-30)

barrier.h uses barrier() in the non-SMP case, but doesn't include compiler.h.

Cc: Al Viro <viro@ftp.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] vmalloc_node (Christoph Lameter, 2005-10-30)

This patch adds vmalloc_node(size, node), which allocates the necessary memory on the specified node, together with get_vm_area_node(size, flags, node) and the other functions that it depends on.

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

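A minimal usage sketch, assuming a caller that wants a buffer placed on a particular NUMA node (the buffer, size and error handling are illustrative only):

            void *buf;

            /* place the allocation on the requested NUMA node */
            buf = vmalloc_node(nr_bytes, node);
            if (!buf)
                    return -ENOMEM;

            /* ... use buf ... */
            vfree(buf);
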
* Merge master.kernel.org:/home/rmk/linux-2.6-arm (Linus Torvalds, 2005-10-29)
|\
| * [ARM] 3061/1: cleanup the XIP link address mess (Nicolas Pitre, 2005-10-29)

Patch from Nicolas Pitre

Since vmlinux.lds.S is preprocessed, we can use the defines already present in asm/memory.h (allowed by patch #3060) for the XIP kernel link address instead of relying on a duplicated Makefile hardcoded value, and also get rid of its dependency on awk to handle it at the same time.

While at it let's clean XIP stuff even further and make things clearer in head.S with a nice code reduction.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

| * [ARM] 3060/1: allow constants found in asm/memory.h to be used in asm code (Nicolas Pitre, 2005-10-29)

Patch from Nicolas Pitre

This patch allows for assorted types of cleanups by letting assembly code use the same set of defines for constant values, and avoids duplicated definitions that might not always be in sync, or that might simply be confusing due to the different names for the same thing.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

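The usual way to make such constants usable from both C and assembler is a wrapper that only appends the type suffix on the C side; a sketch of that pattern follows (the macro name and the PAGE_OFFSET example are assumptions, not quoted from the patch):

    #ifndef __ASSEMBLY__
    #define UL(x)   (x##UL)         /* C sees an unsigned long constant */
    #else
    #define UL(x)   (x)             /* the assembler cannot parse the UL suffix */
    #endif

    #define PAGE_OFFSET     UL(0xc0000000)
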
| * [ARM] 3022/1: Missing peripheral devices memory mapping definition for IXP46X processor (Kenneth Tan, 2005-10-29)

Patch from Kenneth Tan

Define the IXP46X peripheral device memory mapping definitions that have been missed out:
o Peripheral virtual base address is being adjusted to allow more headroom to add extra peripheral device addresses
o Peripheral size is being increased to address the above needs
o Virtual address of expansion bus and PCI configuration register needs to be adjusted as new peripheral device memory space is overlapping with their virtual address space

Signed-off-by: Kenneth Tan <chong.yin.tan@intel.com>
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

| * [ARM] 3053/1: introduce ixp2000_reg_wrb (ixp2000_reg_write plus readback) (Lennert Buytenhek, 2005-10-29)

Patch from Lennert Buytenhek

Introduce ixp2000_reg_wrb, which is a variant of ixp2000_reg_write that does a readback from the target register, to make sure that the write has been flushed out of the write buffer. Unlike the previous (ineffective) readback in ixp2000_reg_write, this readback is followed by an instruction that depends on the value of the readback so that the CPU actually stalls until the readback has completed.

Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

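A sketch of the idea, not a quote of the header: the dummy mov is one way to create the data dependency the changelog describes, so the core cannot retire past it before the readback completes.

    static inline void ixp2000_reg_wrb(volatile void *reg, unsigned long val)
    {
            unsigned long dummy;

            *((volatile unsigned long *)reg) = val;

            /* read the register back and feed the result into an instruction,
             * so the CPU stalls until the write has left the write buffer */
            dummy = *((volatile unsigned long *)reg);
            __asm__ __volatile__("mov %0, %0" : "+r" (dummy));
    }
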
| * [ARM] 3051/1: turn ixp2000_reg_read into an inline function (Lennert Buytenhek, 2005-10-29)

Patch from Lennert Buytenhek

Turn ixp2000_reg_read into an inline function.

Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

| * [ARM] 3050/1: remove ixp2000_reg_write erratum #66 workaround (Lennert Buytenhek, 2005-10-29)

Patch from Lennert Buytenhek

The workaround that we do for avoiding triggering ixp2400 erratum #66 involves mapping I/O pages using XCB=101 instead of XCB=000 so that we prevent the I/O signal to the gasket from being asserted (which can cause data corruption.) But XCB=101 mappings are write-buffered while mappings using XCB=000 are not, which is why if we use XCB=101 mappings we do a readback for every CSR store in an attempt to make sure that the store has been pushed out of the xscale core and the gasket.

Unfortunately, there are two issues with this:
- we do a readback for every CSR store, which is wrong, because the register we are writing to might have unwanted side-effects on read, for example, in the case of the scratchpad ring enqueue/dequeue registers; and
- the readback is totally ineffective in the way we currently do it, because we just issue a load but do not issue any instruction that depends on the return value of that load, so the xscale core does not wait for the load to complete before continuing.

See this linux-arm-kernel mailing list post for further information:
http://lists.arm.linux.org.uk/pipermail/linux-arm-kernel/2005-September/031314.html

This means that my ixp2400 boxes have been running for many months without a working readback in ixp2000_reg_write, without any apparent adverse effects. Two of them have been running for a week now with the actual readback deleted from ixp2000_reg_write, also without any apparent ill effects.

So, because in its current form it does more harm than good, the readback in ixp2000_reg_write should simply be killed, as the patch below does.

Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

| * [ARM] Allow MTD device name to be passed via platform data (Russell King, 2005-10-29)

Allow SA1100 devices to pass the name of the flash device to the SA1100 map driver.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

| * [ARM] Fix buggy __phys_to_pfn / __pfn_to_phys (Russell King, 2005-10-29)

Macro arguments should _always_ be surrounded by parentheses when used, to prevent unexpected problems with operator precedence.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

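For illustration, the fixed form presumably looks like the sketch below. Without the inner parentheses, a call such as __pfn_to_phys(pfn + 1) would expand to pfn + 1 << PAGE_SHIFT, which binds as pfn + (1 << PAGE_SHIFT).

    #define __phys_to_pfn(paddr)    ((paddr) >> PAGE_SHIFT)
    #define __pfn_to_phys(pfn)      ((pfn) << PAGE_SHIFT)
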
| * [ARM] Ensure machine information structures aren't optimised away (Russell King, 2005-10-29)

Since the machine information structures are now static, the compiler might optimise them away. Mark them with __attribute_used__ to prevent this occurring.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

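Roughly what such an annotated structure looks like: a sketch only, in which the machine name, the MACH_TYPE_EXAMPLE constant, the field names and the section name are placeholders assumed from the usual ARM machine-description pattern rather than taken from this patch.

    static const struct machine_desc __mach_desc_EXAMPLE
            __attribute_used__
            __attribute__((__section__(".arch.info.init"))) = {
            .nr     = MACH_TYPE_EXAMPLE,
            .name   = "Example board",
    };
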
* | Merge branch 'upstream' of git://ftp.linux-mips.org/pub/scm/upstream-linus (Linus Torvalds, 2005-10-29)
|\ \
| * | BCM1480 HT support (Andrew Isaacson, 2005-10-29)

PCI support code for PLX 7250 PCI-X tunnel on BCM91480B BigSur board.

Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | Support for BigSur board. (Andrew Isaacson, 2005-10-29)

Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | Add support for SB1A CPU. (Andrew Isaacson, 2005-10-29)

Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | Sibyte header cleanup (Andrew Isaacson, 2005-10-29)

Update sibyte headers to match Broadcom internal copies:
- comment cleanup and updates
- fix LittleSur part number to match the board silkscreen

Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | BCM1480 headers (Andrew Isaacson, 2005-10-29)

Add header files for BCM1480/1280/1455/1255 family of chips, and update sb1250 headers which are shared by BCM1480 family.

Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | Make UL what should be UL. (Ralf Baechle, 2005-10-29)

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | Don't print file name and line in die and die_if_kernel. (Ralf Baechle, 2005-10-29)

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | Define EOWNERDEAD and ENOTRECOVERABLE. (Ralf Baechle, 2005-10-29)

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | More configcheck fixes. (Ralf Baechle, 2005-10-29)

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | 2.6.14-rc1 updates for MIPS compat types. (Ralf Baechle, 2005-10-29)

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | Complete the fcntl.h cleanup. (Ralf Baechle, 2005-10-29)

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | Cleanup Sibyte Kconfig a bit further. (Ralf Baechle, 2005-10-29)

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | Fix weirdness in <asm/bug.h> (Ralf Baechle, 2005-10-29)

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | Fixup a few loose ends in explicit support for MIPS R1/R2. (Ralf Baechle, 2005-10-29)

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | Provide 64-bit address space definitions for the Sibyte SB1 CPU core. (Ralf Baechle, 2005-10-29)

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | Cleanup the mess in cpu_cache_init. (Ralf Baechle, 2005-10-29)

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | Use R4000 TLB routines for SB1 also. (Ralf Baechle, 2005-10-29)

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | Fix ARCH_KMALLOC_MINALIGN values on MIPS (Ralf Baechle, 2005-10-29)

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | Support for MIPSsim, the cycle accurate MIPS simulator. (Ralf Baechle, 2005-10-29)

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

| * | Revise MIPS 64-bit ptrace interface (Daniel Jacobowitz, 2005-10-29)

Change the N32 debugging ABI to something more sane, and add support for o32 and n32 debuggers to trace n64 programs.

Signed-off-by: Daniel Jacobowitz <dan@codesourcery.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>