path: root/include/asm-ppc64
Commit message  (Author, Date)
* [PATCH] asm/signal.h unification  (Al Viro, 2005-05-04)
New file - asm-generic/signal.h. Contains declarations of __sighandler_t,
__sigrestore_t, SIG_DFL, SIG_IGN and SIG_ERR, and default definitions of
SIG_BLOCK, SIG_UNBLOCK and SIG_SETMASK. asm-*/signal.h switched to
including it. The only exception is asm-parisc/signal.h, which wants its
own declaration of __sighandler_t; that one is left as-is.

asm-ppc64/signal.h required one more thing: unlike everybody else it used
__sigrestorer_t instead of the usual __sigrestore_t. PPC64 switched to
the common spelling.

Signed-off-by: Al Viro <viro@parcelfarce.linux.theplanet.co.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
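For reference, roughly what the new shared header provides; this is a
reconstruction from memory of the 2.6-era asm-generic/signal.h, so treat
the details (annotations, exact comments) as best-effort rather than the
literal file:

    #ifndef SIG_BLOCK
    #define SIG_BLOCK    0    /* for blocking signals */
    #endif
    #ifndef SIG_UNBLOCK
    #define SIG_UNBLOCK  1    /* for unblocking signals */
    #endif
    #ifndef SIG_SETMASK
    #define SIG_SETMASK  2    /* for setting the signal mask */
    #endif

    #ifndef __ASSEMBLY__
    typedef void __signalfn_t(int);
    typedef __signalfn_t __user *__sighandler_t;

    typedef void __restorefn_t(void);
    typedef __restorefn_t __user *__sigrestore_t;

    #define SIG_DFL ((__sighandler_t)0)   /* default signal handling */
    #define SIG_IGN ((__sighandler_t)1)   /* ignore signal */
    #define SIG_ERR ((__sighandler_t)-1)  /* error return from signal */
    #endif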
* [PATCH] move SA_xxx defines to linux/signal.h  (Stas Sergeev, 2005-05-01)
The attached patch moves the IRQ-related SA_xxx flags (namely SA_PROBE,
SA_SAMPLE_RANDOM and SA_SHIRQ) from all the arch-specific headers to
linux/signal.h. This looks like a left-over from when the irq-handling
code was consolidated: the code was moved to kernel/irq/*, but the flags
were left per-arch.

Right now, adding a new IRQ flag to an arch-specific header, like this
patch does:

http://cvs.sourceforge.net/viewcvs.py/*checkout*/alsa/alsa-driver/utils/patches/pcsp-kernel-2.6.10-03.diff?rev=1.1

no longer works: it breaks the compilation for all other arches unless
you add that flag to all the other arch-specific headers too. So a
clean-up like this makes sense.

Signed-off-by: Stas Sergeev <stsp@aknet.ru>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
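After the move, the three flags sit next to the other SA_* definitions in
linux/signal.h. A sketch of the consolidated block; the aliases and the
SA_SHIRQ value below are from memory of that era and are assumptions, not
quoted from the patch:

    /* IRQ-related flags, now shared by all arches */
    #define SA_PROBE          SA_ONESHOT    /* probing for a free IRQ */
    #define SA_SAMPLE_RANDOM  SA_RESTART    /* feed the entropy pool */
    #define SA_SHIRQ          0x04000000    /* interrupt line is shared */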
* [PATCH] remove all kernel BUGs  (Matt Mackall, 2005-05-01)
This patch eliminates all kernel BUGs, trims about 35k off the typical
kernel, and makes the system slightly faster.

Signed-off-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
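The mechanism is a config option (CONFIG_BUG) that lets BUG()/BUG_ON()
compile away entirely. A sketch in the style of asm-generic/bug.h;
whether this matches the patch line-for-line is an assumption:

    #ifdef CONFIG_BUG
    #ifndef HAVE_ARCH_BUG
    #define BUG() do { \
        printk("kernel BUG at %s:%d!\n", __FILE__, __LINE__); \
        panic("BUG!"); \
    } while (0)
    #endif
    #ifndef HAVE_ARCH_BUG_ON
    #define BUG_ON(condition) do { if (unlikely((condition) != 0)) BUG(); } while (0)
    #endif
    #else /* !CONFIG_BUG: the checks vanish, saving text size */
    #define BUG()
    #define BUG_ON(condition)
    #endif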
* [PATCH] ppc64: reverse prediction on spinlock busy loop code  (Jake Moilanen, 2005-05-01)
On our raw spinlocks, we currently have an attempt at the lock, and if we
do not get it we enter a spin loop. This spin loop will likely continue
for a while, and we currently predict it as "likely". Shouldn't we
instead predict that we will get out of the loop, so that our next
instructions are already prefetched? Even when we mispredict because the
lock is still held, it won't matter, since we are waiting anyway.

I did a couple of quick benchmarks, but the results are inconclusive.

16-way 690 running specjbb with original code
# ./specjbb 3000 16 1 1 19 30 120
...
Valid run, Score is 59282

16-way 690 running specjbb with unlikely code
# ./specjbb 3000 16 1 1 19 30 120
...
Valid run, Score is 59541

On the (smaller) JS20 I saw a ~1.6% increase:

JS20 specjbb w/ original code
# ./specjbb 400 2 1 1 19 30 120
...
Valid run, Score is 20460

JS20 specjbb w/ unlikely code
# ./specjbb 400 2 1 1 19 30 120
...
Valid run, Score is 20803

Anton said:

  Mispredicting the spinlock busy loop also means we slow down the rate
  at which we do the loads, which can be good for heavily contended
  locks.

Note: there are some gcc issues with our default build and branch
prediction, but a CONFIG_POWER4_ONLY build should emit them correctly.
I'm working with Alan Modra on it now.

Signed-off-by: Jake Moilanen <moilanen@austin.ibm.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
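The idea, as a runnable user-space C sketch: my_spin_lock and the GCC
__sync builtins are illustrative stand-ins for the real ppc64 assembler,
not the kernel code.

    #include <stdio.h>

    /* Busy-wait lock. The inner retest is marked "unlikely", i.e. we
       predict getting OUT of the loop, so the compiler lays out (and
       the CPU prefetches) the exit path - the change this patch makes. */
    static inline void my_spin_lock(volatile unsigned int *lock)
    {
        while (__sync_lock_test_and_set(lock, 1)) {
            while (__builtin_expect(*lock != 0, 0))
                ;    /* spin until the holder releases */
        }
    }

    static inline void my_spin_unlock(volatile unsigned int *lock)
    {
        __sync_lock_release(lock);
    }

    int main(void)
    {
        volatile unsigned int lock = 0;

        my_spin_lock(&lock);
        puts("lock taken");
        my_spin_unlock(&lock);
        return 0;
    }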
* [PATCH] ppc64: remove unnecessary include  (Anton Blanchard, 2005-05-01)
We no longer use any ppcdebug stuff in a.out.h, so remove the define.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] ppc64: noexec fixes  (Anton Blanchard, 2005-05-01)
There were a few issues with the ppc64 noexec support:

- The 64-bit ABI has a non-executable stack by default, yet at the
  moment 64-bit apps require a PT_GNU_STACK section in order to get a
  non-executable stack.

- Disable the read-implies-exec workaround on the 64-bit ABI. The
  64-bit toolchain has never had problems with incorrect mmap
  permissions (the 32-bit one has; that's why we need to retain the
  workaround there).

With these fixes, as well as a gcc fix from Alan Modra (that was
recently committed), 64-bit apps work as expected.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
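A sketch of what gating the workaround to the 32-bit ABI looks like. The
macro name matches the kernel's elf_read_implies_exec() hook; the exact
body here is an assumption based on the commit description:

    /* Only 32-bit tasks get the read-implies-exec workaround; 64-bit
       tasks honor PT_GNU_STACK (non-executable stack by default). */
    #define elf_read_implies_exec(ex, exec_stk) \
        (test_thread_flag(TIF_32BIT) ? (exec_stk != EXSTACK_DISABLE_X) : 0)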
* [PATCH] ppc64: update to use the new 4L headers  (Benjamin Herrenschmidt, 2005-05-01)
This patch converts ppc64 to use the generic pgtable-nopud.h instead of
the "fixup" header.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
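What the generic pgtable-nopud.h does, in a simplified sketch: the pud
level is folded into the pgd as a transparent wrapper. The pgd_t stand-in
below only exists to make the sketch self-contained; the real types come
from the arch headers:

    typedef struct { unsigned long pgd; } pgd_t;  /* arch stand-in */
    typedef struct { pgd_t pgd; } pud_t;          /* folded pud level */

    #define pud_none(pud)     0   /* a folded pud is never empty... */
    #define pud_bad(pud)      0   /* ...never bad... */
    #define pud_present(pud)  1   /* ...and always present */

    /* The single pud of a pgd entry lives at the same address. */
    static inline pud_t *pud_offset(pgd_t *pgd, unsigned long address)
    {
        return (pud_t *)pgd;
    }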
* [PATCH] freepgt: arch FIRST_USER_ADDRESS 0  (Hugh Dickins, 2005-04-19)
Replace misleading definition of FIRST_USER_PGD_NR 0 by definition of
FIRST_USER_ADDRESS 0 in all the MMU architectures beyond arm and arm26.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
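The rename side by side (illustrative; the point is that the new name is
a virtual address, where the old one read as a page-directory index):

    #define FIRST_USER_PGD_NR   0   /* old: index of first user PGD entry */
    #define FIRST_USER_ADDRESS  0   /* new: first user virtual address */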
* [PATCH] freepgt: hugetlb_free_pgd_range  (Hugh Dickins, 2005-04-19)
ia64 and ppc64 had hugetlb_free_pgtables functions which were no longer
being called, and it wasn't obvious what to do about them.

The ppc64 case turns out to be easy: the associated tables are noted
elsewhere and freed later, so it's safe to either skip its hugetlb areas
or go through the motions of freeing nothing. Since ia64 does need a
special case, restore to ppc64 the special case of skipping them.

The ia64 hugetlb case has been broken since pgd_addr_end went in, though
it probably appeared to work okay if you just had one such area; in fact
it's been broken much longer if you consider a long munmap spanning from
another region into the hugetlb region.

In the ia64 hugetlb region, more virtual address bits are available than
in the other regions, yet the page tables are structured the same way:
the page at the bottom is larger. Here we need to scale down each addr
before passing it to the standard free_pgd_range. Was about to write a
hugely_scaled_down macro, but found htlbpage_to_page already exists for
just this purpose. Fixed off-by-one in ia64 is_hugepage_only_range.

Uninline free_pgd_range to make it available to ia64. Make sure the
vma-gathering loop in free_pgtables cannot join a hugepage_only_range to
any other (safe to join huges? probably, but don't bother).

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
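A sketch of the restored ppc64 special case; the function name follows
the commit title, and the empty body reflects "go through the motions of
freeing nothing" - an illustration, not the literal patch:

    /* ppc64: hugetlb page tables are noted elsewhere and freed later,
       so freeing the pgd range over a hugepage-only area can safely
       do nothing here. */
    void hugetlb_free_pgd_range(struct mmu_gather **tlb,
                                unsigned long addr, unsigned long end,
                                unsigned long floor, unsigned long ceiling)
    {
        /* intentionally empty: skip hugetlb areas */
    }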
* [PATCH] freepgt: remove MM_VM_SIZE(mm)  (Hugh Dickins, 2005-04-19)
There's only one usage of MM_VM_SIZE(mm) left, and it's a troublesome
macro because mm doesn't contain the (32-bit emulation?) info needed.
But it too is only needed because we ignore the end from the vma list.

We could make flush_pgtables return that end, or unmap_vmas. Choose the
latter, since it's a natural fit with unmap_mapping_range_vma needing to
know its restart addr. This does make more than a minimal change, but if
unmap_vmas had returned the end before, this is how we'd have done it,
rather than storing the break_addr in zap_details.

unmap_vmas used to return the count of vmas scanned, but that's just
debug which hasn't been useful in a while; and if we want the
"map_count is 0 on exit" check back, it can easily come from the final
remove_vm_struct loop.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
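The interface change, sketched as before/after prototypes; the parameter
lists are reconstructed from the 2.6.12-era code and should be treated
as illustrative:

    /* before: returned a count of vmas scanned (debug only) */
    int unmap_vmas(struct mmu_gather **tlb, struct mm_struct *mm,
                   struct vm_area_struct *vma, unsigned long start_addr,
                   unsigned long end_addr, unsigned long *nr_accounted,
                   struct zap_details *details);

    /* after: returns the end address actually unmapped, so a caller
       like unmap_mapping_range_vma() knows where to restart, instead
       of fishing break_addr out of zap_details */
    unsigned long unmap_vmas(struct mmu_gather **tlb, struct mm_struct *mm,
                   struct vm_area_struct *vma, unsigned long start_addr,
                   unsigned long end_addr, unsigned long *nr_accounted,
                   struct zap_details *details);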
* [PATCH] ppc64: no prefetch for NULL pointers  (Olof Johansson, 2005-04-16)
For prefetches of NULL (as when walking a short linked list), PPC64 will
in some cases take a performance hit. The hardware needs to do the TLB
walk, and said walk will always miss, which means (up to) two L2 misses
as penalty. This seems to hurt overall performance, so for NULL pointers
skip the prefetch altogether.

Signed-off-by: Olof Johansson <olof@austin.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
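A sketch of the guarded prefetch helpers, modeled on the ppc64 dcbt/dcbtst
versions; treat the exact asm as illustrative:

    static inline void prefetch(const void *x)
    {
        if (unlikely(!x))
            return;    /* avoid a guaranteed TLB walk + L2 misses on NULL */
        __asm__ __volatile__("dcbt 0,%0" : : "r" (x));
    }

    static inline void prefetchw(const void *x)
    {
        if (unlikely(!x))
            return;
        __asm__ __volatile__("dcbtst 0,%0" : : "r" (x));
    }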
* [PATCH] ppc64: Improve mapping of vDSO  (Benjamin Herrenschmidt, 2005-04-16)
This patch reworks the way the ppc64 vDSO is mapped into user memory by
the kernel, to make it more robust against possible collisions with
executable segments. Instead of just whacking a VMA at 1Mb, I now use
get_unmapped_area() with a hint, and I moved the mapping of the vDSO to
after the mapping of the various ELF segments and of the interpreter, so
that conflicts get caught properly. (It still has to happen before
create_elf_tables, since the latter fills AT_SYSINFO_EHDR with the
proper address.)

While I was at it, I also changed the 32-bit and 64-bit vDSOs to link at
their "natural" address of 1Mb instead of 0. This is the address where
they are normally mapped in the absence of conflict. By doing so, it
should be possible to properly prelink them once that's been verified to
work with glibc.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
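A fragment sketching the new placement step in the exec path; VDSO_BASE
and the error handling are assumptions, while get_unmapped_area() and
AT_SYSINFO_EHDR come from the text above:

    #define VDSO_BASE  0x100000UL    /* preferred hint: 1Mb */

    /* Ask for the natural base, but let the kernel move the vDSO if an
       ELF segment or the interpreter already landed there. Must run
       before create_elf_tables() fills AT_SYSINFO_EHDR. */
    vdso_base = get_unmapped_area(NULL, VDSO_BASE,
                                  vdso_pages << PAGE_SHIFT, 0, 0);
    if (IS_ERR_VALUE(vdso_base))
        return (int)vdso_base;    /* no usable spot: fail the exec */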
* Linux-2.6.12-rc2 (tag: v2.6.12-rc2)  (Linus Torvalds, 2005-04-16)
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!