path: root/arch/ia64
Commit message / Author / Age
* revert "PIE randomization"Andrew Morton2007-07-21
| | | | | | | | | | | | | | | | | | | There are reports of this causing userspace failures (http://lkml.org/lkml/2007/7/20/421). Revert. Cc: Jan Kratochvil <honza@jikos.cz> Cc: Jiri Kosina <jkosina@suse.cz> Cc: Ingo Molnar <mingo@elte.hu> Cc: Roland McGrath <roland@redhat.com> Cc: Jakub Jelinek <jakub@redhat.com> Cc: Ulrich Kunitz <kune@deine-taler.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "Bret Towe" <magnade@gmail.com> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6 (Linus Torvalds, 2007-07-20)
  * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
    [IA64] Prevent people from directly including <asm/rwsem.h>.
    [IA64] remove time interpolator
    [IA64] Convert to generic timekeeping/clocksource
    [IA64] refresh some config files for 64K pagesize
    [IA64] Delete iosapic_free_rte()
    [IA64] fallocate system call
    [IA64] Enable percpu vector domain for IA64_DIG
    [IA64] Enable percpu vector domain for IA64_GENERIC
    [IA64] Support irq migration across domain
    [IA64] Add support for vector domain
    [IA64] Add mapping table between irq and vector
    [IA64] Check if irq is sharable
    [IA64] Fix invalid irq vector assumption for iosapic
    [IA64] Use dynamic irq for iosapic interrupts
    [IA64] Use per iosapic lock for indirect iosapic register access
    [IA64] Cleanup lock order in iosapic_register_intr
    [IA64] Remove duplicated members in iosapic_rte_info
    [IA64] Remove block structure for locking in iosapic.c
* Pull ia64-clocksource into release branch (Tony Luck, 2007-07-20)
* [IA64] Convert to generic timekeeping/clocksource (Tony Luck, 2007-07-20)
  This is a merge of Peter Keilty's initial patch (which was revived by Bob Picco) with Hidetoshi Seto's fixes and scaling improvements.
  Acked-by: Bob Picco <bob.picco@hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* Pull vector-domain into release branch (Tony Luck, 2007-07-19)
* [IA64] Delete iosapic_free_rte() (Yasuaki Ishimatsu, 2007-07-19)
  > arch/ia64/kernel/iosapic.c:597: warning: 'iosapic_free_rte' defined but not used
  >
  > This isn't spurious, the only call to iosapic_free_rte() has been removed, but there is still a call to iosapic_alloc_rte() ... which means we must have a memory leak.
  I did it on purpose (and gave the warning a miss...) and I consider iosapic_free_rte() no longer needed. I decided to retain iosapic_rte_info in order to keep the gsi-to-irq binding after a device is disabled. That does need some extra memory, but only "sizeof(iosapic_rte_info) * <the number of removed devices>" bytes, and there is no memory leak because re-enabled devices reuse the iosapic_rte_info they used before being disabled.
  Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Enable percpu vector domain for IA64_DIG (Yasuaki Ishimatsu, 2007-07-17)
  Add per-CPU vector domain support for IA64_DIG. It is enabled by adding the "vector=percpu" boot option.
  Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Enable percpu vector domain for IA64_GENERIC (Yasuaki Ishimatsu, 2007-07-17)
  Add per-CPU vector domain support for IA64_GENERIC. It is enabled by adding the "vector=percpu" boot option.
  Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Support irq migration across domain (Yasuaki Ishimatsu, 2007-07-17)
  Add support for IRQ migration across vector domains.
  Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Add support for vector domain (Yasuaki Ishimatsu, 2007-07-17)
  Add fundamental support for multiple vector domains. Even with this patch there is still only one vector domain, and IRQ migration across domains is not yet supported.
  Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Add mapping table between irq and vector (Yasuaki Ishimatsu, 2007-07-17)
  Add mapping tables between irqs and vectors, plus the code to manage them. This is necessary for supporting multiple vector domains, because the 1:1 mapping between irq and vector becomes n:1. The irq == vector relationship is explicitly retained for percpu interrupts, platform interrupts, ISA IRQs and vectors assigned using assign_irq_vector(), because some programs might depend on it. One further problem had to be considered: when PCI drivers enable/disable devices dynamically, the irq number can change to a different one, which could break suspend/resume. To address this, the gsi is bound to the irq.
  Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
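  A minimal sketch of the kind of n:1 irq-to-vector bookkeeping described above. The names and sizes here are illustrative only, not the actual arch/ia64/kernel/irq_ia64.c code:

    #include <linux/cpumask.h>
    #include <linux/irq.h>
    #include <linux/spinlock.h>

    /* Forward mapping: one entry per irq, giving the vector currently bound
     * to it and the cpu domain on which that vector is valid. */
    struct example_irq_cfg {
            int       vector;
            cpumask_t domain;
    };
    static struct example_irq_cfg example_irq_cfg[NR_IRQS];

    /* Reverse mapping: per cpu, which irq a given vector belongs to. With
     * multiple domains the same vector can map to different irqs on
     * different cpus, hence the per-cpu table (ia64 has 256 vectors). */
    static int example_vector_irq[NR_CPUS][256];
    static DEFINE_SPINLOCK(example_vector_lock);  /* protects both tables */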
* [IA64] Check if irq is sharable (Yasuaki Ishimatsu, 2007-07-17)
  Need to check whether an irq is sharable among handlers when searching for a sharable IOSAPIC irq.
  Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Fix invalid irq vector assumption for iosapic (Yasuaki Ishimatsu, 2007-07-17)
  Much of the IOSAPIC code depends on the following assumptions, which become invalid once multiple vector domains are supported:
  - 1:1 mapping between IRQ and vector
  - IRQ == vector
  To fix these invalid assumptions, this patch changes iosapic_intr_info[] to be indexed by irq number instead of by vector.
  Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Use dynamic irq for iosapic interrupts (Yasuaki Ishimatsu, 2007-07-17)
  Use create_irq()/destroy_irq() for iosapic interrupts.
  Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Use per iosapic lock for indirect iosapic register access (Yasuaki Ishimatsu, 2007-07-17)
  Use a per-iosapic lock for indirect iosapic register access. It reduces lock contention.
  Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Cleanup lock order in iosapic_register_intr (Yasuaki Ishimatsu, 2007-07-17)
  Clean up the order of irq_desc.lock and iosapic_lock in iosapic_register_intr() and iosapic_unregister_intr().
  Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Remove duplicated members in iosapic_rte_info (Yasuaki Ishimatsu, 2007-07-17)
  Remove duplicated members of iosapic_rte_info in iosapic.c. This patch has no functional changes.
  Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Remove block structure for locking in iosapic.c (Yasuaki Ishimatsu, 2007-07-17)
  Remove unnecessary indentation between spin_lock() and spin_unlock() in iosapic.c. This has no functional changes.
  Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] refresh some config files for 64K pagesize (Tony Luck, 2007-07-19)
  Update arch/ia64/defconfig: select 64K pagesize. Same for arch/ia64/configs/tiger_defconfig, plus CONFIG_COMPAT=n.
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] fallocate system call (David Chinner, 2007-07-19)
  sys_fallocate for ia64. This uses an empty slot, #1303, erroneously marked as reserved for move_pages (which had already been allocated as syscall #1276).
  Signed-Off-By: Dave Chinner <dgc@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
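  For reference, calling the new system call from userspace looks roughly like this. It is only an illustrative sketch using the glibc fallocate() wrapper; the file name and sizes are made up:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
            int fd = open("/tmp/prealloc-test", O_RDWR | O_CREAT, 0644);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            /* Preallocate 1 MiB of disk space starting at offset 0. */
            if (fallocate(fd, 0, 0, 1024 * 1024) != 0)
                    perror("fallocate");
            return 0;
    }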
* mm: Remove slab destructors from kmem_cache_create(). (Paul Mundt, 2007-07-19)
  Slab destructors were no longer supported after Christoph's c59def9f222d44bb7e2f0a559f2906191a0862d7 change. They've been BUGs for both slab and slub, and slob never supported them either. This rips out support for the dtor pointer from kmem_cache_create() completely and fixes up every single callsite in the kernel (there were about 224, not including the slab allocator definitions themselves, or the documentation references).
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
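  After this change a slab cache is created without the trailing destructor argument. A minimal sketch of a call site, with a made-up struct and cache name (a NULL constructor is passed to keep the sketch independent of the ctor prototype, which has also changed across kernel versions):

    #include <linux/init.h>
    #include <linux/slab.h>

    struct foo {
            int a;
            int b;
    };

    static struct kmem_cache *foo_cache;

    static int __init foo_cache_init(void)
    {
            /* Older kernels took a sixth "dtor" argument here; it is gone now. */
            foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo),
                                          0, SLAB_HWCACHE_ALIGN, NULL);
            return foo_cache ? 0 : -ENOMEM;
    }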
* [PATCH] sched: sched_cacheflush is now unused (Ralf Baechle, 2007-07-19)
  Since Ingo's recent scheduler rewrite, which was merged as commit 0437e109e1841607f2988891eaa36c531c6aa6ac, sched_cacheflush is unused.
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* mm: variable length argument support (Ollie Wild, 2007-07-19)
  Remove the arg+env limit of MAX_ARG_PAGES by copying the strings directly from the old mm into the new mm. We create the new mm before the binfmt code runs, and place the new stack at the very top of the address space. Once the binfmt code runs and figures out where the stack should be, we move it downwards. It is a bit peculiar in that we have one task with two mm's, one of which is inactive.
  [a.p.zijlstra@chello.nl: limit stack size] Signed-off-by: Ollie Wild <aaw@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: <linux-arch@vger.kernel.org> Cc: Hugh Dickins <hugh@veritas.com> [bunk@stusta.de: unexport bprm_mm_init] Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* use the new percpu interface for shared data (Fenghua Yu, 2007-07-19)
  Currently most of the per cpu data that is accessed by different cpus has a ____cacheline_aligned_in_smp attribute. Move all this data to the new per cpu shared data section: .data.percpu.shared_aligned. This separates the percpu data that is referenced frequently by other cpus from the local-only percpu data.
  Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Acked-by: Suresh Siddha <suresh.b.siddha@intel.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Christoph Lameter <clameter@sgi.com> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* define new percpu interface for shared data (Fenghua Yu, 2007-07-19)
  The per cpu data section contains two types of data: one set that is exclusively accessed by the local cpu, and another set that is per cpu but also shared with remote cpus. In the current kernel these two sets are not clearly separated, so the same data cacheline can end up shared between the two sets, which results in unnecessary bouncing of the cacheline between cpus.
  One way to fix the problem is to cacheline-align the remotely accessed per cpu data, both at the beginning and at the end. Because of the padding at both ends this would likely waste some memory, and the interface to achieve it is not clean.
  This patch moves the remotely accessed per cpu data (currently marked ____cacheline_aligned_in_smp) into a different section, where all the data elements are cacheline aligned. This cleanly differentiates the local-only data from the remotely accessed data.
  Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Acked-by: Suresh Siddha <suresh.b.siddha@intel.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Christoph Lameter <clameter@sgi.com> Cc: <linux-arch@vger.kernel.org> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
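  A before/after sketch of what callers do once this pair of patches is in place. The struct and variable names are invented; DEFINE_PER_CPU_SHARED_ALIGNED is the interface the series adds:

    #include <linux/cache.h>
    #include <linux/percpu.h>

    struct example_stats {
            unsigned long hits;
            unsigned long misses;
    };

    /* Old style: remotely-accessed percpu data sits in .data.percpu next to
     * local-only data, relying on an explicit alignment attribute. */
    static DEFINE_PER_CPU(struct example_stats, old_stats) ____cacheline_aligned_in_smp;

    /* New style: the variable is placed in the cacheline-aligned
     * .data.percpu.shared_aligned section instead. */
    static DEFINE_PER_CPU_SHARED_ALIGNED(struct example_stats, new_stats);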
* jprobes: make jprobes a little safer for users (Michael Ellerman, 2007-07-19)
  I realise jprobes are a razor-blades-included type of interface, but that doesn't mean we can't try and make them safer to use. This guy I know once wrote code like this:
    struct jprobe jp = { .kp.symbol_name = "foo", .entry = "jprobe_foo" };
  And then his kernel exploded. Oops.
  This patch adds an arch hook, arch_deref_entry_point() (I don't like it either) which takes the void * in a struct jprobe, and gives back the text address that it represents. We can then use that in register_jprobe() to check that the entry point we're passed is actually in the kernel text, rather than just some random value.
  Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com> Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
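  A rough sketch of the resulting check, assuming the generic hook simply returns the pointer while ia64/powerpc override it to follow the function descriptor; this is illustrative, not the exact upstream code:

    #include <linux/kernel.h>
    #include <linux/kprobes.h>

    /* Generic fallback: on most architectures a function pointer already is
     * the text address; ia64/powerpc provide their own version that
     * dereferences the function descriptor. */
    void __weak *arch_deref_entry_point(void *entry)
    {
            return entry;
    }

    /* The registration path can now reject bogus entry points. */
    static int example_check_jprobe(struct jprobe *jp)
    {
            unsigned long addr = (unsigned long)arch_deref_entry_point(jp->entry);

            if (!kernel_text_address(addr))
                    return -EINVAL;  /* e.g. .entry was a string, not a function */
            return 0;
    }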
* mm: fault feedback #2 (Nick Piggin, 2007-07-19)
  This patch completes Linus's wish that the fault return codes be made into bit flags, which I agree makes everything nicer. This requires all handle_mm_fault callers to be modified (possibly the modifications should go further and do things like fault accounting in handle_mm_fault -- however that would be for another patch).
  [akpm@linux-foundation.org: fix alpha build] [akpm@linux-foundation.org: fix s390 build] [akpm@linux-foundation.org: fix sparc build] [akpm@linux-foundation.org: fix sparc64 build] [akpm@linux-foundation.org: fix ia64 build] Signed-off-by: Nick Piggin <npiggin@suse.de> Cc: Richard Henderson <rth@twiddle.net> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Ian Molton <spyro@f2s.com> Cc: Bryan Wu <bryan.wu@analog.com> Cc: Mikael Starvik <starvik@axis.com> Cc: David Howells <dhowells@redhat.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Hirokazu Takata <takata@linux-m32r.org> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Roman Zippel <zippel@linux-m68k.org> Cc: Greg Ungerer <gerg@uclinux.org> Cc: Matthew Wilcox <willy@debian.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp> Cc: Richard Curnow <rc@rc0.org.uk> Cc: William Lee Irwin III <wli@holomorphy.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Jeff Dike <jdike@addtoit.com> Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it> Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp> Cc: Chris Zankel <chris@zankel.net> Acked-by: Kyle McMartin <kyle@mcmartin.ca> Acked-by: Haavard Skinnemoen <hskinnemoen@atmel.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Acked-by: Andi Kleen <ak@muc.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  [ Still apparently needs some ARM and PPC loving - Linus ] Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
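  The pattern each arch fault handler adopts for the bit-flag return value looks roughly like the decoder below. It is a hedged sketch rather than any particular architecture's code, and the helper name is made up:

    #include <linux/mm.h>
    #include <linux/sched.h>

    /* Interpret the bit-flag style return value of handle_mm_fault(). */
    static int example_classify_fault(unsigned int fault)
    {
            if (unlikely(fault & VM_FAULT_ERROR)) {
                    if (fault & VM_FAULT_OOM)
                            return -ENOMEM;   /* arch code jumps to its OOM path */
                    if (fault & VM_FAULT_SIGBUS)
                            return -EFAULT;   /* arch code raises SIGBUS */
                    BUG();                    /* unknown error bit */
            }
            if (fault & VM_FAULT_MAJOR)
                    current->maj_flt++;       /* the fault required I/O */
            else
                    current->min_flt++;
            return 0;
    }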
* Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6 (Linus Torvalds, 2007-07-17)
  * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
    [IA64] Clean away some code inside some non-existent CONFIG ifdefs
    [IA64] ar.itc access must really be after xtime_lock.sequence has been read
    [IA64] correctly count CPU objects in the ia64/sn hwperf interface
    [IA64] arbitary speed tty ioctl support
    [IA64] use machvec=dig on hpzx1 platforms
* [IA64] Clean away some code inside some non-existent CONFIG ifdefs (Tony Luck, 2007-07-13)
  Robert P.J. Day has a script that finds places in the code that use non-existent CONFIG variables. It complained of two uses in ia64 specific code: CONFIG_IA64_SDV and CONFIG_KDB (both used in the hp/sim code).
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] ar.itc access must really be after xtime_lock.sequence has been read (Hidetoshi Seto, 2007-07-13)
  The ".acq" semantics of the load only apply w.r.t. other data accesses. Reading the clock (ar.itc) isn't a data access, so strange things can happen here. Specifically, the read of ar.itc can be launched as soon as the read of xtime_lock.sequence is ISSUED. Since that read may cache miss, which might cause a thread switch, and since there may be cache contention for the line containing xtime_lock, it can be a long time before the actual value is returned, so the ar.itc value may be very stale.
  Move the consumption of r28 up before the read of ar.itc to make sure that we really have the current value of xtime_lock.sequence before looking at ar.itc.
  Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] correctly count CPU objects in the ia64/sn hwperf interface (Mark Goodwin, 2007-07-13)
  Correctly count CPU objects for the SGI ia64/sn hwperf interface.
  Signed-off-by: Mark Goodwin <markgw@sgi.com> Signed-off-by: Jack Steiner <steiner@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] use machvec=dig on hpzx1 platforms (Terry Loftin, 2007-07-13)
  On HP zx1 machines, the 'machvec=dig' parameter is needed for the kdump kernel to avoid problems with the HP sba iommu. The problem is that during the boot of the kdump kernel, the iommu is re-initialized, so in-flight DMA from improperly shutdown drivers causes an IOTLB miss which leads to an MCA. With kdump, the idea is to get into the kdump kernel with as little code as we can, so shutting down drivers properly is not an option.
  The workaround is to add 'machvec=dig' to the kdump kernel boot parameters. This makes the kdump kernel avoid using the sba iommu altogether, leaving the IOTLB intact. Any ongoing DMA falls harmlessly outside the kdump kernel. After the kdump kernel reboots, all devices will have been shutdown properly and DMA stopped.
  This patch pushes that functionality into the sba iommu initialization code, so that users won't have to find the obscure documentation telling them about 'machvec=dig'. This patch only affects HP platforms. It still includes one extern declaration in the file, because no applicable header file exists.
  Signed-off-by: Terry Loftin <terry.loftin@hp.com> Signed-off-by: Alex Williamson <alex.williamson@hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* missing exports of csum_... (Al Viro, 2007-07-17)
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Kprobes on select architectures no longer EXPERIMENTAL (Ananth N Mavinakayanahalli, 2007-07-17)
  Based on usage and testing over the past couple of years, kprobes on i386, ia64, powerpc and x86_64 is no longer EXPERIMENTAL. This is a follow-up to Robert P.J. Day's patch making "Instrumentation support" non-EXPERIMENTAL: http://marc.info/?l=linux-kernel&m=118396955423812&w=2
  Arch maintainers for sparc64, avr32 and s390 need to take a similar call.
  Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Andi Kleen <ak@suse.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Report that kernel is tainted if there was an OOPS (Pavel Emelianov, 2007-07-17)
  If the kernel OOPSed or BUGed then it probably should be considered as tainted. Thus, all subsequent OOPSes and SysRq dumps will report the tainted kernel. This saves a lot of time explaining oddities in the calltraces.
  Signed-off-by: Pavel Emelianov <xemul@openvz.org> Acked-by: Randy Dunlap <randy.dunlap@oracle.com> Cc: <linux-arch@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> [ Added parisc patch from Matthew Wilson -Linus ] Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* handle kernelcore=: generic (Mel Gorman, 2007-07-17)
  This patch adds the kernelcore= parameter for x86. Once all patches are applied, a new command-line parameter and a new sysctl exist. This patch adds the necessary documentation.
  From: Yasunori Goto <y-goto@jp.fujitsu.com>
  When the "kernelcore" boot option is specified, the kernel can't boot up on ia64 because of an infinite loop. In addition, the parsing code can be handled in an architecture-independent manner. This patch uses common code to handle the kernelcore= parameter. It is only available to architectures that support arch-independent zone-sizing (i.e. those that define CONFIG_ARCH_POPULATES_NODE_MAP). Other architectures will ignore the boot parameter.
  [bunk@stusta.de: make cmdline_parse_kernelcore() static] Signed-off-by: Mel Gorman <mel@csn.ul.ie> Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com> Acked-by: Andy Whitcroft <apw@shadowen.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
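  A simplified sketch of how such a boot parameter is parsed with the generic early_param machinery. The function name follows the commit's cmdline_parse_kernelcore(), but the body shown is a guess for illustration, not the actual mm/page_alloc.c code:

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/mm.h>

    static unsigned long required_kernelcore;  /* pages of non-movable "core" memory */

    static int __init cmdline_parse_kernelcore(char *p)
    {
            if (!p)
                    return -EINVAL;
            /* Accept suffixes such as 512M or 2G and convert to pages. */
            required_kernelcore = memparse(p, &p) >> PAGE_SHIFT;
            return 0;
    }
    early_param("kernelcore", cmdline_parse_kernelcore);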
* diskquota: 32bit quota tools on 64bit architectures (Vasily Tarasov, 2007-07-16)
  The OpenVZ Linux kernel team has discovered a problem with 32-bit quota tools running on 64-bit architectures. In the 2.6.10 kernel the sys32_quotactl() function was replaced by sys_quotactl() with the comment "sys_quotactl seems to be 32/64bit clean, enable it for 32bit". However this isn't right. Look at the if_dqblk structure:
    struct if_dqblk {
            __u64 dqb_bhardlimit;
            __u64 dqb_bsoftlimit;
            __u64 dqb_curspace;
            __u64 dqb_ihardlimit;
            __u64 dqb_isoftlimit;
            __u64 dqb_curinodes;
            __u64 dqb_btime;
            __u64 dqb_itime;
            __u32 dqb_valid;
    };
  For 32-bit quota tools sizeof(if_dqblk) == 0x44, but for a 64-bit kernel its size is 0x48, because of alignment. Thus we have a problem. The attached patch reintroduces the sys32_quotactl() function, which handles this and related situations.
  [michal.k.k.piotrowski@gmail.com: build fix] [akpm@linux-foundation.org: Make it link with CONFIG_QUOTA=n] Signed-off-by: Vasily Tarasov <vtaras@openvz.org> Cc: Andi Kleen <ak@suse.de> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Jan Kara <jack@ucw.cz> Cc: <linux-arch@vger.kernel.org> Signed-off-by: Michal Piotrowski <michal.k.k.piotrowski@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
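  The size difference comes purely from tail padding: on i386 __u64 is only 4-byte aligned, so the trailing __u32 ends the struct at 0x44, while on a 64-bit kernel the 8-byte alignment of __u64 pads the struct out to 0x48. A tiny userspace illustration of the same effect (not the kernel's compat code; the struct names are made up):

    #include <stdint.h>
    #include <stdio.h>

    /* Natural 64-bit layout: the trailing u32 forces 4 bytes of tail padding. */
    struct native_blk {
            uint64_t limit;
            uint32_t valid;
    };

    /* Approximation of the i386 layout, where 64-bit members are only
     * 4-byte aligned, so there is no tail padding. */
    struct compat_blk {
            uint64_t limit;
            uint32_t valid;
    } __attribute__((packed, aligned(4)));

    int main(void)
    {
            printf("native: %zu bytes, compat: %zu bytes\n",
                   sizeof(struct native_blk), sizeof(struct compat_blk));
            /* On x86-64 this prints "native: 16 bytes, compat: 12 bytes". */
            return 0;
    }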
* PIE randomization (Jan Kratochvil, 2007-07-16)
  This patch is using mmap()'s randomization functionality in such a way that it maps the main executable of (specially compiled/linked -pie/-fpie) ET_DYN binaries onto a random address (in cases in which mmap() is allowed to perform a randomization). Origin of this patch is in exec-shield (http://people.redhat.com/mingo/exec-shield/).
  [jkosina@suse.cz: pie randomization: fix BAD_ADDR macro] Signed-off-by: Jan Kratochvil <honza@jikos.cz> Signed-off-by: Jiri Kosina <jkosina@suse.cz> Cc: Ingo Molnar <mingo@elte.hu> Cc: Roland McGrath <roland@redhat.com> Cc: Jakub Jelinek <jakub@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* serial: convert early_uart to earlycon for 8250 (Yinghai Lu, 2007-07-16)
  Because SERIAL_PORT_DFNS was removed from include/asm-i386/serial.h and include/asm-x86_64/serial.h, the serial8250_ports need to be probed late in the serial initializing stage. The console_init=>serial8250_console_init=>register_console=>serial8250_console_setup path will return -ENODEV, and console ttyS0 cannot be enabled at that time. We would have to wait until uart_add_one_port in drivers/serial/serial_core.c calls register_console to get console ttyS0, and that is too late.
  Make early_uart use early_param, so the uart console can be used earlier. Make it a bootconsole with the CON_BOOT flag, so it can use the console handover feature and will switch to the corresponding normal serial console automatically. The new command line will be:
    console=uart8250,io,0x3f8,9600n8
    console=uart8250,mmio,0xff5e0000,115200n8
  or
    earlycon=uart8250,io,0x3f8,9600n8
    earlycon=uart8250,mmio,0xff5e0000,115200n8
  It will print in a very early stage:
    Early serial console at I/O port 0x3f8 (options '9600n8')
    console [uart0] enabled
  Later, for the console, it will print:
    console handover: boot [uart0] -> real [ttyS0]
  Signed-off-by: <yinghai.lu@sun.com> Cc: Andi Kleen <ak@suse.de> Cc: Bjorn Helgaas <bjorn.helgaas@hp.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Gerd Hoffmann <kraxel@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6 (Linus Torvalds, 2007-07-12)
  * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
    [IA64] Support multiple CPUs going through OS_MCA
    [IA64] silence GCC ia64 unused variable warnings
    [IA64] prevent MCA when performing MMIO mmap to PCI config space
    [IA64] add sn_register_pmi_handler oemcall
    [IA64] Stop bit for brl instruction
    [IA64] SN: Correct ROM resource length for BIOS copy
    [IA64] Don't set psr.ic and psr.i simultaneously
* [IA64] Support multiple CPUs going through OS_MCA (Russ Anderson, 2007-07-11)
  Linux does not gracefully deal with multiple processors going through OS_MCA as part of the same MCA event. The first cpu into OS_MCA grabs the ia64_mca_serialize lock. Subsequent cpus wait for that lock, preventing them from reporting in as rendezvoused. The first cpu waits 5 seconds and then complains that all the cpus have not rendezvoused. The first cpu then handles its MCA, frees up all the rendezvoused cpus and releases the ia64_mca_serialize lock. One of the subsequent cpus going through OS_MCA then gets the ia64_mca_serialize lock, waits another 5 seconds and then complains that none of the other cpus have rendezvoused.
  This patch allows multiple CPUs to gracefully go through OS_MCA. The first CPU into ia64_mca_handler() grabs an mca_count lock. Subsequent CPUs into ia64_mca_handler() are added to a list of cpus that need to go through OS_MCA (a bit set in mca_cpu), report in as rendezvoused, and spin waiting their turn. The first CPU sees everyone rendezvous, handles its MCA, wakes up one of the other CPUs waiting to process their MCA (by clearing one mca_cpu bit), and then waits for the other cpus to complete their MCA handling. The next CPU handles its MCA and the process repeats until all the CPUs have handled their MCA. When the last CPU has handled its MCA, it sets monarch_cpu to -1, releasing all the CPUs.
  In testing this works more reliably and faster. Thanks to Keith Owens for suggesting numerous improvements to this code.
  Signed-off-by: Russ Anderson <rja@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
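  A heavily simplified sketch of the handoff scheme described above. The names loosely follow the commit message, but this is not the real arch/ia64/kernel/mca.c code, and the real handler also deals with rendezvous timeouts and monarch selection:

    #include <linux/atomic.h>
    #include <linux/cpumask.h>
    #include <linux/sched.h>

    static cpumask_t mca_cpu;       /* cpus still waiting to run their OS_MCA handling */
    static atomic_t  mca_count = ATOMIC_INIT(0);

    /* Called on every cpu that takes an MCA as part of the same event. */
    static void example_os_mca_serialize(int cpu)
    {
            if (atomic_add_return(1, &mca_count) == 1) {
                    /* First cpu in: it handles its MCA while the others wait. */
            } else {
                    cpumask_set_cpu(cpu, &mca_cpu);          /* report in as rendezvoused */
                    while (cpumask_test_cpu(cpu, &mca_cpu))  /* spin until woken */
                            cpu_relax();
            }
            /* ...handle this cpu's MCA, then clear one bit in mca_cpu to wake
             * the next waiter, or release everyone once the mask is empty... */
    }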
* [IA64] silence GCC ia64 unused variable warnings (Jes Sorensen, 2007-07-11)
  Tell GCC to stop spewing out unnecessary warnings for unused variables passed to functions as pointers for ia64 files.
  Signed-off-by: Jes Sorensen <jes@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] prevent MCA when performing MMIO mmap to PCI config space (Alex Chiang, 2007-07-11)
  Example memory map (HP rx7640 with 'default' acpiconfig setting, VGA disabled):
    0x00000000 - 0x3FFFBFFF supports only WB (cacheable) access
  If a user attempts to perform an MMIO mmap (using the PCIIOC_MMAP_IS_MEM ioctl) to PCI config space (like mmap'ing and accessing memory at 0xA0000), we will MCA because the kernel will attempt to use a mapping with the UC attribute. So check the memory attribute in kern_mmap and the EFI memmap. If WC is requested, and WC or UC access is supported for the region, allow it. Otherwise, use the same attribute the kernel uses. Updates documentation and test cases as well.
  Signed-off-by: Alex Chiang <achiang@hp.com> Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Stop bit for brl instruction (Christian Kandeler, 2007-07-09)
  The SDM says that the brl instruction must be followed by a stop bit. Fix the instance in BRL_COND_FSYS_BUBBLE_DOWN where it isn't.
  Signed-off-by: Christian Kandeler <christian.kandeler@hob.de> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] SN: Correct ROM resource length for BIOS copy (John Keller, 2007-07-09)
  On SN systems, when setting the IORESOURCE_ROM_BIOS_COPY resource flag, the resource length should be set to the actual size of the ROM image so that a call to pci_map_rom() returns the correct size.
  Signed-off-by: John Keller <jpk@sgi.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Don't set psr.ic and psr.i simultaneously (Tony Luck, 2007-07-09)
  It's not a good idea to use "ssm psr.ic | psr.i" to simultaneously enable interrupts and interrupt state collection: the two bits can take effect asynchronously, so it is possible for an interrupt to be serviced while psr.ic is still zero.
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* PCI: Only build PCI syscalls on architectures that want them (Matthew Wilcox, 2007-07-11)
  The PCI syscalls are built on every architecture except X86, but only a few have ever hooked them up. Use a new Kconfig symbol to save a couple of kB on the architectures that have never used the syscalls. Tested on x86 and ia64 only.
  Signed-off-by: Matthew Wilcox <matthew@wil.cx> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
* | sched: zap the migration init / cache-hot balancing codeIngo Molnar2007-07-09
|/ | | | | | | | | | | | | | | | | | | | | | the SMP load-balancer uses the boot-time migration-cost estimation code to attempt to improve the quality of balancing. The reason for this code is that the discrete priority queues do not preserve the order of scheduling accurately, so the load-balancer skips tasks that were running on a CPU 'recently'. this code is fundamental fragile: the boot-time migration cost detector doesnt really work on systems that had large L3 caches, it caused boot delays on large systems and the whole cache-hot concept made the balancing code pretty undeterministic as well. (and hey, i wrote most of it, so i can say it out loud that it sucks ;-) under CFS the same purpose of cache affinity can be achieved without any special cache-hot special-case: tasks are sorted in the 'timeline' tree and the SMP balancer picks tasks from the left side of the tree, thus the most cache-cold task is balanced automatically. Signed-off-by: Ingo Molnar <mingo@elte.hu>
* [IA64] Make SN2 PCI code use ioremap rather than manually mangle the address (Jes Sorensen, 2007-06-26)
  This changes the SN2 specific PCI drivers to use ioremap() for obtaining the real address to access for the PCI registers, instead of manually calculating them with __IA64_UNCACHED_OFFSET. The patch should cause no real change when running on a normal Linux kernel, but it is needed when running paravirtualized.
  Signed-off-by: Jes Sorenson <jes@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
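  The general pattern the drivers move to looks like this hedged sketch. The register offset, function name and sizes are made up; the point is that the driver reads through the cookie returned by ioremap() instead of or-ing __IA64_UNCACHED_OFFSET into a physical address itself:

    #include <linux/errno.h>
    #include <linux/io.h>

    #define EXAMPLE_REG_OFFSET  0x10  /* hypothetical register offset */

    static int example_read_pci_reg(unsigned long phys_addr, unsigned long size,
                                    u32 *val)
    {
            void __iomem *regs = ioremap(phys_addr, size);

            if (!regs)
                    return -ENOMEM;
            *val = readl(regs + EXAMPLE_REG_OFFSET);
            iounmap(regs);
            return 0;
    }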
* [IA64] Force error to surface in nofault code (Russ Anderson, 2007-06-26)
  Montecito behaves slightly differently than previous processors, resulting in the MCA due to a failed PIO read sometimes surfacing outside the nofault code. Adding an additional "or" instruction and stop bits ensures the MCA surfaces in the nofault code.
  Signed-off-by: Russ Anderson <rja@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>