path: root/arch/ia64/kernel
Commit message (Author, Date)
* [IA64] Shrink shadow_flush_counts to a short array to save 8k of per_cpu area. (Robin Holt, 2008-08-18)
    Making allmodconfig will break the current build. This patch shrinks the per_cpu__shadow_flush_counts from 16k to 8k, which frees enough space to allow allmodconfig to successfully complete. Fixes http://bugzilla.kernel.org/show_bug.cgi?id=11338
    Signed-off-by: Robin Holt <holt@sgi.com>; Acked-by: Jack Steiner <steiner@sgi.com>; Signed-off-by: Tony Luck <tony.luck@intel.com>
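    The 16k-to-8k arithmetic corresponds to halving the element size of a per-cpu array; a minimal sketch of that kind of change, assuming NR_CPUS is 4096 in the allmodconfig build (the exact array and config values are assumptions):
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #include <linux/percpu.h>

    /* Before: 4 bytes * NR_CPUS (4096 assumed) = 16k of per-cpu space.   */
    /* static DEFINE_PER_CPU(unsigned int, shadow_flush_counts[NR_CPUS]); */

    /* After: 2 bytes * NR_CPUS = 8k, small enough for the per-cpu area.  */
    static DEFINE_PER_CPU(unsigned short, shadow_flush_counts[NR_CPUS]);
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~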
* [IA64] Ensure cpu0 can access per-cpu variables in early boot code (Tony Luck, 2008-08-12)
    ia64 handles per-cpu variables a little differently from other architectures in that it maps the physical memory allocated for each cpu at a constant virtual address (0xffffffffffff0000). This mapping is not enabled until the architecture-specific cpu_init() function is run, which causes problems since some generic code runs before this point. In particular, when CONFIG_PRINTK_TIME is enabled the boot cpu will trap on the access to per-cpu memory at the first printk() call, so the boot will fail without the kernel printing anything to the console.
    Fix this by allocating percpu memory for cpu0 in the kernel data section and doing all initialization to enable percpu access in head.S before calling any generic code. Other cpus must take care not to access per-cpu variables too early, but their code path from start_secondary() to cpu_init() is entirely in arch/ia64.
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Cleanup generated file not ignored by .gitignore (Robin Holt, 2008-08-04)
    arch/ia64/kernel/vmlinux.lds is a generated file. Tell git to ignore it.
    Signed-off-by: Robin Holt <holt@sgi.com>; Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] pv_ops: fix ivt.S paravirtualization (Isaku Yamahata, 2008-08-04)
    Recent kernels are not booting on some HP systems (though they do boot on others). James and Willy reported the problem, and James did the bisection to find the commit that caused it: 498c5170472ff0c03a29d22dbd33225a0be038f4, "[IA64] pvops: paravirtualize ivt.S". Two instructions were wrongly paravirtualized such that the _FROM_ macro had been used where _TO_ was intended.
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>; Cc: "Wilcox, Matthew R" <matthew.r.wilcox@intel.com>; Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>; Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Move include/asm-ia64 to arch/ia64/include/asm (Tony Luck, 2008-08-01)
    After moving the include files there were a few clean-ups:
    1) Some files used #include <asm-ia64/xyz.h>; changed to <asm/xyz.h>.
    2) Some comments alerted maintainers to look at various header files to make matching updates if certain code were to be changed. Updated these comments to use the new include paths.
    3) Some header files mentioned their own names in initial comments. Just deleted these self references.
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* tracehook: wait_task_inactive (Roland McGrath, 2008-07-26)
    This extends wait_task_inactive() with a new argument so it can be used in a "soft" mode where it will check for the task changing state unexpectedly and back off. There is no change to existing callers. This lays the groundwork to allow robust, noninvasive tracing that can try to sample a blocked thread but back off safely if it wakes up.
    Signed-off-by: Roland McGrath <roland@redhat.com>; Cc: Oleg Nesterov <oleg@tv-sign.ru>; Reviewed-by: Ingo Molnar <mingo@elte.hu>; Signed-off-by: Andrew Morton <akpm@linux-foundation.org>; Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
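    The "soft" mode described above maps onto a two-argument call where the caller names the state it expects; a sketch of the calling pattern, with the TASK_TRACED example and error handling being illustrative assumptions rather than code from the commit:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #include <linux/sched.h>
    #include <linux/errno.h>

    static int sample_traced_task(struct task_struct *task)
    {
        /* Old behaviour is kept by passing 0: wait unconditionally. */
        /* wait_task_inactive(task, 0); */

        /* Soft mode: succeed only if the task is still TASK_TRACED;
         * a zero return means it changed state, so back off safely. */
        if (!wait_task_inactive(task, TASK_TRACED))
            return -EAGAIN;

        /* ... the blocked task can now be sampled ... */
        return 0;
    }
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~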
* Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6 (Linus Torvalds, 2008-07-25)
    * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
      [IA64] Wire up new system calls
| * [IA64] Wire up new system calls (Tony Luck, 2008-07-25)
    Six new system calls: signalfd4, eventfd2, epoll_create1, dup3, pipe2 and inotify_init1.
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* | kprobes: improve kretprobe scalability with hashed locking (Srinivasa D S, 2008-07-25)
    Currently the list of kretprobe instances is stored in the kretprobe object (as used_instances, free_instances) and in the kretprobe hash table. We have one global kretprobe lock to serialise access to these lists, which causes only one kretprobe handler to execute at a time. This hurts system performance, particularly on SMP systems and when return probes are set on a lot of functions (like on all system calls).
    The solution proposed here uses fine-grained locks that perform better on SMP systems than the present kretprobe implementation:
    1) Instead of one global lock protecting the kretprobe instances in both the kretprobe object and the kretprobe hash table, we have two locks: one protecting the kretprobe hash table and another for the kretprobe object.
    2) We hold the lock in the kretprobe object while modifying a kretprobe instance in that object, and the per-hash-list lock while modifying kretprobe instances on that hash list. To prevent deadlock, we never grab a per-hash-list lock while holding a kretprobe lock.
    3) We can remove used_instances from struct kretprobe, as used kretprobe instances can be tracked via the kretprobe hash table.
    Kernel compilation time ("make -j 8") on an 8-way ppc64 system with return probes set on all system calls:
                Un-patched kernel   cacheline aligned patch   non-cacheline aligned patch
    real        9m46.784s           9m54.412s                 10m2.450s
    user        40m5.715s           40m7.142s                 40m4.273s
    sys         2m57.754s           2m58.583s                 3m17.430s
    Kernel compilation time on the same system when the kernel is not probed: real 9m26.389s, user 40m8.775s, sys 2m7.283s.
    Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>; Signed-off-by: Jim Keniston <jkenisto@us.ibm.com>; Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>; Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>; Cc: David S. Miller <davem@davemloft.net>; Cc: Masami Hiramatsu <mhiramat@redhat.com>; Signed-off-by: Andrew Morton <akpm@linux-foundation.org>; Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
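    The per-hash-list locking in point 2 boils down to picking a spinlock by hashing the task; a generic sketch of that scheme (the bucket count, names, and hash-on-task choice follow the upstream design but should be read as assumptions):
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #include <linux/spinlock.h>
    #include <linux/hash.h>
    #include <linux/sched.h>

    #define KRETPROBE_TABLE_BITS 6
    #define KRETPROBE_TABLE_SIZE (1 << KRETPROBE_TABLE_BITS)

    /* One lock per hash bucket instead of a single global kretprobe lock;
     * each entry is set up with spin_lock_init() at boot (not shown). */
    static spinlock_t kretprobe_table_locks[KRETPROBE_TABLE_SIZE];

    /* The bucket is chosen by hashing the task pointer, so handlers fired
     * by different tasks usually take different locks and run in parallel. */
    static spinlock_t *kretprobe_bucket_lock(struct task_struct *tsk)
    {
        return &kretprobe_table_locks[hash_ptr(tsk, KRETPROBE_TABLE_BITS)];
    }

    static void touch_instances_for(struct task_struct *tsk)
    {
        unsigned long flags;

        spin_lock_irqsave(kretprobe_bucket_lock(tsk), flags);
        /* ... add or remove kretprobe instances on this bucket's list ... */
        spin_unlock_irqrestore(kretprobe_bucket_lock(tsk), flags);
    }
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~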
* flag parameters: pipe (Ulrich Drepper, 2008-07-24)
    This patch introduces the new syscall pipe2 which is like pipe but it also takes an additional parameter which takes a flag value. This patch implements the handling of O_CLOEXEC for the flag.
    I did not add support for the new syscall for the architectures which have a special sys_pipe implementation. I think the maintainers of those archs have the chance to go with the unified implementation but that's up to them.
    The implementation introduces do_pipe_flags. I did that instead of changing all callers of do_pipe because some of the callers are written in assembler. I would probably screw up changing the assembly code. To avoid breaking code do_pipe is now a small wrapper around do_pipe_flags. Once all callers are changed over to do_pipe_flags the old do_pipe function can be removed.
    The following test must be adjusted for architectures other than x86 and x86-64 and in case the syscall numbers changed.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #ifndef __NR_pipe2
    # ifdef __x86_64__
    #  define __NR_pipe2 293
    # elif defined __i386__
    #  define __NR_pipe2 331
    # else
    #  error "need __NR_pipe2"
    # endif
    #endif

    int main (void)
    {
      int fd[2];
      if (syscall (__NR_pipe2, fd, 0) != 0)
        {
          puts ("pipe2(0) failed");
          return 1;
        }
      for (int i = 0; i < 2; ++i)
        {
          int coe = fcntl (fd[i], F_GETFD);
          if (coe == -1)
            {
              puts ("fcntl failed");
              return 1;
            }
          if (coe & FD_CLOEXEC)
            {
              printf ("pipe2(0) set close-on-exit for fd[%d]\n", i);
              return 1;
            }
        }
      close (fd[0]);
      close (fd[1]);

      if (syscall (__NR_pipe2, fd, O_CLOEXEC) != 0)
        {
          puts ("pipe2(O_CLOEXEC) failed");
          return 1;
        }
      for (int i = 0; i < 2; ++i)
        {
          int coe = fcntl (fd[i], F_GETFD);
          if (coe == -1)
            {
              puts ("fcntl failed");
              return 1;
            }
          if ((coe & FD_CLOEXEC) == 0)
            {
              printf ("pipe2(O_CLOEXEC) does not set close-on-exit for fd[%d]\n", i);
              return 1;
            }
        }
      close (fd[0]);
      close (fd[1]);

      puts ("OK");
      return 0;
    }
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Signed-off-by: Ulrich Drepper <drepper@redhat.com>; Acked-by: Davide Libenzi <davidel@xmailserver.org>; Cc: Michael Kerrisk <mtk.manpages@googlemail.com>; Cc: <linux-arch@vger.kernel.org>; Signed-off-by: Andrew Morton <akpm@linux-foundation.org>; Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
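    The "do_pipe is now a small wrapper around do_pipe_flags" compatibility step amounts to the following sketch (prototypes inferred from the description above, not quoted from the patch):
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    /* New flags-aware implementation; honours O_CLOEXEC and future flags. */
    int do_pipe_flags(int *fd, int flags);

    /* Old entry point kept for existing callers (including assembly ones):
     * it simply forwards with no flags, so behaviour is unchanged. */
    int do_pipe(int *fd)
    {
        return do_pipe_flags(fd, 0);
    }
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~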
* sysdev: Pass the attribute to the low level sysdev show/store function (Andi Kleen, 2008-07-22)
    This allows attributes to be generated dynamically and show/store functions to be shared between attributes. Right now most attributes are generated by special macros and lots of duplicated code. With the attribute passed in, it is instead possible to attach some data to the attribute and then use that in shared low-level functions to do different things.
    I need this for the dynamically generated bank attributes in the x86 machine check code, but it will allow some further cleanups. I converted all users in tree to the new show/store prototype. It's a single huge patch to avoid unbisectable sections.
    Runtime tested: x86-32, x86-64. Compiled only: ia64, powerpc. Not compile tested/only grep converted: sh, arm, avr32.
    Signed-off-by: Andi Kleen <ak@linux.intel.com>; Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
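    The prototype change this implies for sysdev attributes looks roughly like the following (the struct and member names reflect the sysdev API of that era, but treat the details as assumptions):
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #include <linux/kernel.h>
    #include <linux/sysdev.h>

    /* Old style: the show routine only knows which device it serves.  */
    /*   ssize_t (*show)(struct sys_device *dev, char *buf);           */

    /* New style: the attribute itself is passed in, so one shared routine
     * can inspect attr (e.g. its name, or a containing structure) and
     * behave differently per attribute. */
    static ssize_t shared_show(struct sys_device *dev,
                               struct sysdev_attribute *attr, char *buf)
    {
        return sprintf(buf, "%s\n", attr->attr.name);
    }

    static SYSDEV_ATTR(example, 0444, shared_show, NULL);
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~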
* [IA64] Avoid overflowing ia64_cpu_to_sapicid in acpi_map_lsapic() (Alex Chiang, 2008-07-17)
    acpi_map_lsapic tries to stuff a long into ia64_cpu_to_sapicid[], which can only hold ints, so let's fix that. We need to update the signature of acpi_map_cpu2node() too.
    Signed-off-by: Alex Chiang <achiang@hp.com>; Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] adding parameter check to module_free() (Akiyama, Nobuyuki, 2008-07-17)
    module_free() dereferences its first parameter before checking it, but it is called as below (in kernel/kprobes.c) with the first parameter always NULL. This happens when many probe points (>1024) are set by kprobes. I encountered this while using SystemTap, which can set many probes easily.
    static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx)
    {
        ...
        if (kip->nused == 0) {
            hlist_del(&kip->hlist);
            if (hlist_empty(&kprobe_insn_pages)) {
                ...
            } else {
                module_free(NULL, kip->insns); //<<< 1st param always NULL
                kfree(kip);
            }
            return 1;
        }
        return 0;
    }
    Signed-off-by: Akiyama, Nobuyuki <akiyama.nobuyuk@jp.fujitsu.com>; Signed-off-by: Tony Luck <tony.luck@intel.com>
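    The corresponding fix is to tolerate a NULL module pointer before touching module state; a sketch of a guarded ia64 module_free(), with the unwind-table handling being an assumption about that function rather than a quote from the patch:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #include <linux/module.h>
    #include <linux/vmalloc.h>
    #include <asm/unwind.h>

    void module_free(struct module *mod, void *module_region)
    {
        /* Only dereference 'mod' when the caller actually supplied one. */
        if (mod && mod->arch.init_unw_table &&
            module_region == mod->module_init) {
            unw_remove_unwind_table(mod->arch.init_unw_table);
            mod->arch.init_unw_table = NULL;
        }
        /* Freeing the region itself never needed the module pointer. */
        vfree(module_region);
    }
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~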
* [IA64] improper printk format in acpi-cpufreq (Denis V. Lunev, 2008-07-17)
    When dprintk is enabled the following warnings are generated:
    arch/ia64/kernel/cpufreq/acpi-cpufreq.c: In function 'processor_set_pstate':
    arch/ia64/kernel/cpufreq/acpi-cpufreq.c:54: warning: format '%x' expects type 'unsigned int', but argument 3 has type 's64'
    arch/ia64/kernel/cpufreq/acpi-cpufreq.c: In function 'processor_get_pstate':
    arch/ia64/kernel/cpufreq/acpi-cpufreq.c:76: warning: format '%x' expects type 'unsigned int', but argument 2 has type 's64'
    Signed-off-by: Denis V. Lunev <den@openvz.org>; Signed-off-by: Tony Luck <tony.luck@intel.com>
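    The usual fix for this class of warning is a matching length modifier or an explicit cast at the call site; a generic sketch (the variable name is made up for illustration):
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #include <linux/kernel.h>
    #include <linux/types.h>

    static void show_pal_result(s64 result)
    {
        /* Mismatched: '%x' expects unsigned int, but 'result' is an s64. */
        /* printk(KERN_DEBUG "acpi-cpufreq: result = %x\n", result);      */

        /* Fixed: cast so the argument matches the format specifier. */
        printk(KERN_DEBUG "acpi-cpufreq: result = %lx\n",
               (unsigned long)result);
    }
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~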
* Pull pvops into release branch (Tony Luck, 2008-07-17)
| * [IA64] pv_ops: move some functions in ivt.S to avoid lack of space. (Isaku Yamahata, 2008-05-28)
    Move interrupt, page_fault, non_syscall, dispatch_unaligned_handler and dispatch_to_fault_handler to avoid running out of instruction space. The change set 4dcc29e1574d88f4465ba865ed82800032f76418 bloated SAVE_MIN_WITH_COVER and SAVE_MIN_WITH_COVER_R19, and therefore the functions which use those macros. In the native case, only dispatch_illegal_op_fault had to be moved; in the paravirtualized case all functions which use the macros need to be moved to avoid running out of space.
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>; Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] pvops: add hooks, pv_time_ops, for steal time accounting. (Isaku Yamahata, 2008-05-27)
    Introduce pv_time_ops, which adds a hook for steal time accounting. In a virtualized environment, cpus are shared by many guests, and steal time is the time used by other guests; it should therefore be accounted.
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>; Signed-off-by: Tony Luck <tony.luck@intel.com>
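    The common shape of these pv_*_ops tables is a struct of function pointers with a native default that a hypervisor port can override at boot; a minimal sketch for the steal-time hook, where the member name and its argument are assumptions:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    /* Sketch of a paravirt time-accounting table. */
    struct pv_time_ops {
        /* Returns non-zero if stolen time was accounted and the next
         * timer interval (*next_itm) was adjusted to compensate. */
        int (*do_steal_accounting)(unsigned long *next_itm);
    };

    /* Native (bare metal) default: no time is ever stolen. */
    static int native_do_steal_accounting(unsigned long *next_itm)
    {
        return 0;
    }

    struct pv_time_ops pv_time_ops = {
        .do_steal_accounting = native_do_steal_accounting,
    };
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~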
| * [IA64] pvops: add hooks, pv_irq_ops, to paravirtualize irq related operations. (Isaku Yamahata, 2008-05-27)
    Introduce pv_irq_ops, which adds hooks to paravirtualize irq-related operations. In a virtualized environment, interruption may be replaced by something virtualization friendly, so the irq-related operations may also need paravirtualization. This patch adds the necessary hooks.
    Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>; Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>; Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] pvops: add hooks, pv_iosapic_ops, to paravirtualize iosapic. (Isaku Yamahata, 2008-05-27)
    Add hooks to paravirtualize the iosapic, which is a real hardware resource. In a virtualized environment it may be replaced by something virtualization friendly. Define pv_iosapic_ops and add the hooks.
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>; Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] pvops: define initialization hooks, pv_init_ops, for paravirtualized environment. (Isaku Yamahata, 2008-05-27)
    Define pv_init_ops hooks, which represent various initialization hooks for a paravirtualized environment, and add the hooks.
    Signed-off-by: Alex Williamson <alex.williamson@hp.com>; Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>; Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] pvops: paravirtualize NR_IRQS (Isaku Yamahata, 2008-05-27)
    Make NR_IRQS overridable by each pv instance. A pv instance may need its own number of irqs, so NR_IRQS should be the maximum of the nr_irqs values the pv instances need.
    Cc: Jes Sorensen <jes@sgi.com>; Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>; Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] pvops: paravirtualize entry.S (Isaku Yamahata, 2008-05-27)
    Paravirtualize ia64_switch_to, ia64_leave_syscall and ia64_leave_kernel. They include sensitive or performance-critical privileged instructions, so they need paravirtualization. To paravirtualize them with a single source compiled multiple times, they are converted into indirect jumps, and each pv instance defines its own implementations.
    Cc: Keith Owens <kaos@ocs.com.au>; Cc: "Dong, Eddie" <eddie.dong@intel.com>; Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>; Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] pvops: paravirtualize ivt.S (Isaku Yamahata, 2008-05-27)
    Paravirtualize ivt.S, which implements the fault handlers in hand-written assembly code. They include sensitive or performance-critical privileged instructions, so they need paravirtualization.
    Cc: Keith Owens <kaos@ocs.com.au>; Cc: tgingold@free.fr; Cc: Akio Takebe <takebe_akio@jp.fujitsu.com>; Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>; Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>; Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] pvops: paravirtualize minstate.h. (Isaku Yamahata, 2008-05-27)
    Paravirtualize minstate.h, which is hand-written assembly code. It includes sensitive or performance-critical privileged instructions, so it is appropriate for paravirtualization.
    Cc: Keith Owens <kaos@ocs.com.au>; Cc: Akio Takebe <takebe_akio@jp.fujitsu.com>; Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>; Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>; Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] pvops: preparation for paravirtualization of hand written assembly code. (Isaku Yamahata, 2008-05-27)
    Preparation for paravirtualization of hand-written assembly code. These files are paravirtualized from a single source and compiled multiple times; add a define so those files know which target (including native) they are being built for.
    Cc: "Dong, Eddie" <eddie.dong@intel.com>; Cc: Keith Owens <kaos@ocs.com.au>; Cc: tgingold@free.fr; Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>; Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] pvops: introduce pv_cpu_ops to paravirtualize privileged instructions. (Isaku Yamahata, 2008-05-27)
    Introduce pv_cpu_ops to paravirtualize the privileged instructions that are defined by ia64 intrinsics. Make them indirect C function calls by introducing a function table, pv_cpu_ops.
    Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>; Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>; Signed-off-by: Tony Luck <tony.luck@intel.com>
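    "Indirect C function calls through a function table" means call sites go through pointers that default to native wrappers; an illustrative sketch with a deliberately tiny table (the real one covers many more intrinsics, and these member names are assumptions):
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    /* Sketch: privileged-register access routed through a paravirt table. */
    struct pv_cpu_ops {
        unsigned long (*getreg)(int regnum);
        void (*setreg)(int regnum, unsigned long val);
    };

    /* Native entries would wrap the raw ia64_getreg()/ia64_setreg()
     * intrinsics; stubs are shown here to keep the sketch self-contained. */
    static unsigned long native_getreg(int regnum) { return 0; }
    static void native_setreg(int regnum, unsigned long val) { }

    struct pv_cpu_ops pv_cpu_ops = {
        .getreg = native_getreg,
        .setreg = native_setreg,
    };

    /* Call sites use the indirection instead of the raw intrinsic: */
    static inline unsigned long paravirt_getreg(int regnum)
    {
        return pv_cpu_ops.getreg(regnum);
    }
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~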
| * [IA64] pvops: add an early setup hook for pv_ops. (Isaku Yamahata, 2008-05-27)
    This patch adds a setup hook in the very early boot sequence, before start_kernel(), to initialize paravirtualization. The hook will be set by each pv loader, or by using a multi entry point.
    Signed-off-by: Qing He <qing.he@intel.com>; Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>; Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] pvops: introduce pv_info which describes some random info. (Isaku Yamahata, 2008-05-27)
    Introduce pv_info, which describes some random information about the underlying execution environment.
    Cc: Jes Sorensen <jes@sgi.com>; Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>; Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] pvops: preparation: move the constant, LOAD_OFFSET, to a header file. (Isaku Yamahata, 2008-05-27)
    Move the LOAD_OFFSET definition from vmlinux.lds.S into system.h. On paravirtualized environments, it is necessary to detect the execution environment. One of the solutions is a multi entry point, which allows a boot loader to start kernel execution from an entry point different from the ELF entry point. The non-standard entry point will be defined as a specialized ELF note which contains the LMA of the entry point symbol. The constant LOAD_OFFSET is necessary to calculate the symbol's LMA, so move the definition into a public header file to make it available to the multi entry point support.
    Cc: "He, Qing" <qing.he@intel.com>; Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>; Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [IA64] pvops: preparation: remove extern in irq_ia64.c (Isaku Yamahata, 2008-05-27)
    Remove the extern declaration of handle_IPI() in irq_ia64.c. Instead, declare it in asm-ia64/smp.h. Later handle_IPI() will be referenced from another file.
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>; Signed-off-by: Tony Luck <tony.luck@intel.com>
* | ACPI: Create "idle=nomwait" bootparam (Zhao Yakui, 2008-07-16)
    "idle=nomwait" disables the use of the MWAIT instruction from both C1 (C1_FFH) and deeper (C2C3_FFH) C-states. When MWAIT is unavailable, the BIOS and OS generally negotiate to use the HALT instruction for C1, and use IO accesses for deeper C-states. This option is useful for power and performance comparisons, and also to work around BIOS bugs where broken MWAIT support is advertised.
    http://bugzilla.kernel.org/show_bug.cgi?id=10807
    http://bugzilla.kernel.org/show_bug.cgi?id=10914
    Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>; Signed-off-by: Li Shaohua <shaohua.li@intel.com>; Signed-off-by: Len Brown <len.brown@intel.com>; Signed-off-by: Andi Kleen <ak@linux.intel.com>
* | ACPI: Create "idle=halt" bootparam (Zhao Yakui, 2008-07-16)
    "idle=halt" limits the idle loop to using the halt instruction. No MWAIT, no IO accesses, no C-states deeper than C1. If something is broken in the idle code, "idle=halt" is a less severe workaround than "idle=poll", which disables all power savings.
    Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>; Signed-off-by: Len Brown <len.brown@intel.com>; Signed-off-by: Andi Kleen <ak@linux.intel.com>
* | Merge branch 'generic-ipi' into generic-ipi-for-linus (Ingo Molnar, 2008-07-15)
    Conflicts:
        arch/powerpc/Kconfig
        arch/s390/kernel/time.c
        arch/x86/kernel/apic_32.c
        arch/x86/kernel/cpu/perfctr-watchdog.c
        arch/x86/kernel/i8259_64.c
        arch/x86/kernel/ldt.c
        arch/x86/kernel/nmi_64.c
        arch/x86/kernel/smpboot.c
        arch/x86/xen/smp.c
        include/asm-x86/hw_irq_32.h
        include/asm-x86/hw_irq_64.h
        include/asm-x86/mach-default/irq_vectors.h
        include/asm-x86/mach-voyager/irq_vectors.h
        include/asm-x86/smp.h
        kernel/Makefile
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | on_each_cpu(): kill unused 'retry' parameter (Jens Axboe, 2008-06-26)
    It's not even passed on to smp_call_function() anymore, since that was removed. So kill it.
    Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>; Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>; Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
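    For callers, the visible change is just one argument fewer; a sketch of a call site before and after (the old four-argument order, retry then wait, is recalled from that kernel era, so treat it as an assumption):
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #include <linux/kernel.h>
    #include <linux/smp.h>

    static void say_hello(void *info)
    {
        printk(KERN_INFO "hello from cpu %d\n", smp_processor_id());
    }

    static void run_everywhere(void)
    {
        /* Old signature: on_each_cpu(say_hello, NULL, 0, 1);  retry=0, wait=1   */
        /* New signature: the unused 'retry' argument is gone, only 'wait' stays. */
        on_each_cpu(say_hello, NULL, 1);
    }
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~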
| * | smp_call_function: get rid of the unused nonatomic/retry argument (Jens Axboe, 2008-06-26)
    It's never used, and the comments refer to nonatomic and retry interchangeably. So get rid of it.
    Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>; Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * | ia64: convert to generic helpers for IPI function calls (Jens Axboe, 2008-06-26)
    This converts ia64 to use the new helpers for smp_call_function() and friends, and adds support for smp_call_function_single().
    Cc: Tony Luck <tony.luck@intel.com>; Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* | | [IA64] export account_system_vtime (Doug Chapman, 2008-06-30)
    The symbol account_system_vtime is used by the kvm module but not exported. This breaks building with CONFIG_VIRT_CPU_ACCOUNTING and CONFIG_KVM=m.
    Signed-off-by: Doug Chapman <doug.chapman@hp.com>; Acked-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>; Signed-off-by: Tony Luck <tony.luck@intel.com>
* | | [IA64] Bugfix for system with 32 cpus (Tony Luck, 2008-06-30)
    On a system where there are no hot-pluggable cpus, "additional_cpus" is still set to -1 at the point where we call per_cpu_scan_finalize(). If we didn't find an SRAT table and so pick the default "32" for the number of cpus, then when we get to:
        high_cpu = min(high_cpu + reserve_cpus, NR_CPUS);
    we will end up initializing for just 31 cpus, and so we will die horribly when bringing up cpu#32. Problem introduced by 2c6e6db41f01b6b4eb98809350827c9678996698, "Minimize per_cpu reservations."
    Acked-by: Robin Holt <holt@sgi.com>; Signed-off-by: Tony Luck <tony.luck@intel.com>
* | [IA64] Eliminate NULL test after alloc_bootmem in iosapic_alloc_rte() (Julia Lawall, 2008-06-24)
    As noted by Akinobu Mita, alloc_bootmem and related functions never return NULL and always return a zeroed region of memory. Thus a NULL test or memset after calls to these functions is unnecessary.
    Signed-off-by: Julia Lawall <julia@diku.dk>; Signed-off-by: Tony Luck <tony.luck@intel.com>
* | [IA64] Fix boot failure on ia64/sn2 (Jes Sorensen, 2008-06-24)
    Call check_sal_cache_flush() after platform_setup(), as check_sal_cache_flush() now relies on being able to call platform vector code. Problem was introduced by 3463a93def55c309f3c0d0a8aaf216be3be42d64, "Update check_sal_cache_flush to use platform_send_ipi()".
    Signed-off-by: Jes Sorensen <jes@sgi.com>; Tested-by: Alex Chiang <achiang@hp.com>; Signed-off-by: Tony Luck <tony.luck@intel.com>
* Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6 (Linus Torvalds, 2008-06-16)
    * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
      [IA64] Fix CONFIG_IA64_SGI_UV build error
      [IA64] Update check_sal_cache_flush to use platform_send_ipi()
      [IA64] perfmon: fix async exit bug
| * | [IA64] Update check_sal_cache_flush to use platform_send_ipi() (Alex Chiang, 2008-06-11)
    check_sal_cache_flush is used to detect broken firmware that drops pending interrupts. The old implementation schedules a timer interrupt for itself in the future by getting the current value of the Interval Timer Counter + 1000 cycles, waits for the interrupt to be pended, calls SAL_CACHE_FLUSH, and finally checks to see if the interrupt is still pending.
    This implementation can cause problems for virtual machine code if the process of scheduling the timer interrupt takes more than 1000 cycles; the virtual machine can end up sleeping for several hundred years while waiting for the ITC to wrap around.
    The fix is to use platform_send_ipi. The processor will still send an interrupt to itself, using the IA64_IPI_DM_INT delivery mode, which causes the IPI to look like an external interrupt. The rest of the SAL_CACHE_FLUSH + checking to see if the interrupt is still pending remains unchanged.
    This fix has been boot tested successfully on:
      - intel tiger2
      - hp rx6600
      - hp rx5670
    The rx5670 has known buggy firmware, where SAL_CACHE_FLUSH drops pending interrupts. A boot test on this machine showed this message on the console:
        SAL: SAL_CACHE_FLUSH drops interrupts; PAL_CACHE_FLUSH will be used instead
    which proves that the self-inflicted IPI approach is viable. And as expected, the other tested platforms correctly did not display the warning.
    Signed-off-by: Alex Chiang <achiang@hp.com>; Signed-off-by: Tony Luck <tony.luck@intel.com>
| * | [IA64] perfmon: fix async exit bug (stephane eranian, 2008-06-11)
    Move the cleanup of the async queue from the flush callback to the close callback. This avoids losing asynchronous overflow notifications when the file descriptor is shared by multiple processes and one terminates.
    Signed-off-by: Stephane Eranian <eranian@gmail.com>; Signed-off-by: Tony Luck <tony.luck@intel.com>
* / ACPI: handle invalid ACPI SLIT table (Fenghua Yu, 2008-06-11)
    This is a SLIT sanity-checking patch. It moves the slit_valid() function to generic ACPI code and does sanity checking for both x86 and ia64. It sets up node_distance with LOCAL_DISTANCE and REMOTE_DISTANCE when hitting an invalid SLIT table on ia64. It also cleans up the unused variable localities in acpi_parse_slit() on x86.
    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>; Signed-off-by: Andrew Morton <akpm@linux-foundation.org>; Signed-off-by: Len Brown <len.brown@intel.com>
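    Such a sanity check typically requires the SLIT diagonal to be LOCAL_DISTANCE and every off-diagonal entry to be strictly larger; a sketch of a validator along those lines (modeled on the x86 version of that time, with the details assumed):
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #include <linux/acpi.h>
    #include <linux/topology.h>   /* LOCAL_DISTANCE */

    static int __init slit_valid(struct acpi_table_slit *slit)
    {
        int d = slit->locality_count;
        int i, j;

        for (i = 0; i < d; i++) {
            for (j = 0; j < d; j++) {
                u8 val = slit->entry[d * i + j];

                if (i == j) {
                    if (val != LOCAL_DISTANCE)
                        return 0;    /* self distance must be local */
                } else if (val <= LOCAL_DISTANCE) {
                    return 0;        /* remote must be farther than local */
                }
            }
        }
        return 1;
    }
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~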
* [IA64] Workaround for RSE issue (Tony Luck, 2008-05-27)
    Problem: An application violating the architectural rules regarding operation dependencies and having specific Register Stack Engine (RSE) state at the time of the violation may result in an illegal operation fault and invalid RSE state. Such faults may initiate a cascade of repeated illegal operation faults within OS interruption handlers. The specific behavior is OS dependent.
    Implication: An application causing an illegal operation fault with specific RSE state may result in a series of illegal operation faults and an eventual OS stack overflow condition.
    Workaround: OS interruption handlers that switch to kernel backing store implement a check for invalid RSE state to avoid the series of illegal operation faults.
    The core of the workaround is the RSE_WORKAROUND code sequence inserted into each invocation of the SAVE_MIN_WITH_COVER and SAVE_MIN_WITH_COVER_R19 macros. This sequence includes hard-coded constants that depend on the number of stacked physical registers being 96. The rest of this patch consists of code to disable this workaround should this not be the case (with the presumption that if a future Itanium processor increases the number of registers, it would also remove the need for this patch).
    Move the start of the RBS up to a mod32 boundary to avoid some corner cases. The dispatch_illegal_op_fault code outgrew the spot it was squatting in when built with this patch and CONFIG_VIRT_CPU_ACCOUNTING=y; move it out to the end of the ivt.
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [PATCH] take init_files to fs/file.c (Al Viro, 2008-05-16)
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* [IA64] Properly unregister legacy interrupts (Prarit Bhargava, 2008-05-14)
    acpi_unregister_gsi() should "undo" what acpi_register_gsi() does. On systems that have legacy interrupts, acpi_unregister_gsi erroneously calls iosapic_unregister_intr(), which is wrong to do and causes a loud warning. acpi_unregister_gsi() should just return in these cases.
    Signed-off-by: Prarit Bhargava <prarit@redhat.com>; Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Remove NULL pointer check for argument never passed as NULL. (Simon Holm Thøgersen, 2008-05-14)
    palinfo_handle_smp is the only (indirect) user of palinfo_smp_call (by way of smp_call_function_single), and palinfo_handle_smp surely never passes NULL as the info parameter.
    Signed-off-by: Simon Holm Thøgersen <odie@cs.aau.dk>; Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] trivial cleanup for perfmon.c (Hidetoshi Seto, 2008-05-14)
    Fix a typo, and coding style cleanups for pfm_handle_work().
    Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>; Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] trivial cleanup for entry.S (Hidetoshi Seto, 2008-05-14)
    This patch does:
    - make the comment next to the resched check more robust
    - move the "re-check" comments next to where the predicate registers are changed
    Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>; Signed-off-by: Tony Luck <tony.luck@intel.com>