path: root/arch/x86/kernel
Commit message / Author / Age
* Merge branch 'cpus4096' into irq/threaded (Thomas Gleixner, 2009-03-23)
|\
    Conflicts:
        arch/parisc/kernel/irq.c
        kernel/irq/handle.c
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * x86: cpumask: update 32-bit APM not to mug current->cpus_allowed (Rusty Russell, 2009-03-18)
    Impact: cleanup, avoid cpumask games
    The APM code wants to run on CPU 0: we create an "on_cpu0" wrapper which
    uses work_on_cpu() if we're not already on cpu 0. This introduces a new
    failure mode: -ENOMEM, so we add an explicit err arg and handle
    Linux-style errnos in apm_err().
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Stephen Rothwell <sfr@canb.auug.org.au>
    LKML-Reference: <200903111631.29787.rusty@rustcorp.com.au>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
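    A minimal sketch of the "on_cpu0" pattern described above; the helper
    name and the get_cpu()/put_cpu() framing are illustrative assumptions,
    not the merged code:

        #include <linux/smp.h>
        #include <linux/workqueue.h>

        static long on_cpu0(long (*fn)(void *), void *arg)
        {
                long ret;

                /* pin briefly just to check which CPU we're on */
                if (get_cpu() == 0) {
                        ret = fn(arg);  /* already on CPU 0: call directly */
                        put_cpu();
                } else {
                        put_cpu();
                        /* run fn on CPU 0; this can now fail with -ENOMEM */
                        ret = work_on_cpu(0, fn, arg);
                }
                return ret;
        }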
| * x86: microcode: cleanup (Ingo Molnar, 2009-03-18)
    Impact: cleanup
    Cc: Rusty Russell <rusty@rustcorp.com.au>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Dmitry Adamushko <dmitry.adamushko@gmail.com>
    Cc: Peter Oruba <peter.oruba@amd.com>
    LKML-Reference: <200903111632.37279.rusty@rustcorp.com.au>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * x86: cpumask: use work_on_cpu in arch/x86/kernel/microcode_core.c (Rusty Russell, 2009-03-18)
    Impact: don't play with current's cpumask
    Straightforward indirection through work_on_cpu(). One change is that
    the error code from microcode_update_cpu() is now actually plumbed back
    to microcode_init_cpu(), so now we printk if it fails on cpu hotplug.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Dmitry Adamushko <dmitry.adamushko@gmail.com>
    Cc: Peter Oruba <peter.oruba@amd.com>
    LKML-Reference: <200903111632.37279.rusty@rustcorp.com.au>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * cpumask: fix CONFIG_CPUMASK_OFFSTACK=y cpu hotunplug crash (Rusty Russell, 2009-03-18)
    Impact: fix cpu offline when CONFIG_MAXSMP=y
    Changeset bc9b83dd1f66402b870301c3c7117b9c1484abb4 "cpumask: convert
    c1e_mask in arch/x86/kernel/process.c to cpumask_var_t" contained a bug:
    c1e_mask is manipulated even if C1E isn't detected (and hence not
    allocated). This is simply fixed by checking for NULL (which gcc
    optimizes out anyway if CONFIG_CPUMASK_OFFSTACK=n, since it knows
    c1e_mask can never be NULL).
    In addition, fix a leak where select_idle_routine re-allocates (and
    re-clears) c1e_mask on every cpu init.
    Reported-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Cc: Mike Travis <travis@sgi.com>
    LKML-Reference: <200903171450.34549.rusty@rustcorp.com.au>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
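    A minimal sketch of the NULL-check fix described above (the surrounding
    function is abbreviated to the relevant guard):

        #include <linux/cpumask.h>

        static cpumask_var_t c1e_mask;

        static void c1e_remove_cpu(int cpu)
        {
                /* c1e_mask is only allocated once C1E is detected; with
                 * CONFIG_CPUMASK_OFFSTACK=n gcc drops this test, since a
                 * fixed-size cpumask_var_t can never be NULL */
                if (c1e_mask != NULL)
                        cpumask_clear_cpu(cpu, c1e_mask);
        }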
| * cpumask: remove x86 cpumask_t uses. (Rusty Russell, 2009-03-13)
    Impact: cleanup
    We are removing cpumask_t in favour of struct cpumask: mainly as a
    marker of what code is now CONFIG_CPUMASK_OFFSTACK-safe.
    The only non-trivial change here is vector_allocation_domain():
    explicitly clear the mask and set the first word, rather than using
    assignment.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: use cpumask_var_t in uv_flush_tlb_others. (Rusty Russell, 2009-03-13)
    Impact: remove cpumask_t, reduce per-cpu size for CONFIG_CPUMASK_OFFSTACK=y
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: remove cpumask_t assignment from vector_allocation_domain() (Rusty Russell, 2009-03-13)
    Impact: cleanup
    It's not legal to do assignments into cpumask_var_t; they will soon be
    of variable length. So explicitly clear the mask and set the first
    word, rather than using assignment.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
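    A sketch of the clear-then-set pattern the message describes, modeled
    on the flat-APIC variant; treat the body as illustrative:

        #include <linux/cpumask.h>

        #define APIC_ALL_CPUS 0xFFu   /* normally from <asm/apicdef.h> */

        static void vector_allocation_domain(int cpu, struct cpumask *retmask)
        {
                /* no struct assignment: cpumasks become variable-length */
                cpumask_clear(retmask);
                cpumask_bits(retmask)[0] = APIC_ALL_CPUS;
        }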
| * cpumask: clean up summit's send_IPI functions (Rusty Russell, 2009-03-13)
    Impact: cleanup, remove cpumask from stack
    summit_send_IPI_allbutself might as well call
    default_send_IPI_mask_allbutself_logical(). Also change cpumask_t to
    struct cpumask and &cpu_online_map to cpu_online_mask while here.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: use new cpumask functions throughout x86 (Rusty Russell, 2009-03-13)
    Impact: cleanup
    1) &cpu_online_map -> cpu_online_mask
    2) first_cpu/next_cpu_nr -> cpumask_first/cpumask_next
    3) cpu_*_map manipulation -> init_cpu_* / set_cpu_*
    A sketch of the old vs. new spellings follows this entry.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
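    An illustrative before/after for the conversions listed above (the
    helper function is hypothetical, purely to show the spellings):

        #include <linux/cpumask.h>

        static int mark_present_and_find_first(int cpu)
        {
                /* 3) cpu_set(cpu, cpu_present_map) becomes: */
                set_cpu_present(cpu, true);

                /* 1)+2) first_cpu(cpu_online_map) becomes: */
                return cpumask_first(cpu_online_mask);
        }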
| * x86: unify cpu_callin_mask/cpu_callout_mask/cpu_initialized_mask/cpu_sibling_setup_mask (Rusty Russell, 2009-03-13)
    Impact: cleanup
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: convert struct cpuinfo_x86's llc_shared_map to cpumask_var_t (Rusty Russell, 2009-03-13)
    Impact: reduce kernel memory usage when CONFIG_CPUMASK_OFFSTACK=y
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: convert node_to_cpumask_map[] to cpumask_var_t (Rusty Russell, 2009-03-13)
    Impact: reduce kernel memory usage when CONFIG_CPUMASK_OFFSTACK=y
    Straightforward conversion: done for 32 and 64 bit kernels.
    node_to_cpumask_map is now a cpumask_var_t array. 64-bit used to be a
    dynamic cpumask_t array, and 32-bit used to be a static cpumask_t array.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * x86: unify 32 and 64-bit node_to_cpumask_map (Rusty Russell, 2009-03-13)
    Impact: cleanup
    We take the 64-bit code and use it on 32-bit as well. The new file is
    called mm/numa.c.
    In a minor cleanup, we use cpu_none_mask instead of declaring a local
    cpu_mask_none.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: convert arch/x86/kernel/cpu/mcheck/mce_64.c (Rusty Russell, 2009-03-13)
    Impact: reduce kernel memory usage when CONFIG_CPUMASK_OFFSTACK=y
    Simple conversion of mce_device_initialized to cpumask_var_t. We don't
    check the alloc_cpumask_var() return since it's boot-time only, and the
    misc_register() in that same function isn't checked.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
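    A sketch of the conversion pattern (the function name and GFP context
    here are assumptions for illustration):

        #include <linux/cpumask.h>
        #include <linux/gfp.h>
        #include <linux/init.h>

        static cpumask_var_t mce_device_initialized;

        static int __init mce_init_device_mask(void)
        {
                /* boot-time only; as the message notes, the return value
                 * was deliberately left unchecked, like the nearby
                 * misc_register() */
                alloc_cpumask_var(&mce_device_initialized, GFP_KERNEL);
                return 0;
        }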
| * cpumask: x86: convert cpu_sibling_map/cpu_core_map to cpumask_var_t (Rusty Russell, 2009-03-13)
    Impact: reduce per-cpu size for CONFIG_CPUMASK_OFFSTACK=y
    In most places it's cleaner to use the cpu_sibling_mask() and
    cpu_core_mask() accessor wrappers, which already exist. I couldn't
    avoid cleaning up the access in oprofile, either.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: convert arch/x86/kernel/nmi.c's backtrace_mask to a cpumask_var_t (Rusty Russell, 2009-03-13)
    Impact: cleanup, reduce memory usage for CONFIG_CPUMASK_OFFSTACK=y
    I *think* every path calls check_nmi_watchdog before using the
    watchdog, so that's the right place for the initialization. If that's
    wrong, we'll get a nice NULL-deref with CONFIG_CPUMASK_OFFSTACK=y, and
    have uncovered another bug.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
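    A sketch of the initialization point described above (illustrative;
    the real check_nmi_watchdog() does much more than this):

        #include <linux/cpumask.h>
        #include <linux/errno.h>
        #include <linux/gfp.h>
        #include <linux/init.h>

        static cpumask_var_t backtrace_mask;

        int __init check_nmi_watchdog(void)
        {
                /* if some path reaches the watchdog without passing here,
                 * CONFIG_CPUMASK_OFFSTACK=y turns that into a NULL deref */
                if (!alloc_cpumask_var(&backtrace_mask, GFP_KERNEL))
                        return -ENOMEM;
                return 0;
        }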
| * cpumask: convert c1e_mask in arch/x86/kernel/process.c to cpumask_var_t. (Rusty Russell, 2009-03-13)
    Impact: reduce kernel size when CONFIG_CPUMASK_OFFSTACK=y
    Simple conversion.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: remove cpu_coregroup_map: x86 (Rusty Russell, 2009-03-13)
    Impact: cleanup
    cpu_coregroup_mask is the New Hotness.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: remove dangerous CPU_MASK_ALL_PTR, &CPU_MASK_ALL: x86 (Rusty Russell, 2009-03-13)
    Impact: cleanup
    (Thanks to Al Viro for reminding me of this, via Ingo)
    CPU_MASK_ALL is the (deprecated) "all bits set" cpumask, defined as so:

        #define CPU_MASK_ALL (cpumask_t) { { ... } }

    Taking the address of such a temporary is questionable at best;
    unfortunately 321a8e9d (cpumask: add CPU_MASK_ALL_PTR macro) added
    CPU_MASK_ALL_PTR:

        #define CPU_MASK_ALL_PTR (&CPU_MASK_ALL)

    which formalizes this practice. One day gcc could bite us over this
    usage (though we seem to have gotten away with it so far).
    So replace everything that used &CPU_MASK_ALL or CPU_MASK_ALL_PTR with
    the modern "cpu_all_mask" (a real const struct cpumask *), and remove
    CPU_MASK_ALL_PTR altogether.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Acked-by: Ingo Molnar <mingo@elte.hu>
    Reported-by: Al Viro <viro@zeniv.linux.org.uk>
    Cc: Mike Travis <travis@sgi.com>
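    An illustrative before/after for the replacement (the wrapper function
    is hypothetical):

        #include <linux/cpumask.h>
        #include <linux/sched.h>

        static void allow_all_cpus(struct task_struct *p)
        {
                /* old: set_cpus_allowed_ptr(p, CPU_MASK_ALL_PTR);
                 *      i.e. the address of a compound-literal temporary */
                set_cpus_allowed_ptr(p, cpu_all_mask);
        }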
| * Merge branch 'x86/core' into cpus4096 (Ingo Molnar, 2009-03-11)
| |\
| | *---. Merge branches 'x86/cleanups', 'x86/kexec', 'x86/mce2' and 'linus' into x86/core (Ingo Molnar, 2009-03-11)
| | |\ \ \
| | | | * | x86, mce: use round_jiffies() instead of round_jiffies_relative() (KOSAKI Motohiro, 2009-03-11)
    Impact: save a _very_ little power
    round_jiffies() rounds up absolute jiffies to a full second;
    round_jiffies_relative() rounds up relative jiffies to a full second.
    "t->expires" is absolute jiffies, so round_jiffies() should be used
    instead of round_jiffies_relative().
    Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
    Cc: Andi Kleen <andi@firstfloor.org>
    Cc: H. Peter Anvin <hpa@linux.intel.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
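    A sketch of the distinction (an illustrative poll timer, not the
    mcheck code itself): an absolute deadline takes round_jiffies(), while
    a relative delay would take round_jiffies_relative():

        #include <linux/jiffies.h>
        #include <linux/timer.h>

        static void restart_poll_timer(struct timer_list *t, unsigned long interval)
        {
                /* absolute deadline, so round the absolute value */
                t->expires = round_jiffies(jiffies + interval);
                add_timer(t);
        }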
| | | * | | x86, kexec: x86_64: add kexec jump support for x86_64 (Huang Ying, 2009-03-10)
    Impact: new major feature
    This patch adds kexec jump support for x86_64. More information about
    kexec jump can be found in the corresponding x86_32 support patch.
    Signed-off-by: Huang Ying <ying.huang@intel.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
| | | * | | x86, kexec: x86_64: add identity map for pages at image->start (Huang Ying, 2009-03-10)
    Impact: fix a corner case that cannot yet occur
    image->start may be outside of 0 ~ max_pfn, for example when jumping
    back to the original kernel from a kexec'ed kernel. This patch adds an
    identity map for the pages at image->start.
    Signed-off-by: Huang Ying <ying.huang@intel.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
| | | * | | x86, kexec: fix kexec x86 coding style (Huang Ying, 2009-03-10)
    Impact: cleanup
    Fix some coding style issues in the x86 kexec code.
    Signed-off-by: Huang Ying <ying.huang@intel.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
| | | | | |
| | | | \ \
| | | *---. \ \ Merge branches 'x86/apic', 'x86/asm', 'x86/fixmap', 'x86/memtest', 'x86/mm', 'x86/urgent', 'linus' and 'core/percpu' into x86/core (Ingo Molnar, 2009-03-10)
| | | |\ \ \ \ \
| | | | | | * | | x86: UV: remove uv_flush_tlb_others() WARN_ON (Cliff Wickman, 2009-03-08)
    In uv_flush_tlb_others() (arch/x86/kernel/tlb_uv.c), the
    "WARN_ON(!in_atomic())" fails if CONFIG_PREEMPT is not enabled, and
    CONFIG_PREEMPT is not enabled by default in the distribution that most
    UV owners will use.
    We could #ifdef CONFIG_PREEMPT the warning, but that is not good form.
    And there seems to be no suitable fix to in_atomic() when
    CONFIG_PREEMPT is not on. As Ingo commented:
    > and we have no proper primitive to test for atomicity. (mainly
    > because we dont know about atomicity on a non-preempt kernel)
    So we drop the WARN_ON.
    Signed-off-by: Cliff Wickman <cpw@sgi.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | | | | * | | x86, percpu: setup reserved percpu area for x86_64 (Tejun Heo, 2009-03-06)
    Impact: fix relocation overflow during module load
    x86_64 uses 32-bit relocations for symbol access, and static percpu
    symbols, whether in the core kernel or in modules, must be within 2GB
    of the percpu segment base, which the dynamic percpu allocator doesn't
    guarantee. This patch makes x86_64 reserve PERCPU_MODULE_RESERVE bytes
    in the first chunk so that module percpu areas are always allocated
    from the first chunk, which is always inside the relocatable range.
    This problem exists for any percpu allocator, but is easily triggered
    when using the embedding allocator because the second chunk is located
    beyond 2GB on it.
    This patch also changes the meaning of PERCPU_DYNAMIC_RESERVE such that
    it only indicates the size of the area to reserve for dynamic
    allocation, as the static and dynamic areas can be separate.
    PERCPU_DYNAMIC_RESERVE is increased by 4k for both 32 and 64 bits, as
    the reserved-area separation eats away some allocatable space and
    having slightly more headroom (currently between 4 and 8k after a
    minimal boot sans module area) makes sense for common-case performance.
    x86_32 can address percpu symbols from anywhere and doesn't need the
    reservation.
    Mike Galbraith first reported the problem and bisected it to the
    embedding percpu allocator commit.
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Reported-by: Mike Galbraith <efault@gmx.de>
    Reported-by: Jaswinder Singh Rajput <jaswinder@kernel.org>
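    A hypothetical userspace-style illustration of the constraint the
    message describes (names are mine, not the patch's):

        #include <stdbool.h>
        #include <stdint.h>

        /* x86_64 module percpu access uses 32-bit relocations, so a
         * symbol must sit within +/- 2GB of the percpu segment base */
        static bool reachable_with_s32(uintptr_t base, uintptr_t sym)
        {
                intptr_t delta = (intptr_t)sym - (intptr_t)base;

                return delta >= INT32_MIN && delta <= INT32_MAX;
        }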
| | | | | | * | | percpu, module: implement reserved allocation and use it for module percpu variables (Tejun Heo, 2009-03-06)
    Impact: add reserved allocation functionality and use it for module
    percpu variables
    This patch implements reserved allocation from the first chunk. When
    setting up the first chunk, the arch can ask to set aside a certain
    number of bytes right after the core static area, available only
    through a separate reserved allocator. This will be used primarily for
    module static percpu variables on architectures with a limited
    relocation range, to ensure that module percpu symbols are inside the
    relocatable range.
    If a reserved area is requested, the first chunk becomes reserved and
    isn't available for regular allocation. If the first chunk also
    includes a piggy-backed dynamic allocation area, a separate chunk
    mapping the same region is created to serve dynamic allocation. The
    first one is called the static first chunk and the second the dynamic
    first chunk. Although they share the page map, their different area-map
    initializations guarantee they serve disjoint areas according to their
    purposes.
    If the arch doesn't set up a reserved area, reserved allocation is
    handled like any other allocation.
    Signed-off-by: Tejun Heo <tj@kernel.org>
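    A sketch of how module loading would use the reserved region after
    this change (shape only; assumed from the commit message, not quoted
    from kernel/module.c):

        #include <linux/percpu.h>

        static void *module_percpu_alloc(size_t size, size_t align)
        {
                /* served from the reserved part of the first chunk, so it
                 * always lands inside the relocatable range */
                return __alloc_reserved_percpu(size, align);
        }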
| | | | | | * | | x86: make embedding percpu allocator return excessive free space (Tejun Heo, 2009-03-06)
    Impact: reduce unnecessary memory usage on certain configurations
    The embedding percpu allocator allocates unit_size *
    num_possible_cpus() bytes consecutively and uses it for the first
    chunk. However, if the static area is small, this can result in
    excessive preallocated free space in the first chunk due to the
    PCPU_MIN_UNIT_SIZE restriction.
    This patch makes the embedding percpu allocator preallocate only what's
    necessary, as described by PERCPU_DYNAMIC_RESERVE, and return the
    leftover to the bootmem allocator.
    Signed-off-by: Tejun Heo <tj@kernel.org>
| | | | | | * | | percpu: use negative for auto for pcpu_setup_first_chunk() arguments (Tejun Heo, 2009-03-06)
    Impact: argument semantic cleanup
    In pcpu_setup_first_chunk(), zero @unit_size and @dyn_size meant
    auto-sizing. That's okay for @unit_size, as 0 doesn't make sense there,
    but a 0 dynamic reserve size is valid. Also, if an arch's @dyn_size is
    calculated from other parameters, it might end up passing in 0
    @dyn_size and malfunction when the size is automatically adjusted.
    This patch makes both @unit_size and @dyn_size ssize_t and uses -1 for
    auto-sizing.
    Signed-off-by: Tejun Heo <tj@kernel.org>
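    A sketch of the calling convention (the helper is hypothetical and
    simplified; the real pcpu_setup_first_chunk() takes several more
    arguments):

        #include <linux/types.h>

        /* -1 now means "size automatically"; 0 is a legitimate value,
         * e.g. a zero-byte dynamic reserve */
        static ssize_t resolve_dyn_size(ssize_t dyn_size, ssize_t auto_default)
        {
                return dyn_size < 0 ? auto_default : dyn_size;
        }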
| | | * | | | | | x86: remove smp_apply_quirks()/smp_checks() (Yinghai Lu, 2009-03-08)
| | | |/ / / / /
    Impact: cleanup and code size reduction on 64-bit
    This code is only applied to Intel Pentium and AMD K7 32-bit cpus.
    Move those checks to intel_init()/amd_init() for 32-bit, so 64-bit will
    not build this code. Also use a cpu_index check to decide whether we
    need to emit the warning.
    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    LKML-Reference: <49B377D2.8030108@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | * | | | | Merge branch 'x86/uv' into x86/core (Ingo Molnar, 2009-03-05)
| | | |\ \ \ \ \
| | | | * | | | | x86: UV, SGI RTC: fix uv_time.c for UP (Dimitri Sivanich, 2009-03-05)
    Fix the non-SMP build of uv_time.c.
    Signed-off-by: Dimitri Sivanich <sivanich@sgi.com>
    LKML-Reference: <20090304220246.GC6288@sgi.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | | * | | | | x86: UV, SGI RTC: add UV RTC clocksource/clockevents (Dimitri Sivanich, 2009-03-04)
    This patch provides a high-resolution clock/timer source using the SGI
    UV system-wide synchronized RTC clock/timer hardware.
    Signed-off-by: Dimitri Sivanich <sivanich@sgi.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: john stultz <johnstul@us.ibm.com>
    LKML-Reference: <20090304185918.GC24419@sgi.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | | * | | | | x86: UV, SGI RTC: add generic system vector (Dimitri Sivanich, 2009-03-04)
| | | | | |/ / /
| | | | |/| | | |
    This patch allocates a system interrupt vector for various
    platform-specific uses.
    Signed-off-by: Dimitri Sivanich <sivanich@sgi.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: john stultz <johnstul@us.ibm.com>
    LKML-Reference: <20090304185605.GA24419@sgi.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | * | | | | Merge branch 'x86/mm' into x86/core (Ingo Molnar, 2009-03-05)
| | | |\ \ \ \ \
| | | | * | | | | x86: pre-initialize boot_cpu_data.x86_phys_bits to avoid system_state tests (Jeremy Fitzhardinge, 2009-03-05)
    Impact: cleanup, micro-optimization
    Pre-initialize boot_cpu_data.x86_phys_bits to a reasonable default to
    remove the use of system_state tests in __virt_addr_valid() and
    __phys_addr().
    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | | * | | | | x86: reserve exact size of mptable (Yinghai Lu, 2009-03-04)
    Impact: save a bit of RAM
    Get the exact size for the reserve_bootmem() call.
    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    LKML-Reference: <49AE4922.605@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | | * | | | | x86: ioremap mptable (Yinghai Lu, 2009-03-04)
    Impact: fix boot with mptable above max_low_mapped
    Try to use early_ioremap() to map the MPC, to make sure it works even
    if it is at the end of RAM.
    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    LKML-Reference: <49AE4901.3090801@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Reported-and-tested-by: Kevin O'Connor <kevin@koconnor.net>
| | | | * | | | | Merge branch 'x86/urgent' into x86/mm (Ingo Molnar, 2009-03-04)
| | | | |\ \ \ \ \
    Conflicts:
        arch/x86/include/asm/fixmap_64.h
    Semantic merge:
        arch/x86/include/asm/fixmap.h
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | * | \ \ \ \ \ Merge branch 'x86/mce2' into x86/core (Ingo Molnar, 2009-03-05)
| | | |\ \ \ \ \ \ \
| | | | | |_|_|_|/ /
| | | | |/| | | | |
| | | | * | | | | | x86, mce: fix build failure in arch/x86/kernel/cpu/mcheck/threshold.c (Ingo Molnar, 2009-03-04)
    Impact: build fix
    The APIC code rewrite in the x86 tree broke the x86/mce branch:

        arch/x86/kernel/cpu/mcheck/threshold.c: In function ‘mce_threshold_interrupt’:
        arch/x86/kernel/cpu/mcheck/threshold.c:24: error: implicit declaration of function ‘ack_APIC_irq’

    Also tidy up the file a bit while at it.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | | * | | | | | Merge branch 'x86/core' into x86/mce2 (H. Peter Anvin, 2009-03-04)
| | | | |\ \ \ \ \ \
| | | | * \ \ \ \ \ \ Merge branch 'x86/core' into x86/mce2 (H. Peter Anvin, 2009-02-24)
| | | | |\ \ \ \ \ \ \
| | | | * | | | | | | | x86, mce, cmci: recheck CMCI banks after APIC has been enabled on CPU #0 (Andi Kleen, 2009-02-24)
    Impact: fix a marginal race condition
    On the first CPU, machine checks are enabled early, before the local
    APIC is enabled. This could in theory lead to some lost CMCI events
    very early during boot, because CMCIs cannot be delivered while the
    LAPIC is disabled. The poller also doesn't recover from this, because
    it doesn't check CMCI banks.
    Add an explicit CMCI banks check after the LAPIC is enabled. This is
    only done for CPU #0; the other CPUs only initialize machine checks
    after the LAPIC is on.
    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
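    A sketch of the recheck described above (the helper names stand in for
    the real mce internals and are assumptions):

        #include <linux/irqflags.h>

        void cmci_recheck_banks(void);   /* hypothetical stand-in */

        /* called on CPU #0 once its local APIC is enabled; secondary CPUs
         * initialize machine checks after their LAPIC is already up */
        static void mce_lapic_enabled(void)
        {
                unsigned long flags;

                local_irq_save(flags);
                cmci_recheck_banks();
                local_irq_restore(flags);
        }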
| | | | * | | | | | | | x86, mce, cmci: disable CMCI on rebooting (Andi Kleen, 2009-02-24)
    Impact: avoid confusing other OSes
    Disable the CMCI vector on reboot to avoid confusing other OSes.
    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
| | | | * | | | | | | | x86, mce, cmci: remove incorrect __cpuinit/__cpuexit annotations (H. Peter Anvin, 2009-02-24)
    Impact: bug fix on UP
    The MCE code is reinitialized from resume, so we can't use
    __cpuinit/__cpuexit for most of the code. Remove those annotations for
    anything downstream of mce_init().
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
| | | | * | | | | | | | x86, mce, cmci: add CMCI support (Andi Kleen, 2009-02-24)
    Impact: major new feature
    Intel CMCI (Corrected Machine Check Interrupt) is a new feature on
    Nehalem CPUs. It allows the CPU to trigger interrupts on corrected
    events, which allows faster reaction to them than the traditional
    polling timer.
    Also use CMCI to discover shared banks. Machine check banks can be
    shared by CPU threads or even cores. Using the CMCI enable bit, it is
    possible to detect that another CPU already saw a specific bank. Use
    this to assign shared banks to only one CPU, to avoid reporting
    duplicated events.
    On CPU hot unplug, bank sharing is rediscovered. This is done using a
    thread that cycles through all the CPUs.
    To avoid races between the poller and CMCI, we only poll banks that are
    not CMCI-capable, and only check CMCI-owned banks on an interrupt. The
    shared-bank ownership information is currently only used for CMCI
    interrupts, not polled banks.
    The sharing discovery code follows the algorithm recommended in the
    IA32 SDM Vol 3a, 14.5.2.1.
    The CMCI interrupt handler just calls the machine check poller to pick
    up the machine check event that caused the interrupt.
    I decided not to implement a separate threshold event like the AMD
    version has, because the threshold is currently always one and adding
    another event didn't seem to add any value.
    Some code was inspired by Yunhong Jiang's Xen implementation, which was
    in turn inspired by an earlier CMCI implementation by me.
    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
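    A sketch of the interrupt path described above (the poller prototype
    and helpers are hypothetical stand-ins, not the real mce API):

        /* hypothetical stand-ins for the real mce machinery */
        struct mce_bank_map;
        void machine_check_poll(int flags, struct mce_bank_map *owned);
        struct mce_bank_map *this_cpu_owned_banks(void);

        /* CMCI vector: no separate threshold event; just run the normal
         * poller over the banks this CPU owns */
        static void intel_cmci_interrupt(void)
        {
                machine_check_poll(0, this_cpu_owned_banks());
        }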