* Merge branch 'x86-microcode-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2012-10-01)

      Pull x86/microcode changes from Ingo Molnar:
       "The biggest changes are to AMD microcode patching: add code for
        caching all microcode patches which belong to the current family on
        which we're running, in the kernel. We look up the patch needed for
        each core from the cache at patch-application time instead of
        holding a single patch per-system"

      * 'x86-microcode-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        x86, microcode, AMD: Fix use after free in free_cache()
        x86, microcode, AMD: Rewrite patch application procedure
        x86, microcode, AMD: Add a small, per-family patches cache
        x86, microcode, AMD: Add reverse equiv table search
        x86, microcode: Add a refresh firmware flag to ->request_microcode_fw
        x86, microcode, AMD: Read CPUID(1).EAX on the correct cpu
        x86, microcode, AMD: Check before applying a patch
        x86, microcode, AMD: Remove useless get_ucode_data wrapper
        x86, microcode: Straighten out Kconfig text
        x86, microcode: Cleanup cpu hotplug notifier callback
        x86, microcode: Drop uci->mc check on resume path
        x86, microcode: Save an indentation level in reload_for_cpu
| * Merge tag 'microcode_fix_3.7' of git://git.kernel.org/pub/scm/linux/kernel/git/bp/bp into x86/microcode (Ingo Molnar, 2012-09-20)

      Pull microcode changes from Borislav Petkov:
       "A small list usage correction from Dan Carpenter."

      Signed-off-by: Ingo Molnar <mingo@kernel.org>
| | * x86, microcode, AMD: Fix use after free in free_cache() (Dan Carpenter, 2012-09-19)

      list_for_each_entry_reverse() dereferences the iterator, but we
      already freed it. I don't see a reason that this has to be done in
      reverse order, so change it to use list_for_each_entry_safe().

      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
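      A minimal sketch of the bug pattern and the fix; the struct layout
      here is an assumption standing in for the driver's real patch-cache
      structures:

        #include <linux/list.h>
        #include <linux/slab.h>

        struct ucode_patch {                    /* assumed layout */
                struct list_head plist;
                void *data;
        };

        static LIST_HEAD(pcache);

        static void free_cache(void)
        {
                struct ucode_patch *p, *tmp;

                /*
                 * The buggy list_for_each_entry_reverse() read
                 * p->plist.prev after p had been kfree()d; the _safe
                 * variant caches the next node before the body runs.
                 */
                list_for_each_entry_safe(p, tmp, &pcache, plist) {
                        list_del(&p->plist);    /* unlink, then free */
                        kfree(p->data);
                        kfree(p);
                }
        }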
| * x86, microcode, AMD: Rewrite patch application procedure (Borislav Petkov, 2012-08-22)

      Limit userspace access to the BSP only, where we load the container,
      verify the patches in it and put them in the patch cache. Then, at
      application time, we look up the correct patch in the cache and use
      it.

      When we need to reload the userspace container, we do that over the
      reload interface:

        echo 1 > /sys/devices/system/cpu/microcode/reload

      which reloads a (possibly newer) container from userspace and then
      applies the newest patches from there.

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1344361461-10076-13-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * x86, microcode, AMD: Add a small, per-family patches cache (Borislav Petkov, 2012-08-22)

      This is a trivial cache which collects all ucode patches for the
      current family of CPUs on the system. If a newer patch appears due to
      the container file being updated in userspace, we replace our cached
      version with the new one.

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1344361461-10076-12-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
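      A sketch of the replace-if-newer rule described above, reusing the
      ucode_patch sketch from the free_cache() entry and assuming
      equiv_cpu/patch_id fields that identify and version a patch:

        static void update_cache(struct ucode_patch *new_patch)
        {
                struct ucode_patch *p;

                list_for_each_entry(p, &pcache, plist) {
                        if (p->equiv_cpu != new_patch->equiv_cpu)
                                continue;
                        if (p->patch_id >= new_patch->patch_id) {
                                /* cached patch is at least as new */
                                kfree(new_patch->data);
                                kfree(new_patch);
                                return;
                        }
                        /* newer patch: swap it in, free the old one */
                        list_replace(&p->plist, &new_patch->plist);
                        kfree(p->data);
                        kfree(p);
                        return;
                }
                /* nothing cached yet for this equivalence ID */
                list_add_tail(&new_patch->plist, &pcache);
        }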
| * x86, microcode, AMD: Add reverse equiv table search (Borislav Petkov, 2012-08-22)

      We search the equivalence table using the CPUID(1) signature of the
      CPU in order to get the equivalence ID of the patch which we need to
      apply. Add a function which does the reverse - it will be needed in
      later patches.

      While at it, pull the other equiv table function up in the file so
      that it can be used by other functionality without forward
      declarations.

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1344361461-10076-11-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
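      A sketch of the two lookup directions; the entry layout follows the
      AMD container format, but the field and helper names here should be
      taken as illustrative:

        #include <linux/types.h>

        struct equiv_cpu_entry {
                u32 installed_cpu;      /* CPUID(1).EAX signature */
                u32 fixed_errata_mask;
                u32 fixed_errata_compare;
                u16 equiv_cpu;          /* equivalence ID */
                u16 res;
        };

        /* forward: signature -> equivalence ID (table is 0-terminated) */
        static u16 find_equiv_id(struct equiv_cpu_entry *tbl, u32 sig)
        {
                int i;

                for (i = 0; tbl[i].installed_cpu; i++)
                        if (sig == tbl[i].installed_cpu)
                                return tbl[i].equiv_cpu;
                return 0;
        }

        /* reverse: equivalence ID -> signature, added by this patch */
        static u32 find_sig_by_equiv_id(struct equiv_cpu_entry *tbl,
                                        u16 equiv_cpu)
        {
                int i;

                for (i = 0; tbl[i].equiv_cpu; i++)
                        if (equiv_cpu == tbl[i].equiv_cpu)
                                return tbl[i].installed_cpu;
                return 0;
        }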
| * x86, microcode: Add a refresh firmware flag to ->request_microcode_fw (Borislav Petkov, 2012-08-22)

      This is done in preparation for teaching the ucode driver to either
      load a new ucode patches container from userspace or use an already
      cached version.

      No functionality change in this patch.

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1344361461-10076-10-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * x86, microcode, AMD: Read CPUID(1).EAX on the correct cpu (Borislav Petkov, 2012-08-22)

      Read the CPUID(1).EAX leaf on the correct cpu and use it to search
      the equivalence table for a matching microcode patch.

      No functionality change.

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1344361461-10076-9-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * x86, microcode, AMD: Check before applying a patch (Borislav Petkov, 2012-08-22)

      Make sure we're actually applying a microcode patch to a core which
      really needs it.

      This brings only a very minor slowdown on F10 (0.032218828 sec vs
      0.056010626 sec with this patch) and a small speedup on F15
      (0.487089449 sec vs 0.180551162 sec, from perf output).

      Also, fix up comments while at it.

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1344361461-10076-8-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
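      The check amounts to comparing patch levels before the (slow)
      update; a hedged sketch, with the surrounding driver code elided:

        #include <asm/msr.h>

        static bool patch_needed(u32 new_patch_level)
        {
                u32 rev, dummy;

                /* current patch level of this core */
                rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);

                return rev < new_patch_level;
        }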
| * x86, microcode, AMD: Remove useless get_ucode_data wrapper (Borislav Petkov, 2012-08-22)

      get_ucode_data() was a trivial memcpy wrapper. Remove it so as not
      to obfuscate code unnecessarily with no obvious gain.

      No functional change.

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1344361461-10076-7-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * x86, microcode: Straighten out Kconfig text (Borislav Petkov, 2012-08-22)

      Update and clarify the Kconfig help text along with the menu names.

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1344361461-10076-6-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * x86, microcode: Cleanup cpu hotplug notifier callback (Borislav Petkov, 2012-08-22)

      Mask out the CPU_TASKS_FROZEN bit so that all _FROZEN cases can be
      dropped. Also, add some more comments as to why CPU_ONLINE falls
      through to CPU_DOWN_FAILED (no break), and for the CPU_DEAD case.
      Realign debug printks better.

      Idea blatantly stolen from a tglx patch:
      http://marc.info/?l=linux-kernel&m=134267779513862

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1344361461-10076-5-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
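      A sketch of the notifier pattern this describes; the case bodies
      are elided and the helper wiring is illustrative:

        static int mc_cpu_callback(struct notifier_block *nb,
                                   unsigned long action, void *hcpu)
        {
                unsigned int cpu = (unsigned long)hcpu;

                switch (action & ~CPU_TASKS_FROZEN) {
                case CPU_ONLINE:
                        /* CPU came up: microcode was (re)applied ... */
                        /* fall through */
                case CPU_DOWN_FAILED:
                        /* ... make sure the sysfs group for @cpu exists */
                        break;
                case CPU_DEAD:
                        /* CPU is gone: remove its sysfs files */
                        break;
                }
                return NOTIFY_OK;
        }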
| * x86, microcode: Drop uci->mc check on resume path (Borislav Petkov, 2012-08-22)

      Remove the uci->mc check on the cpu resume path because the
      low-level drivers do that anyway. More importantly, though, this
      fixes a contrived and obscure but still important case. Imagine the
      following:

      * boot machine, no new microcode in /lib/firmware
      * a subset of the CPUs is offlined
      * in the meantime, the user puts a new, fresh microcode container
        into /lib/firmware and reloads it by doing

          echo 1 > /sys/devices/system/cpu/microcode/reload

      * offlined cores come back online and they don't get the newer
        microcode applied due to this check.

      Later patches take care of the issue on AMD.

      While at it, clean up the code around it.

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1344361461-10076-4-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * x86, microcode: Save an indentation level in reload_for_cpu (Borislav Petkov, 2012-08-22)

      Invert the uci->valid check so that the later block can be aligned
      on the first indentation level of the function.

      No functional change.

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1344361461-10076-3-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
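      The shape of the change, sketched (driver details elided;
      ucode_cpu_info is the driver's existing per-cpu state array):

        static int reload_for_cpu(unsigned int cpu)
        {
                struct ucode_cpu_info *uci = ucode_cpu_info + cpu;

                /* inverted check: bail out early ... */
                if (!uci->valid)
                        return 0;

                /*
                 * ... so the firmware request/apply logic below now
                 * sits at the first indentation level.
                 */
                return 0;
        }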
* | Merge branch 'x86-platform-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2012-10-01)

      Pull x86/platform changes from Ingo Molnar:
       "This cleans up some Xen-induced pagetable init code uglies, by
        generalizing new platform callbacks and state: x86_init.paging.*"

      * 'x86-platform-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        x86: Document x86_init.paging.pagetable_init()
        x86: xen: Cleanup and remove x86_init.paging.pagetable_setup_done()
        x86: Move paging_init() call to x86_init.paging.pagetable_init()
        x86: Rename pagetable_setup_start() to pagetable_init()
        x86: Remove base argument from x86_init.paging.pagetable_setup_start
| * | x86: Document x86_init.paging.pagetable_init() (Attilio Rao, 2012-09-12)

      Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
      Acked-by: <konrad.wilk@oracle.com>
      Cc: <Ian.Campbell@citrix.com>
      Cc: <Stefano.Stabellini@eu.citrix.com>
      Cc: <xen-devel@lists.xensource.com>
      Link: http://lkml.kernel.org/r/1345580561-8506-6-git-send-email-attilio.rao@citrix.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * | x86: xen: Cleanup and remove x86_init.paging.pagetable_setup_done() (Attilio Rao, 2012-09-12)

      At this stage x86_init.paging.pagetable_setup_done is only used in
      the Xen case. Move its content into the
      x86_init.paging.pagetable_init() setup function and remove the now
      unused remaining x86_init.paging.pagetable_setup_done()
      infrastructure.

      Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
      Acked-by: <konrad.wilk@oracle.com>
      Cc: <Ian.Campbell@citrix.com>
      Cc: <Stefano.Stabellini@eu.citrix.com>
      Cc: <xen-devel@lists.xensource.com>
      Link: http://lkml.kernel.org/r/1345580561-8506-5-git-send-email-attilio.rao@citrix.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * | x86: Move paging_init() call to x86_init.paging.pagetable_init() (Attilio Rao, 2012-09-12)

      Move the paging_init() call to the platform-specific
      pagetable_init() function, so we can get rid of the extra
      pagetable_setup_done() function pointer.

      Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
      Acked-by: <konrad.wilk@oracle.com>
      Cc: <Ian.Campbell@citrix.com>
      Cc: <Stefano.Stabellini@eu.citrix.com>
      Cc: <xen-devel@lists.xensource.com>
      Link: http://lkml.kernel.org/r/1345580561-8506-4-git-send-email-attilio.rao@citrix.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * | x86: Rename pagetable_setup_start() to pagetable_init() (Attilio Rao, 2012-09-12)

      In preparation for unifying the pagetable_setup_start() and
      pagetable_setup_done() setup functions, rename appropriately all the
      infrastructure related to pagetable_setup_start().

      Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
      Acked-by: <konrad.wilk@oracle.com>
      Cc: <Ian.Campbell@citrix.com>
      Cc: <Stefano.Stabellini@eu.citrix.com>
      Cc: <xen-devel@lists.xensource.com>
      Link: http://lkml.kernel.org/r/1345580561-8506-3-git-send-email-attilio.rao@citrix.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * | x86: Remove base argument from x86_init.paging.pagetable_setup_start (Attilio Rao, 2012-09-12)

      We either use swapper_pg_dir or the argument is unused. Preparatory
      patch to simplify platform pagetable setup further.

      Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
      Acked-by: <konrad.wilk@oracle.com>
      Cc: <Ian.Campbell@citrix.com>
      Cc: <Stefano.Stabellini@eu.citrix.com>
      Cc: <xen-devel@lists.xensource.com>
      Link: http://lkml.kernel.org/r/1345580561-8506-2-git-send-email-attilio.rao@citrix.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* | | Merge branch 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2012-10-01)

      Pull x86/mm changes from Ingo Molnar:
       "The biggest change is new TLB partial flushing code for AMD CPUs.
        (The v3.6 kernel had the Intel CPU side code, see commits
        e0ba94f14f74..effee4b9b3b.) There are also various other
        refinements around the TLB flush code"

      * 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        x86: Distinguish TLB shootdown interrupts from other function call interrupts
        x86/mm: Fix range check in tlbflush debugfs interface
        x86, cpu: Preset default tlb_flushall_shift on AMD
        x86, cpu: Add AMD TLB size detection
        x86, cpu: Push TLB detection CPUID check down
        x86, cpu: Fixup tlb_flushall_shift formatting
| * | | x86: Distinguish TLB shootdown interrupts from other function call interrupts (Tomoki Sekiyama, 2012-09-28)

      Since TLB shootdown requests to other CPU cores now use function
      call interrupts, the TLB shootdowns entry in /proc/interrupts is
      always shown as 0. This behavior change was introduced by commit
      52aec3308db8 ("x86/tlb: replace INVALIDATE_TLB_VECTOR by
      CALL_FUNCTION_VECTOR").

      This patch makes the TLB shootdowns entry in /proc/interrupts count
      TLB shootdowns separately from the other function call interrupts
      again.

      Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama.qu@hitachi.com>
      Link: http://lkml.kernel.org/r/20120926021128.22212.20440.stgit@hpxw
      Acked-by: Alex Shi <alex.shi@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | | x86/mm: Fix range check in tlbflush debugfs interface (Jan Beulich, 2012-09-07)

      Since the shift count settable there is used for shifting values of
      type "unsigned long", its value must not match or exceed
      BITS_PER_LONG (otherwise the shift operations are undefined).
      Similarly, the value must not be negative (but -1 must be permitted,
      as that's the value used to distinguish the case of fine-grained
      flushing being disabled).

      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Cc: Alex Shi <alex.shi@intel.com>
      Link: http://lkml.kernel.org/r/5049B65C020000780009990C@nat28.tlf.novell.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
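      The corrected bounds check, sketched:

        #include <linux/bitops.h>       /* BITS_PER_LONG */
        #include <linux/errno.h>

        static int tlb_flushall_shift = -1;     /* -1: always flush all */

        static int tlbflush_set(int shift)
        {
                /*
                 * The value shifts an "unsigned long", so it must stay
                 * below BITS_PER_LONG; -1 is the only valid negative
                 * value (fine-grained flushing disabled).
                 */
                if (shift < -1 || shift >= BITS_PER_LONG)
                        return -EINVAL;

                tlb_flushall_shift = shift;
                return 0;
        }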
| * | | x86, cpu: Preset default tlb_flushall_shift on AMD (Borislav Petkov, 2012-08-06)

      Run the mprotect.c microbenchmark on all our families >= K8 and
      preset the flushall shift variable accordingly.

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1344272439-29080-5-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
| * | | x86, cpu: Add AMD TLB size detection (Borislav Petkov, 2012-08-06)

      Read the I- and D-TLB entry counts from CPUID on AMD. Handle all the
      different family-specific cases.

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1344272439-29080-4-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
| * | | x86, cpu: Push TLB detection CPUID check down (Borislav Petkov, 2012-08-06)

      Push the max CPUID leaf check into the ->detect_tlb function and
      remove the general test case from the generic path.

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1344272439-29080-3-git-send-email-bp@amd64.org
      Acked-by: Alex Shi <alex.shi@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
| * | | x86, cpu: Fixup tlb_flushall_shift formatting (Borislav Petkov, 2012-08-06)

      The TLB characteristics appeared like this in dmesg:

        [    0.065817] Last level iTLB entries: 4KB 512, 2MB 1024, 4MB 512
        [    0.065817] Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 512
        [    0.065817] tlb_flushall_shift is 0xffffffff

      where tlb_flushall_shift is actually -1 but dumped as a hex number.
      However, the Kconfig option CONFIG_DEBUG_TLBFLUSH and the rest of
      the code treat this as a signed decimal and state: "If you set it to
      -1, the code flushes the whole TLB unconditionally."

      So, fix its formatting in accordance with the other references to it.

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1344272439-29080-2-git-send-email-bp@amd64.org
      Acked-by: Alex Shi <alex.shi@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
* | | | Merge branch 'x86-mce-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2012-10-01)

      Pull x86/MCE update from Ingo Molnar:
       "Various MCE robustness enhancements.

        One of the changes adds CMCI (Corrected Machine Check Interrupt)
        poll mode on Intel Nehalem+ CPUs, a mode that is entered
        automatically when the rate of messages is too high, and exited
        once the storm is over. An MCE events storm will roughly look like
        this:

        [ 5342.740616] mce: [Hardware Error]: Machine check events logged
        [ 5342.746501] mce: [Hardware Error]: Machine check events logged
        [ 5342.757971] CMCI storm detected: switching to poll mode
        [ 5372.674957] CMCI storm subsided: switching to interrupt mode

        This should make such events more survivable"

      * 'x86-mce-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        x86/mce: Provide boot argument to honour bios-set CMCI threshold
        x86, MCE: Remove unused defines
        x86, mce: Enable MCA support by default
        x86/mce: Add CMCI poll mode
        x86/mce: Make cmci_discover() quiet
        x86: mce: Remove the frozen cases in the hotplug code
        x86: mce: Split timer init
        x86: mce: Serialize mce injection
        x86: mce: Disable preemption when calling raise_local()
| * Merge tag 'please-pull-naveen' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras into x86/mce (Ingo Molnar, 2012-09-29)

      Pull MCE updates from Tony Luck:
       "Option to let the bios set per-bank CMCI thresholds so they can
        filter noisy error sources at a fine-grained level based on
        platform-specific knowledge."

      Signed-off-by: Ingo Molnar <mingo@kernel.org>
| | * | | | x86/mce: Provide boot argument to honour bios-set CMCI threshold (Naveen N. Rao, 2012-09-27)

      The ACPI spec doesn't provide a way for the bios to pass down
      recommended thresholds to the OS on a per-bank basis. This patch
      adds a new boot option which, if passed, tells Linux to use the CMCI
      thresholds set by the bios. As a fail-safe, we initialize the
      threshold to 1 if some banks have not been initialized by the bios,
      and warn the user.

      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
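      A sketch of the threshold selection, assuming the 3.7-era MSR
      helpers and macros; the warn-once path and the discovery loop
      around it are elided:

        #include <asm/mce.h>
        #include <asm/msr.h>

        #define CMCI_THRESHOLD  1       /* fail-safe default */

        static int mce_bios_cmci_threshold;     /* set by the boot option */

        static u64 pick_cmci_threshold(int bank)
        {
                u64 val, bios_thresh;

                rdmsrl(MSR_IA32_MCx_CTL2(bank), val);
                bios_thresh = val & MCI_CTL2_CMCI_THRESHOLD_MASK;

                if (mce_bios_cmci_threshold && bios_thresh)
                        return bios_thresh;     /* honour the bios */

                /* bank left at 0 by the bios: warn, fall back to 1 */
                return CMCI_THRESHOLD;
        }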
| * | | | Merge tag 'ras_queue_for_3.7' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras into x86/mce (Ingo Molnar, 2012-09-19)

      Pull MCE changes from Borislav Petkov:
       "Patch 1/2 enables MCA by default, because I still see bug reports
        where CONFIG_X86_MCE is disabled, and that is a bad idea, so
        turning it on by default makes sense to me. The second one is a
        trivial cleanup."

      Signed-off-by: Ingo Molnar <mingo@kernel.org>
| | * | | | x86, MCE: Remove unused defines (Borislav Petkov, 2012-09-17)

      Those were sitting there unused since the dawn of time, drop them.

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
| | * | | | x86, mce: Enable MCA support by default (Borislav Petkov, 2012-09-17)

      MCA is the basic support for hardware error logging and reporting,
      and it is majorly unwise to run without it, so enable machine check
      software support by default on x86.

      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Acked-by: Tony Luck <tony.luck@intel.com>
| * | | | | Merge tag 'v3.6-rc6' into x86/mce (Ingo Molnar, 2012-09-19)

      Merge Linux v3.6-rc6, to refresh this tree.

      Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | | x86/mce: Add CMCI poll mode (Chen Gong, 2012-08-09)

      On Intel systems corrected machine check interrupts (CMCI) may be
      sent to multiple logical processors; possibly to all processors on
      the affected socket (SDM Volume 3B "15.5.1 CMCI Local APIC
      Interface"). This means that a persistent error (such as a stuck bit
      in ECC memory) may cause a storm of interrupts that greatly hinders
      or prevents forward progress (probably on many processors).

      To solve this we keep track of the rate at which each processor sees
      CMCI. If we exceed a threshold, we disable CMCI delivery and switch
      to polling the machine check banks. If the storm subsides (none of
      the affected processors see any more errors for a complete poll
      interval) we re-enable CMCI.

      [Tony: Added console messages when a storm begins/ends and increased
       the storm threshold from 5 to 15 so we have a few more logged
       entries before we disable interrupts and start dropping reports]

      Signed-off-by: Chen Gong <gong.chen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Chen Gong <gong.chen@linux.intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
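      A sketch of the per-cpu rate check run from the CMCI handler; the
      bookkeeping that re-arms CMCI once every affected CPU's storm
      subsides is elided, and the names follow this series as I read it:

        #include <linux/jiffies.h>
        #include <linux/percpu.h>

        #define CMCI_STORM_INTERVAL     (1 * HZ)
        #define CMCI_STORM_THRESHOLD    15      /* per Tony's note */
        #define CMCI_POLL_INTERVAL      (30 * HZ)

        static DEFINE_PER_CPU(unsigned long, cmci_time_stamp);
        static DEFINE_PER_CPU(unsigned int, cmci_storm_cnt);

        static bool cmci_storm_detect(void)
        {
                unsigned int cnt = __this_cpu_read(cmci_storm_cnt);
                unsigned long ts = __this_cpu_read(cmci_time_stamp);
                unsigned long now = jiffies;

                /* count CMCIs inside a one-second window */
                if (time_before_eq(now, ts + CMCI_STORM_INTERVAL)) {
                        cnt++;
                } else {
                        cnt = 1;
                        __this_cpu_write(cmci_time_stamp, now);
                }
                __this_cpu_write(cmci_storm_cnt, cnt);

                if (cnt <= CMCI_STORM_THRESHOLD)
                        return false;

                cmci_clear();                       /* stop CMCI here */
                mce_timer_kick(CMCI_POLL_INTERVAL); /* poll instead */
                pr_notice("CMCI storm detected: switching to poll mode\n");
                return true;
        }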
| * | | | | x86/mce: Make cmci_discover() quiet (Tony Luck, 2012-08-09)

      cmci_discover() works out which machine check banks support CMCI,
      and which of those are shared by multiple logical processors. It
      uses this information to ensure that exactly one cpu is designated
      the owner of each bank so that, when interrupts are broadcast to
      multiple cpus, only one of them will look in a shared bank to log
      the error and clear the bank.

      At boot time cmci_discover() performs this task silently. But during
      certain cpu hotplug operations it prints out a set of summary lines
      like this:

        CPU 35 MCA banks CMCI:0 CMCI:1 CMCI:3 CMCI:5 CMCI:6 CMCI:7 CMCI:8 CMCI:9 CMCI:10 CMCI:11
        CPU 1 MCA banks CMCI:0 CMCI:1 CMCI:3
        CPU 39 MCA banks CMCI:0 CMCI:1 CMCI:3
        CPU 38 MCA banks CMCI:0 CMCI:1 CMCI:3
        CPU 32 MCA banks CMCI:0 CMCI:1 CMCI:3
        CPU 37 MCA banks CMCI:0 CMCI:1 CMCI:3
        CPU 36 MCA banks CMCI:0 CMCI:1 CMCI:3
        CPU 34 MCA banks CMCI:0 CMCI:1 CMCI:3

      The value of these messages seems very low. A user might
      painstakingly cross-check against the data sheet for a processor to
      ensure that all CMCI-supported banks are correctly reported, but
      this seems improbable. If users really wanted to do this, we should
      print the information at boot time too.

      Remove the messages.

      Signed-off-by: Tony Luck <tony.luck@intel.com>
| * | | | | x86: mce: Remove the frozen cases in the hotplug code (Thomas Gleixner, 2012-08-03)

      No point in having double cases if we can simply mask the FROZEN bit
      out.

      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Chen Gong <gong.chen@linux.intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
| * | | | | x86: mce: Split timer init (Thomas Gleixner, 2012-08-03)

      Split the timer init function into an init and a start part, so the
      start part can replace the open-coded version in CPU_DOWN_FAILED.

      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Chen Gong <gong.chen@linux.intel.com>
      Acked-by: Borislav Petkov <borislav.petkov@amd.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
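      The start half, sketched, so CPU_DOWN_FAILED can call it instead of
      open coding the timer arming; check_interval and mce_ignore_ce are
      the mce core's existing knobs, and the init half (setting up the
      timer once per cpu) is elided:

        static void mce_start_timer(unsigned int cpu, struct timer_list *t)
        {
                unsigned long iv = check_interval * HZ;

                if (mce_ignore_ce || !iv)
                        return;

                t->expires = round_jiffies(jiffies + iv);
                add_timer_on(t, cpu);
        }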
| * | | | | x86: mce: Serialize mce injection (Thomas Gleixner, 2012-08-03)

      raise_mce() fiddles with global state, but lacks any kind of
      serialization. Add a mutex around the raise_mce() call, so
      concurrent writers do not stomp on each other's toes.

      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Chen Gong <gong.chen@linux.intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
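      The serialization, sketched; raise_mce() and struct mce are the
      existing injection pieces, and the wrapper name is illustrative:

        #include <linux/mutex.h>

        static DEFINE_MUTEX(mce_inject_mutex);

        static void inject_mce_serialized(struct mce *m)
        {
                mutex_lock(&mce_inject_mutex);
                raise_mce(m);           /* global state, now guarded */
                mutex_unlock(&mce_inject_mutex);
        }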
| * | | | | x86: mce: Disable preemption when calling raise_local() (Thomas Gleixner, 2012-08-03)

      raise_mce() has a code path which does not disable preemption when
      raise_local() is called. The per-cpu variable access in
      raise_local() depends on preemption being disabled to be functional.
      So that code path was either never tested, or never tested with
      CONFIG_DEBUG_PREEMPT enabled.

      Add the missing preempt_disable/enable() pair around the call.

      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Chen Gong <gong.chen@linux.intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
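      The missing pair, sketched around the existing raise_local():

        #include <linux/preempt.h>

        static void raise_local_on_this_cpu(void)
        {
                /*
                 * raise_local() touches per-cpu MCE state, which is
                 * only well defined with preemption off.
                 */
                preempt_disable();
                raise_local();
                preempt_enable();
        }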
* | | | | Merge branch 'x86-fpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2012-10-01)

      Pull x86/fpu update from Ingo Molnar:
       "The biggest change is the addition of the non-lazy (eager) FPU
        saving support model and enabling it on CPUs with optimized
        xsaveopt/xrstor FPU state saving instructions.

        There are also various Sparse fixes"

      Fix up trivial add-add conflict in arch/x86/kernel/traps.c

      * 'x86-fpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        x86, kvm: fix kvm's usage of kernel_fpu_begin/end()
        x86, fpu: remove cpu_has_xmm check in the fx_finit()
        x86, fpu: make eagerfpu= boot param tri-state
        x86, fpu: enable eagerfpu by default for xsaveopt
        x86, fpu: decouple non-lazy/eager fpu restore from xsave
        x86, fpu: use non-lazy fpu restore for processors supporting xsave
        lguest, x86: handle guest TS bit for lazy/non-lazy fpu host models
        x86, fpu: always use kernel_fpu_begin/end() for in-kernel FPU usage
        x86, kvm: use kernel_fpu_begin/end() in kvm_load/put_guest_fpu()
        x86, fpu: remove unnecessary user_fpu_end() in save_xstate_sig()
        x86, fpu: drop_fpu() before restoring new state from sigframe
        x86, fpu: Unify signal handling code paths for x86 and x86_64 kernels
        x86, fpu: Consolidate inline asm routines for saving/restoring fpu state
        x86, signal: Cleanup ifdefs and is_ia32, is_x32
| * | | | | x86, kvm: fix kvm's usage of kernel_fpu_begin/end() (Suresh Siddha, 2012-09-21)

      Preemption is disabled between kernel_fpu_begin/end(), and as such
      it is not a good idea to use these routines in
      kvm_load/put_guest_fpu(), which can be very far apart.

      The kvm_load/put_guest_fpu() routines are already called with
      preemption disabled, and KVM already uses the preempt notifier to
      save the guest fpu state using kvm_put_guest_fpu(). So introduce
      __kernel_fpu_begin/end() routines which don't touch preemption, and
      use them instead of kernel_fpu_begin/end() for KVM's use model of
      saving/restoring guest FPU state.

      Also with this change (and with the eagerFPU model), fix the host
      cr0.TS vm-exit state in the case of VMX. In the eagerFPU case, host
      cr0.TS is always clear, so there is no need to worry about it. For
      the traditional lazyFPU restore case, change the host-state cr0.TS
      bit during vm-exit to be always clear; the cr0.TS bit is then set in
      __vmx_load_host_state() when the FPU (guest FPU or the host task's
      FPU) state is not active. This ensures that the host/guest FPU state
      is properly saved and restored during context switch, and that
      interrupts (using irq_fpu_usable()) do not stomp on the active FPU
      state.

      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1348164109.26695.338.camel@sbsiddha-desk.sc.intel.com
      Cc: Avi Kivity <avi@redhat.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | | | | x86, fpu: remove cpu_has_xmm check in the fx_finit() (Suresh Siddha, 2012-09-18)

      No CPU with FXSAVE but no XMM/MXCSR (Pentium II from Intel,
      Crusoe/TM-3xxx/5xxx from Transmeta, and presumably some of the K6
      generation from AMD) ever looked at the mxcsr field during
      fxrstor/fxsave. So remove the cpu_has_xmm check in fx_finit().

      Reported-by: Al Viro <viro@zeniv.linux.org.uk>
      Acked-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1347300665-6209-6-git-send-email-suresh.b.siddha@intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | | | | x86, fpu: make eagerfpu= boot param tri-state (Suresh Siddha, 2012-09-18)

      Add "eagerfpu=auto", which selects the default scheme for enabling
      eagerfpu and which can override compiled-in boot parameters like
      "eagerfpu=on/off" (which force-enable/disable eagerfpu).

      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1347300665-6209-5-git-send-email-suresh.b.siddha@intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
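      A tri-state boot parameter boils down to a three-way string match
      in a __setup() handler; a sketch of the pattern:

        #include <linux/init.h>
        #include <linux/string.h>

        enum { ENABLE, DISABLE, AUTO };
        static int eagerfpu = AUTO;     /* default: decide from CPUID */

        static int __init eager_fpu_setup(char *s)
        {
                if (!strcmp(s, "on"))
                        eagerfpu = ENABLE;
                else if (!strcmp(s, "off"))
                        eagerfpu = DISABLE;
                else if (!strcmp(s, "auto"))
                        eagerfpu = AUTO;
                return 1;
        }
        __setup("eagerfpu=", eager_fpu_setup);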
| * | | | | x86, fpu: enable eagerfpu by default for xsaveopt (Suresh Siddha, 2012-09-18)

      xsaveopt/xrstor support optimized state save/restore by tracking the
      INIT state and MODIFIED state during context switch. Enable eagerfpu
      by default for processors supporting xsaveopt. It can be disabled by
      passing the "eagerfpu=off" boot parameter.

      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1347300665-6209-3-git-send-email-suresh.b.siddha@intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | | | | x86, fpu: decouple non-lazy/eager fpu restore from xsave (Suresh Siddha, 2012-09-18)

      Decouple the non-lazy/eager fpu restore policy from the existence of
      the xsave feature. Introduce a synthetic CPUID flag to represent the
      eagerfpu policy. The "eagerfpu=on" boot parameter will enable the
      policy.

      Requested-by: H. Peter Anvin <hpa@zytor.com>
      Requested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1347300665-6209-2-git-send-email-suresh.b.siddha@intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | | | | x86, fpu: use non-lazy fpu restore for processors supporting xsave (Suresh Siddha, 2012-09-18)

      The fundamental model of the current Linux kernel is to lazily init
      and restore FPU state instead of restoring the task state during
      context switch. This changes that fundamental lazy model to the
      non-lazy model for processors supporting the xsave feature. Reasons
      driving this model change are:

      i.   Newer processors support optimized state save/restore using
           xsaveopt and xrstor by tracking the INIT state and MODIFIED
           state during context switch. This is faster than modifying the
           cr0.TS bit, which has serializing semantics.

      ii.  Newer glibc versions use SSE for some of the optimized
           copy/clear routines. With certain workloads (like boot,
           kernel compilation etc), an application completes its work
           within the first 5 task switches, thus taking up to 5 #DNA
           traps with the kernel not getting a chance to apply the above
           mentioned pre-load heuristic.

      iii. Some xstate features (like AMD's LWP feature) don't honor the
           cr0.TS bit and thus will not work correctly in the presence of
           lazy restore. Non-lazy state restore is needed for enabling
           such features.

      Some data on a two-socket SNB system:

      * Saved 20K DNA exceptions during boot.
      * Saved 50K DNA exceptions during a kernel-compilation workload.
      * Improved throughput of the AVX-based checksumming function inside
        the kernel by ~15%, as xsave/xrstor is faster than the
        serializing clts/stts pair.

      Also, kernel_fpu_begin/end() now relies on the patched alternative
      instructions, so move check_fpu(), which uses
      kernel_fpu_begin/end(), after alternative_instructions().

      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1345842782-24175-7-git-send-email-suresh.b.siddha@intel.com

      Merge 32-bit boot fix from,
      Link: http://lkml.kernel.org/r/1347300665-6209-4-git-send-email-suresh.b.siddha@intel.com

      Cc: Jim Kukunas <james.t.kukunas@linux.intel.com>
      Cc: NeilBrown <neilb@suse.de>
      Cc: Avi Kivity <avi@redhat.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | | | | lguest, x86: handle guest TS bit for lazy/non-lazy fpu host models (Suresh Siddha, 2012-09-18)

      Instead of using unlazy_fpu(), check user_has_fpu() and set/clear
      the host TS bits, so that lguest works fine with both the
      lazy/non-lazy FPU host models, with minimal changes.

      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1345842782-24175-6-git-send-email-suresh.b.siddha@intel.com
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | | | | x86, fpu: always use kernel_fpu_begin/end() for in-kernel FPU usage (Suresh Siddha, 2012-09-18)

      Use kernel_fpu_begin/end() instead of unconditionally accessing cr0
      and saving/restoring just the few used xmm/ymm registers.

      This has some advantages:

      * If the task's FPU state is already active, then
        kernel_fpu_begin() will just save the user state, avoiding the
        read/write of cr0. In general, cr0 accesses are much slower.

      * Manual save/restore of xmm/ymm registers will affect the
        'modified' and the 'init' optimizations brought in by the
        xsaveopt/xrstor infrastructure.

      * Forward compatibility with future vector register extensions will
        be a problem if the xmm/ymm registers are manually saved and
        restored (corrupting the extended state of those vector
        registers).

      With this patch, there was no significant difference in the xor
      throughput using AVX, measured during boot.

      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1345842782-24175-5-git-send-email-suresh.b.siddha@intel.com
      Cc: Jim Kukunas <james.t.kukunas@linux.intel.com>
      Cc: NeilBrown <neilb@suse.de>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
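      The usage pattern the series converts in-kernel SIMD users to,
      sketched; the 3.6-era declarations live in asm/i387.h, and the xor
      loop itself is elided:

        #include <asm/i387.h>

        static void xor_blocks_simd(unsigned long bytes,
                                    unsigned long *p1, unsigned long *p2)
        {
                kernel_fpu_begin();     /* saves the user FPU state and
                                           disables preemption */

                /* ... SSE/AVX xor loop over p1/p2 would run here ... */

                kernel_fpu_end();       /* re-enables preemption */
        }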
| * | | | | x86, kvm: use kernel_fpu_begin/end() in kvm_load/put_guest_fpu() (Suresh Siddha, 2012-09-18)

      kvm's guest fpu save/restore should be wrapped around
      kernel_fpu_begin/end(). This will avoid, for example, taking a DNA
      fault in kvm_load_guest_fpu() when it tries to load the fpu
      immediately after doing unlazy_fpu() on the host side.

      More importantly, this will prevent the host process fpu from being
      corrupted.

      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1345842782-24175-4-git-send-email-suresh.b.siddha@intel.com
      Cc: Avi Kivity <avi@redhat.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>