path: root/arch/x86/kernel/cpu/amd.c
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial (Linus Torvalds, 2013-11-15)

    Pull trivial tree updates from Jiri Kosina:
     "Usual earth-shaking, news-breaking, rocket science pile from trivial.git"

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (23 commits)
      doc: usb: Fix typo in Documentation/usb/gadget_configs.txt
      doc: add missing files to timers/00-INDEX
      timekeeping: Fix some trivial typos in comments
      mm: Fix some trivial typos in comments
      irq: Fix some trivial typos in comments
      NUMA: fix typos in Kconfig help text
      mm: update 00-INDEX
      doc: Documentation/DMA-attributes.txt fix typo
      DRM: comment: `halve' -> `half'
      Docs: Kconfig: `devlopers' -> `developers'
      doc: typo on word accounting in kprobes.c in mutliple architectures
      treewide: fix "usefull" typo
      treewide: fix "distingush" typo
      mm/Kconfig: Grammar s/an/a/
      kexec: Typo s/the/then/
      Documentation/kvm: Update cpuid documentation for steal time and pv eoi
      treewide: Fix common typo in "identify"
      __page_to_pfn: Fix typo in comment
      Correct some typos for word frequency
      clk: fixed-factor: Fix a trivial typo
      ...
* treewide: fix "distingush" typo (Michael Opdenacker, 2013-10-14)

    Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* x86/cpu: Track legacy CPU model data only on 32-bit kernels (Jan Beulich, 2013-10-26)

    struct cpu_dev's c_models is only ever set inside CONFIG_X86_32 conditionals (or code that's being built for 32-bit only), so there's no use in reserving the (empty) space for the model names in a 64-bit kernel.

    Similarly, c_size_cache is only used in the #else of a CONFIG_X86_64 conditional, so reserving space for (and in one case even initializing) that field is pointless for 64-bit kernels too.

    While moving both fields to the end of the structure, I also noticed that:

      - the c_models array size was one too small, potentially causing table_lookup_model() to return garbage on Intel CPUs (intel.c's instance was lacking the sentinel with family being zero), so the patch bumps that by one,

      - c_models' vendor sub-field was unused (and anyway redundant with the base structure's c_x86_vendor field), so the patch deletes it.

    Also rename the legacy fields so that their legacy nature stands out and comment their declarations.

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Link: http://lkml.kernel.org/r/5265036802000078000FC4DB@nat28.tlf.novell.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
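    A rough sketch of the shape this gives struct cpu_dev (abridged and from memory; field names other than the legacy_* ones may not match the tree exactly):

        struct cpu_dev {
            const char      *c_vendor;
            const char      *c_ident[2];
            void            (*c_early_init)(struct cpuinfo_x86 *);
            void            (*c_init)(struct cpuinfo_x86 *);
            int             c_x86_vendor;
        #ifdef CONFIG_X86_32
            /* Legacy, 32-bit-only data kept at the end: the model-name table
             * (now one entry larger, with a zero-family sentinel) and the old
             * cache-size callback. */
            struct legacy_cpu_model_info {
                int         family;
                const char  *model_names[16];
            }               legacy_models[5];
            unsigned int    (*legacy_cache_size)(struct cpuinfo_x86 *, unsigned int);
        #endif
        };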
* Merge branch 'x86-asmlinkage-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2013-09-04)

    Pull x86/asmlinkage changes from Ingo Molnar:
     "As a preparation for Andi Kleen's LTO patchset (link time optimizations using GCC's -flto which build time optimization has steadily increased in quality over the past few years and might eventually be usable for the kernel too) this tree includes a handful of preparatory patches that make function calling convention annotations consistent again:

       - Mark every function without arguments (or 64bit only) that is used by assembly code with asmlinkage()

       - Mark every function with parameters or variables that is used by assembly code as __visible.

      For the vanilla kernel this has documentation, consistency and debuggability advantages, for the time being"

    * 'x86-asmlinkage-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      x86/asmlinkage: Fix warning in xen asmlinkage change
      x86, asmlinkage, vdso: Mark vdso variables __visible
      x86, asmlinkage, power: Make various symbols used by the suspend asm code visible
      x86, asmlinkage: Make dump_stack visible
      x86, asmlinkage: Make 64bit checksum functions visible
      x86, asmlinkage, paravirt: Add __visible/asmlinkage to xen paravirt ops
      x86, asmlinkage, apm: Make APM data structure used from assembler visible
      x86, asmlinkage: Make syscall tables visible
      x86, asmlinkage: Make several variables used from assembler/linker script visible
      x86, asmlinkage: Make kprobes code visible and fix assembler code
      x86, asmlinkage: Make various syscalls asmlinkage
      x86, asmlinkage: Make 32bit/64bit __switch_to visible
      x86, asmlinkage: Make _*_start_kernel visible
      x86, asmlinkage: Make all interrupt handlers asmlinkage / __visible
      x86, asmlinkage: Change dotraplinkage into __visible on 32bit
      x86: Fix sys_call_table type in asm/syscall.h
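    In code, the two annotations this series spreads around look like this (the symbol names below are invented for illustration only):

        /* Called only from assembly: keep a C-visible symbol with the asm-linkage ABI. */
        asmlinkage void early_trap_stub(void);

        /* Referenced from assembly or a linker script: must survive LTO's
         * dead-symbol elimination, hence __visible. */
        __visible unsigned long stub_saved_sp;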
* x86, asmlinkage: Make several variables used from assembler/linker script visible (Andi Kleen, 2013-08-06)

    Plus one function, load_gs_index().

    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Link: http://lkml.kernel.org/r/1375740170-7446-10-git-send-email-andi@firstfloor.org
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
* x86, microcode, AMD: Make cpu_has_amd_erratum() use the correct struct cpuinfo_x86 (Torsten Kaiser, 2013-08-12)

    cpu_has_amd_erratum() is buggy, because it uses the per-cpu cpu_info before it is filled by smp_store_boot_cpu_info() / smp_store_cpu_info().

    If early microcode loading is enabled its collect_cpu_info_amd_early() will fill ->x86 and so the fallback to boot_cpu_data is not used. But ->x86_vendor was not filled and is still X86_VENDOR_INTEL resulting in no errata fixes getting applied and my system hangs on boot.

    Using cpu_info in cpu_has_amd_erratum() is wrong anyway: its only caller init_amd() will have a struct cpuinfo_x86 as parameter and the set_cpu_bug() that is controlled by cpu_has_amd_erratum() also only uses that struct.

    So pass the struct cpuinfo_x86 from init_amd() to cpu_has_amd_erratum() and the broken fallback can be dropped.

    [ Boris: Drop WARN_ON() since we're called only from init_amd() ]

    Signed-off-by: Torsten Kaiser <just.for.lkml@googlemail.com>
    Signed-off-by: Borislav Petkov <bp@suse.de>
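    The resulting calling convention, sketched (the erratum table and the bug-flag name are approximations of the code of that era, not a verbatim copy):

        static const int amd_erratum_400[] = { 0 };   /* real table built via AMD_OSVW_ERRATUM() */

        static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum);

        static void init_amd(struct cpuinfo_x86 *c)
        {
            /* c->x86 and c->x86_vendor are already valid here, so no fallback
             * to per-cpu cpu_info or boot_cpu_data is needed (or able to go
             * stale) anymore. */
            if (cpu_has_amd_erratum(c, amd_erratum_400))
                set_cpu_bug(c, X86_BUG_AMD_APIC_C1E);
        }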
* x86: delete __cpuinit usage from all x86 files (Paul Gortmaker, 2013-07-14)

    The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes.

    After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h.

    Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) and are flagged as __cpuinit -- so if we remove the __cpuinit from arch specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless.

    This removes all the arch/x86 uses of the __cpuinit macros from all C files. x86 only had the one __CPUINIT used in assembly files, and it wasn't paired off with a .previous or a __FINIT, so we can delete it directly w/o any corresponding additional change there.

    [1] https://lkml.org/lkml/2013/5/20/589

    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: x86@kernel.org
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: H. Peter Anvin <hpa@linux.intel.com>
    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
* mm/x86: prepare for removing num_physpages and simplify mem_init() (Jiang Liu, 2013-07-03)

    Prepare for removing num_physpages and simplify mem_init().

    Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: Andreas Herrmann <andreas.herrmann3@amd.com>
    Cc: Tang Chen <tangchen@cn.fujitsu.com>
    Cc: Wen Congyang <wency@cn.fujitsu.com>
    Cc: Jianguo Wu <wujianguo@huawei.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* x86, CPU, AMD: Drop useless label (Borislav Petkov, 2013-04-16)

    All we want to do is return from this function so stop jumping around like a flea for no good reason.

    Signed-off-by: Borislav Petkov <bp@suse.de>
    Link: http://lkml.kernel.org/r/1365436666-9837-5-git-send-email-bp@alien8.de
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
* x86, AMD: Correct {rd,wr}msr_amd_safe warnings (Borislav Petkov, 2013-04-16)

    The idea with those routines is to slowly phase them out and not call them on anything else besides K8. They even have a check for that which, when called too early, fails.

    Let me explain: It gets the cpuinfo_x86 pointer from the per_cpu array and when this happens for cpu0, before its boot_cpu_data has been copied back to the per_cpu array in smp_store_boot_cpu_info(), we get an empty struct and thus the check fails.

    Use boot_cpu_data directly instead.

    Signed-off-by: Borislav Petkov <bp@suse.de>
    Link: http://lkml.kernel.org/r/1365436666-9837-4-git-send-email-bp@alien8.de
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
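    Roughly what the check becomes (simplified sketch; the real GPR-register MSR access is stubbed out here):

        static int rdmsrl_amd_safe(unsigned msr, unsigned long long *p)
        {
            /* boot_cpu_data.x86 is valid even before the boot CPU's data has
             * been copied into the per-cpu cpu_info array, so the K8-only
             * warning now fires reliably. */
            WARN_ONCE(boot_cpu_data.x86 != 0xf,
                      "%s should only be used on K8!\n", __func__);

            return rdmsrl_safe(msr, p);   /* stand-in for the real GPR-based variant */
        }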
* x86, cpu: Convert AMD Erratum 400 (Borislav Petkov, 2013-04-02)

    Convert AMD erratum 400 to the bug infrastructure. Then, retract all exports for modules since they're not needed now and make the AMD erratum checking machinery local to amd.c.

    Use forward declarations to avoid shuffling too much code around needlessly.

    Signed-off-by: Borislav Petkov <bp@suse.de>
    Link: http://lkml.kernel.org/r/1363788448-31325-7-git-send-email-bp@alien8.de
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
* x86, cpu: Convert AMD Erratum 383 (Borislav Petkov, 2013-04-02)

    Convert the AMD erratum 383 testing code to the bug infrastructure. This allows keeping the AMD-specific erratum testing machinery private to amd.c and not export symbols to modules needlessly.

    Signed-off-by: Borislav Petkov <bp@suse.de>
    Link: http://lkml.kernel.org/r/1363788448-31325-6-git-send-email-bp@alien8.de
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
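    The bug-infrastructure pattern both conversions (errata 383 and 400) move to, as a sketch (the X86_BUG_* name is given from memory and worth double-checking against the tree):

        /* Detection, private to amd.c: */
        if (cpu_has_amd_erratum(c, amd_erratum_383))
            set_cpu_bug(c, X86_BUG_AMD_TLB_MMATCH);

        /* Consumers simply test the bug bit; no exported erratum-checking
         * helper is required anymore: */
        if (cpu_has_bug(&boot_cpu_data, X86_BUG_AMD_TLB_MMATCH))
            pr_info("applying the erratum 383 workaround\n");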
* Merge tag 'modules-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux (Linus Torvalds, 2013-02-25)

    Pull module update from Rusty Russell:
     "The sweeping change is to make add_taint() explicitly indicate whether to disable lockdep, but it's a mechanical change."

    * tag 'modules-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux:
      MODSIGN: Add option to not sign modules during modules_install
      MODSIGN: Add -s <signature> option to sign-file
      MODSIGN: Specify the hash algorithm on sign-file command line
      MODSIGN: Simplify Makefile with a Kconfig helper
      module: clean up load_module a little more.
      modpost: Ignore ARC specific non-alloc sections
      module: constify within_module_*
      taint: add explicit flag to show whether lock dep is still OK.
      module: printk message when module signature fail taints kernel.
* taint: add explicit flag to show whether lock dep is still OK. (Rusty Russell, 2013-01-21)

    Fix up all callers as they were before, with one change: an unsigned module taints the kernel, but doesn't turn off lockdep.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
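    The new calling convention at a couple of representative call sites (both enum lockdep_ok values are the real ones introduced by this change):

        /* Forced/unsigned module load: taint, but keep trusting lockdep state. */
        add_taint(TAINT_FORCED_MODULE, LOCKDEP_STILL_OK);

        /* Machine check: taint and declare lockdep state unreliable. */
        add_taint(TAINT_MACHINE_CHECK, LOCKDEP_NOW_UNRELIABLE);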
* Merge branch 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2013-02-21)

    Pull x86 mm changes from Peter Anvin:
     "This is a huge set of several partly interrelated (and concurrently developed) changes, which is why the branch history is messier than one would like.

      The *really* big items are two humonguous patchsets mostly developed by Yinghai Lu at my request, which completely revamps the way we create initial page tables. In particular, rather than estimating how much memory we will need for page tables and then build them into that memory -- a calculation that has shown to be incredibly fragile -- we now build them (on 64 bits) with the aid of a "pseudo-linear mode" -- a #PF handler which creates temporary page tables on demand.

      This has several advantages:

      1. It makes it much easier to support things that need access to data very early (a followon patchset uses this to load microcode way early in the kernel startup).

      2. It allows the kernel and all the kernel data objects to be invoked from above the 4 GB limit. This allows kdump to work on very large systems.

      3. It greatly reduces the difference between Xen and native (Xen's equivalent of the #PF handler are the temporary page tables created by the domain builder), eliminating a bunch of fragile hooks.

      The patch series also gets us a bit closer to W^X.

      Additional work in this pull is the 64-bit get_user() work which you were also involved with, and a bunch of cleanups/speedups to __phys_addr()/__pa()."

    * 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (105 commits)
      x86, mm: Move reserving low memory later in initialization
      x86, doc: Clarify the use of asm("%edx") in uaccess.h
      x86, mm: Redesign get_user with a __builtin_choose_expr hack
      x86: Be consistent with data size in getuser.S
      x86, mm: Use a bitfield to mask nuisance get_user() warnings
      x86/kvm: Fix compile warning in kvm_register_steal_time()
      x86-32: Add support for 64bit get_user()
      x86-32, mm: Remove reference to alloc_remap()
      x86-32, mm: Remove reference to resume_map_numa_kva()
      x86-32, mm: Rip out x86_32 NUMA remapping code
      x86/numa: Use __pa_nodebug() instead
      x86: Don't panic if can not alloc buffer for swiotlb
      mm: Add alloc_bootmem_low_pages_nopanic()
      x86, 64bit, mm: hibernate use generic mapping_init
      x86, 64bit, mm: Mark data/bss/brk to nx
      x86: Merge early kernel reserve for 32bit and 64bit
      x86: Add Crash kernel low reservation
      x86, kdump: Remove crashkernel range find limit for 64bit
      memblock: Add memblock_mem_size()
      x86, boot: Not need to check setup_header version for setup_data
      ...
* Merge remote-tracking branch 'origin/x86/boot' into x86/mm2 (H. Peter Anvin, 2013-01-29)

    Coming patches to x86/mm2 require the changes and advanced baseline in x86/boot.

    Resolved Conflicts:
    	arch/x86/kernel/setup.c
    	mm/nobootmem.c

    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
* x86, mm: kill numa_64.h (Yinghai Lu, 2012-11-17)

    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    Link: http://lkml.kernel.org/r/1353123563-3103-44-git-send-email-yinghai@kernel.org
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
* x86, mm: Fixup code testing if a pfn is direct mapped (Jacob Shin, 2012-11-17)

    Update code that previously assumed pfns [ 0 - max_low_pfn_mapped ) and [ 4GB - max_pfn_mapped ) were always direct mapped, to now look up pfn_mapped ranges instead.

    -v2: change applying sequence to keep git bisecting working.
         so add dummy pfn_range_is_mapped(). - Yinghai Lu

    Signed-off-by: Jacob Shin <jacob.shin@amd.com>
    Link: http://lkml.kernel.org/r/1353123563-3103-12-git-send-email-yinghai@kernel.org
    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
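    What the lookup turns into, sketched as a hypothetical helper:

        static bool low_4g_is_direct_mapped(void)
        {
            /* Instead of assuming [0, max_low_pfn_mapped) and [4G, max_pfn_mapped)
             * are always direct mapped, consult the recorded pfn_mapped ranges. */
            return pfn_range_is_mapped(0, 1UL << (32 - PAGE_SHIFT));
        }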
* Merge branch 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2013-02-21)

    Pull x86 cpu updates from Peter Anvin:
     "This is a corrected attempt at the x86/cpu branch, this time with the fixes in that makes it not break on KVM (current or past), or any other virtualizer which traps on this configuration.

      Again, the biggest change here is enabling the WC+ memory type on AMD processors, if the BIOS doesn't."

    * 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      x86, kvm: Add MSR_AMD64_BU_CFG2 to the list of ignored MSRs
      x86, cpu, amd: Fix WC+ workaround for older virtual hosts
      x86, AMD: Enable WC+ memory type on family 10 processors
      x86, AMD: Clean up init_amd()
      x86/process: Change %8s to %s for pr_warn() in release_thread()
      x86/cpu/hotplug: Remove CONFIG_EXPERIMENTAL dependency
* x86, cpu, amd: Fix WC+ workaround for older virtual hosts (Borislav Petkov, 2013-02-19)

    The WC+ workaround for F10h introduces a new MSR and kvm host #GPs on accesses to unknown MSRs if paravirt is not compiled in. Use the exception-handling MSR accessors so as not to break 3.8 and later guests booting on older hosts.

    Remove a redundant family check while at it.

    Cc: Gleb Natapov <gleb@redhat.com>
    Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Link: http://lkml.kernel.org/r/1361298793-31834-1-git-send-email-bp@alien8.de
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
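    The guest-safe shape of the quirk, sketched (the helper is a hypothetical split-out and the exact bit in MSR_AMD64_BU_CFG2 is quoted from memory, so treat it as an assumption):

        static void amd_enable_wc_plus(struct cpuinfo_x86 *c)
        {
            u64 val;

            if (c->x86 != 0x10)
                return;

            /* The _safe accessors swallow the #GP an older KVM host raises for
             * this unknown MSR instead of crashing the 3.8+ guest. */
            if (!rdmsrl_safe(MSR_AMD64_BU_CFG2, &val)) {
                val &= ~(1ULL << 24);   /* assumed WC+ disable bit */
                wrmsrl_safe(MSR_AMD64_BU_CFG2, val);
            }
        }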
* x86, AMD: Enable WC+ memory type on family 10 processors (Boris Ostrovsky, 2013-01-31)

    In some cases BIOS may not enable WC+ memory type on family 10 processors, instead converting what would be WC+ memory to CD type. On guests using nested pages this could result in performance degradation. This patch enables WC+.

    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
    Link: http://lkml.kernel.org/r/1359495169-23278-1-git-send-email-ostr@amd64.org
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
* x86, AMD: Clean up init_amd() (Boris Ostrovsky, 2013-01-31)

    Clean up multiple declarations of variable used for rd/wrmsr.

    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
    Link: http://lkml.kernel.org/r/1359495136-23244-1-git-send-email-ostr@amd64.org
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
* x86, AMD, NB: Use u16 for northbridge IDs in amd_get_nb_id (Daniel J Blueman, 2013-01-10)

    Change amd_get_nb_id to return u16 to support >255 memory controllers, and related consistency fixes.

    Signed-off-by: Daniel J Blueman <daniel@numascale-asia.com>
    Link: http://lkml.kernel.org/r/1353997932-8475-2-git-send-email-daniel@numascale-asia.com
    Signed-off-by: Borislav Petkov <bp@alien8.de>
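    For callers the change is just a wider return type; a trivial usage sketch:

        u16 nb_id = amd_get_nb_id(smp_processor_id());   /* previously capped at 255 IDs */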
* Merge branch 'x86-nuke386-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2012-12-11)

    Pull "Nuke 386-DX/SX support" from Ingo Molnar:
     "This tree removes ancient-386-CPUs support and thus zaps quite a bit of complexity:

        24 files changed, 56 insertions(+), 425 deletions(-)

      ... which complexity has plagued us with extra work whenever we wanted to change SMP primitives, for years.

      Unfortunately there's a nostalgic cost: your old original 386 DX33 system from early 1991 won't be able to boot modern Linux kernels anymore. Sniff."

    I'm not sentimental. Good riddance.

    * 'x86-nuke386-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      x86, 386 removal: Document Nx586 as a 386 and thus unsupported
      x86, cleanups: Simplify sync_core() in the case of no CPUID
      x86, 386 removal: Remove CONFIG_X86_POPAD_OK
      x86, 386 removal: Remove CONFIG_X86_WP_WORKS_OK
      x86, 386 removal: Remove CONFIG_INVLPG
      x86, 386 removal: Remove CONFIG_BSWAP
      x86, 386 removal: Remove CONFIG_XADD
      x86, 386 removal: Remove CONFIG_CMPXCHG
      x86, 386 removal: Remove CONFIG_M386 from Kconfig
* x86, 386 removal: Remove CONFIG_INVLPG (H. Peter Anvin, 2012-11-29)

    All 486+ CPUs support INVLPG, so remove the fallback 386 support code.

    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    Link: http://lkml.kernel.org/r/1354132230-21854-6-git-send-email-hpa@linux.intel.com
* Merge branch 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2012-12-11)

    Pull x86 topology discovery improvements from Ingo Molnar:
     "These changes improve topology discovery on AMD CPUs.

      Right now this feeds information displayed in /sys/devices/system/cpu/cpuX/cache/indexY/* - but in the future we could use this to set up a better scheduling topology."

    * 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      x86, cacheinfo: Base cache sharing info on CPUID 0x8000001d on AMD
      x86, cacheinfo: Make use of CPUID 0x8000001d for cache information on AMD
      x86, cacheinfo: Determine number of cache leafs using CPUID 0x8000001d on AMD
      x86: Add cpu_has_topoext
* x86, cacheinfo: Determine number of cache leafs using CPUID 0x8000001d on AMD (Andreas Herrmann, 2012-11-13)

    CPUID 0x8000001d works quite similarly to Intel's CPUID function 4. Use it to determine the number of cache leafs.

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Link: http://lkml.kernel.org/r/20121019085933.GE26718@alberich
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
* x86: Add cpu_has_topoext (Andreas Herrmann, 2012-11-13)

    Introduce cpu_has_topoext to check for AMD's CPUID topology extensions support. It indicates support for CPUID Fn8000_001D_EAX_x[N:0]-CPUID Fn8000_001E_EDX

    See AMD's CPUID Specification, Publication # 25481 (as of Rev. 2.34 September 2010)

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Link: http://lkml.kernel.org/r/20121019085813.GD26718@alberich
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
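    How the new check (a wrapper around the X86_FEATURE_TOPOEXT bit) and CPUID 0x8000001d are typically consumed, sketched as a hypothetical helper with simplified loop-termination logic:

        static int amd_count_cache_leaves(void)
        {
            unsigned int eax, ebx, ecx, edx;
            int i = 0;

            if (!boot_cpu_has(X86_FEATURE_TOPOEXT))
                return 0;

            /* Each subleaf describes one cache; a cache type of 0 in EAX[4:0]
             * terminates the list. */
            do {
                cpuid_count(0x8000001d, i, &eax, &ebx, &ecx, &edx);
            } while ((eax & 0x1f) && ++i);

            return i;
        }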
* x86, amd: Disable way access filter on Piledriver CPUs (Andre Przywara, 2012-10-31)

    The Way Access Filter in recent AMD CPUs may hurt the performance of some workloads, caused by aliasing issues in the L1 cache. This patch disables it on the affected CPUs.

    The issue is similar to that one of last year: http://lkml.indiana.edu/hypermail/linux/kernel/1107.3/00041.html
    This new patch does not replace the old one, we just need another quirk for newer CPUs.

    The performance penalty without the patch depends on the circumstances, but is a bit less than the last year's 3%. The workloads affected would be those that access code from the same physical page under different virtual addresses, so different processes using the same libraries with ASLR or multiple instances of PIE-binaries. The code needs to be accessed simultaneously from both cores of the same compute unit.

    More details can be found here: http://developer.amd.com/Assets/SharedL1InstructionCacheonAMD15hCPU.pdf

    CPUs affected are anything with the core known as Piledriver. That includes the new parts of the AMD A-Series (aka Trinity) and the just released new CPUs of the FX-Series (aka Vishera). The model numbering is a bit odd here: FX CPUs have model 2, A-Series has model 10h, with possible extensions to 1Fh. Hence the range of model ids.

    Signed-off-by: Andre Przywara <osp@andrep.de>
    Link: http://lkml.kernel.org/r/1351700450-9277-1-git-send-email-osp@andrep.de
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
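    The quirk boils down to a conditional MSR update along these lines (the helper is a hypothetical split-out, and the MSR number and mask are reproduced from memory, so treat them as assumptions and check the tree):

        static void amd_disable_way_access_filter(struct cpuinfo_x86 *c)
        {
            u64 val;

            /* Piledriver cores: family 15h, models 0x02 .. 0x1f */
            if (c->x86 != 0x15 || c->x86_model < 0x02 || c->x86_model > 0x1f)
                return;

            if (!rdmsrl_safe(0xc0011021, &val) && !(val & 0x1e)) {
                val |= 0x1e;   /* assumed WAF-disable bits */
                wrmsrl_safe(0xc0011021, val);
            }
        }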
* x86, cpu: Preset default tlb_flushall_shift on AMD (Borislav Petkov, 2012-08-06)

    Run the mprotect.c microbenchmark on all our families >= K8 and preset the flushall shift variable accordingly.

    Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
    Link: http://lkml.kernel.org/r/1344272439-29080-5-git-send-email-bp@amd64.org
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
* x86, cpu: Add AMD TLB size detection (Borislav Petkov, 2012-08-06)

    Read I- and DTLB entries count from CPUID on AMD. Handle all the different family-specific cases.

    Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
    Link: http://lkml.kernel.org/r/1344272439-29080-4-git-send-email-bp@amd64.org
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
* x86, cpu: Rename checking_wrmsrl() to wrmsrl_safe() (H. Peter Anvin, 2012-06-07)

    Rename checking_wrmsrl() to wrmsrl_safe(), to match the naming convention used by all the other MSR access functions/macros.

    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
* x86, cpu, amd: Deprecate AMD-specific MSR variants (Borislav Petkov, 2012-06-07)

    Now that all users of {rd,wr}msr_amd_safe have been fixed, deprecate their use by making them private to amd.c and adding warnings when they are used on anything other than K8.

    Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
    Link: http://lkml.kernel.org/r/1338562358-28182-5-git-send-email-bp@amd64.org
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
* x86, cpu, amd: Fix crash as Xen Dom0 on AMD Trinity systems (Andre Przywara, 2012-06-07)

    f7f286a910221 ("x86/amd: Re-enable CPU topology extensions in case BIOS has disabled it") wrongfully added code which used the AMD-specific {rd,wr}msr variants for no real reason. This caused boot panics on xen which wasn't initializing the {rd,wr}msr_safe_regs pv_ops members properly.

    This, in turn, caused a heated discussion leading to us reviewing all uses of the AMD-specific variants and removing them where unneeded (almost everywhere except an obscure K8 BIOS fix, see 6b0f43ddfa358).

    Finally, this patch switches to the standard {rd,wr}msr*_safe* variants which should've been used in the first place anyway and avoided unneeded excitation with xen.

    Signed-off-by: Andre Przywara <andre.przywara@amd.com>
    Link: http://lkml.kernel.org/r/1338562358-28182-4-git-send-email-bp@amd64.org
    Cc: Andreas Herrmann <andreas.herrmann3@amd.com>
    Link: <http://lkml.kernel.org/r/1338383402-3838-1-git-send-email-andre.przywara@amd.com>
    [Boris: correct and expand commit message]
    Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
* x86/amd: Re-enable CPU topology extensions in case BIOS has disabled it (Andreas Herrmann, 2012-04-27)

    BIOS will switch off the corresponding feature flag on family 15h models 10h-1fh non-desktop CPUs.

    The topology extension CPUID leafs are required to detect which cores belong to the same compute unit. (thread siblings mask is set accordingly and also correct information about L1i and L2 cache sharing depends on this).

    W/o this patch we wouldn't see which cores belong to the same compute unit and also cache sharing information for L1i and L2 would be incorrect on such systems.

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86/platform: Remove incorrect error message in x86_default_fixup_cpu_id() (Andreas Herrmann, 2012-04-16)

    It's only called from amd.c:srat_detect_node(). The introduced condition for calling the fixup code is true for all AMD multi-node processors, e.g. Magny-Cours and Interlagos. There we have 2 NUMA nodes on one socket. Thus there are cores having different numa-node-id but with equal phys_proc_id.

    There is no point in printing error messages in such a situation.

    The confusing/misleading error message was introduced with commit 64be4c1c2428e148de6081af235e2418e6a66dda ("x86: Add x86_init platform override to fix up NUMA core numbering").

    Remove the default fixup function (especially the error message) and replace it by a NULL pointer check, move the Numascale-specific condition for calling the fixup into the fixup-function itself and slightly adapt the comment.

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Acked-by: Borislav Petkov <borislav.petkov@amd.com>
    Cc: <stable@kernel.org>
    Cc: <sp@numascale.com>
    Cc: <bp@amd64.org>
    Cc: <daniel@numascale-asia.com>
    Link: http://lkml.kernel.org/r/20120402160648.GR27684@alberich.amd.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
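    The resulting call pattern in srat_detect_node(), roughly (c and node come from the surrounding function):

        /* Only platforms that actually need renumbering (e.g. NumaConnect)
         * install a fixup; everyone else silently skips it instead of
         * tripping a noisy default handler. */
        if (x86_cpuinit.fixup_cpu_id)
            x86_cpuinit.fixup_cpu_id(c, node);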
* x86/amd: Remove broken links from comment and kernel message (Andreas Herrmann, 2012-04-16)

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Link: http://lkml.kernel.org/r/20120411151238.GA4794@alberich.amd.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86/sched/perf/AMD: Set sched_clock_stable (Borislav Petkov, 2012-02-07)

    Stephane Eranian reported that doing scheduler latency measurements with perf on AMD doesn't work out as expected due to the fact that the sched_clock() granularity is too coarse, i.e. done in jiffies due to sched_clock_stable not being set, which, if set, would mean that we get to use the TSC as sample source, which would give us much higher precision.

    However, there's no reason not to set sched_clock_stable on AMD because all families from F10h and upwards do have an invariant TSC and have the CPUID flag to prove it (CPUID_8000_0007_EDX[8]).

    Make it so, #1.

    Signed-off-by: Borislav Petkov <bp@alien8.de>
    Cc: Borislav Petkov <bp@amd64.org>
    Cc: Venki Pallipadi <venki@google.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Stephane Eranian <eranian@google.com>
    Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
    Cc: Robert Richter <robert.richter@amd.com>
    Cc: Eric Dumazet <eric.dumazet@gmail.com>
    Cc: Andreas Herrmann <andreas.herrmann3@amd.com>
    Link: http://lkml.kernel.org/r/20120206132546.GA30854@quad
    [ Should any non-standard system break the TSC, we should mark them so explicitly, in their platform init handler, or in a DMI quirk. ]
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
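    Sketched from memory, the check in early_init_amd() amounts to this (c->x86_power caches CPUID 8000_0007 EDX, so bit 8 is the invariant-TSC flag):

        if (c->x86_power & (1 << 8)) {
            set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
            set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
            if (!check_tsc_unstable())
                sched_clock_stable = 1;   /* the pre-3.14 flag was a plain int */
        }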
* x86: Simplify code by removing a !SMP #ifdefs from 'struct cpuinfo_x86' (Kevin Winchester, 2011-12-21)

    Several fields in struct cpuinfo_x86 were not defined for the !SMP case, likely to save space. However, those fields still have some meaning for UP, and keeping them allows some #ifdef removal from other files. The additional size of the UP kernel from this change is not significant enough to worry about keeping up the distinction:

    	   text	   data	    bss	    dec	    hex	filename
    	4737168	 506459	 972040	6215667	 5ed7f3	vmlinux.o.before
    	4737444	 506459	 972040	6215943	 5ed907	vmlinux.o.after

    for a difference of 276 bytes for an example UP config.

    If someone wants those 276 bytes back badly then it should be implemented in a cleaner way.

    Signed-off-by: Kevin Winchester <kjwinchester@gmail.com>
    Cc: Steffen Persvold <sp@numascale.com>
    Link: http://lkml.kernel.org/r/1324428742-12498-1-git-send-email-kjwinchester@gmail.com
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: Add x86_init platform override to fix up NUMA core numbering (Daniel J Blueman, 2011-12-05)

    Add an x86_init vector for handling inconsistent core numbering. This is useful for multi-fabric platforms, such as Numascale NumaConnect.

    v2:
      - use struct x86_cpuinit_ops
      - provide default fall-back function to warn

    Signed-off-by: Daniel J Blueman <daniel@numascale-asia.com>
    Cc: Steffen Persvold <sp@numascale.com>
    Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
    Link: http://lkml.kernel.org/r/1323073238-32686-2-git-send-email-daniel@numascale-asia.com
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: Fix boot failures on older AMD CPU's (Linus Torvalds, 2011-12-04)

    People with old AMD chips are getting hung boots, because commit bcb80e53877c ("x86, microcode, AMD: Add microcode revision to /proc/cpuinfo") moved the microcode detection too early into "early_init_amd()".

    At that point we are *so* early in the boot that the exception tables haven't even been set up yet, so the whole

    	rdmsr_safe(MSR_AMD64_PATCH_LEVEL, &c->microcode, &dummy);

    doesn't actually work: if the rdmsr does a GP fault (due to a non-existent MSR register on older CPU's), we can't fix it up yet, and the boot fails.

    Fix it by simply moving the code to a slightly later point in the boot (init_amd() instead of early_init_amd()), since the kernel itself doesn't even really care about the microcode patchlevel at this point (or really ever: it's made available to user space in /proc/cpuinfo, and updated if you do a microcode load).

    Reported-tested-and-bisected-by: Larry Finger <Larry.Finger@lwfinger.net>
    Tested-by: Bob Tracy <rct@gherkin.frus.com>
    Acked-by: Borislav Petkov <borislav.petkov@amd.com>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
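    The call in question, now issued from init_amd() where exception fixups already work (the rdmsr_safe() line is quoted in the message above; `dummy` just absorbs the high half of the MSR):

        u32 dummy;

        /* Safe even on old CPUs without this MSR: a #GP is fixed up and the
         * call returns an error instead of hanging early boot. */
        rdmsr_safe(MSR_AMD64_PATCH_LEVEL, &c->microcode, &dummy);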
* x86: Fix files explicitly requiring export.h for EXPORT_SYMBOL/THIS_MODULE (Paul Gortmaker, 2011-10-31)

    These files were implicitly getting EXPORT_SYMBOL via device.h which was including module.h, but that will be fixed up shortly. By fixing these now, we can avoid seeing things like:

    	arch/x86/kernel/rtc.c:29: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’
    	arch/x86/kernel/pci-dma.c:20: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’
    	arch/x86/kernel/e820.c:69: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL_GPL’

    [ with input from Randy Dunlap <rdunlap@xenotime.net> and also from Stephen Rothwell <sfr@canb.auug.org.au> ]

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
* Merge branch 'x86-microcode-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2011-10-28)

    * 'x86-microcode-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      x86, microcode, AMD: Add microcode revision to /proc/cpuinfo
      x86, microcode: Correct microcode revision format
      coretemp: Get microcode revision from cpu_data
      x86, intel: Use c->microcode for Atom errata check
      x86, intel: Output microcode revision in /proc/cpuinfo
      x86, microcode: Don't request microcode from userspace unnecessarily

    Fix up trivial conflicts in arch/x86/kernel/cpu/amd.c (conflict between moving AMD BSP code to cpu_dev helper function and adding AMD microcode revision to /proc/cpuinfo code)
* x86, microcode, AMD: Add microcode revision to /proc/cpuinfo (Borislav Petkov, 2011-10-19)

    Enable microcode revision output for AMD after 506ed6b53e00 ("x86, intel: Output microcode revision in /proc/cpuinfo") did it for Intel.

    Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
* x86, amd: Include elf.h explicitly, prepare the code for the module.h split (Stephen Rothwell, 2011-08-10)

    When the module.h splitting tree is merged to the latest tip:x86/cpu tree, the x86_64 allmodconfig build fails like this:

    	arch/x86/kernel/cpu/amd.c: In function 'bsp_init_amd':
    	arch/x86/kernel/cpu/amd.c:437:3: error: 'va_align' undeclared (first use in this function)
    	arch/x86/kernel/cpu/amd.c:438:23: error: 'ALIGN_VA_32' undeclared (first use in this function)
    	arch/x86/kernel/cpu/amd.c:438:37: error: 'ALIGN_VA_64' undeclared (first use in this function)

    This is caused by the module.h split up interacting with commit dfb09f9b7ab0 ("x86, amd: Avoid cache aliasing penalties on AMD family 15h") from the tip:x86/cpu tree.

    I have added the following patch for today (this, or something similar, could be applied to the tip tree directly - the export.h include below was added by the module.h splitup). So include elf.h to use va_align and remove this implicit dependency on module.h doing it for us.

    Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
    Cc: Borislav Petkov <borislav.petkov@amd.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
    Link: http://lkml.kernel.org/r/20110810114956.238d66772883636e3040d29f@canb.auug.org.au
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, amd: Move BSP code to cpu_dev helper (Borislav Petkov, 2011-08-05)

    Move code which is run once on the BSP during boot into the cpu_dev helper.

    [ hpa: removed bogus cpu_has -> static_cpu_has conversion ]

    Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
    Link: http://lkml.kernel.org/r/20110805180409.GC26217@aftab
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
* x86, amd: Avoid cache aliasing penalties on AMD family 15h (Borislav Petkov, 2011-08-05)

    This patch provides performance tuning for the "Bulldozer" CPU. With its shared instruction cache there is a chance of generating an excessive number of cache cross-invalidates when running specific workloads on the cores of a compute module.

    This excessive amount of cross-invalidations can be observed if cache lines backed by shared physical memory alias in bits [14:12] of their virtual addresses, as those bits are used for the index generation.

    This patch addresses the issue by clearing all the bits in the [14:12] slice of the file mapping's virtual address at generation time, thus forcing those bits the same for all mappings of a single shared library across processes and, in doing so, avoids instruction cache aliases.

    It also adds the command line option "align_va_addr=(32|64|on|off)" with which virtual address alignment can be enabled for 32-bit or 64-bit x86 individually, or both, or be completely disabled.

    This change leaves virtual region address allocation on other families and/or vendors unaffected.

    Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
    Link: http://lkml.kernel.org/r/1312550110-24160-2-git-send-email-bp@amd64.org
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
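    The alignment itself is simple bit arithmetic; a hedged sketch of the idea (the helper name is invented, and the real code also honours align_va_addr= and only touches file-backed mappings):

        /* Bits [14:12] of the virtual address feed the L1i index on family 15h. */
        #define L1I_ALIAS_BITS	0x7000UL

        /* Round a page-aligned mmap() candidate up to the next address whose
         * bits [14:12] are zero, so every mapping of a shared file aliases
         * identically across processes. */
        static unsigned long align_for_l1i(unsigned long addr)
        {
            return (addr + L1I_ALIAS_BITS) & ~L1I_ALIAS_BITS;
        }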
* x86, amd: Do not enable ARAT feature on AMD processors below family 0x12 (Boris Ostrovsky, 2011-05-26)

    Commit b87cf80af3ba4b4c008b4face3c68d604e1715c6 added support for ARAT (Always Running APIC timer) on AMD processors that are not affected by erratum 400. This erratum is present on certain processor families and prevents APIC timer from waking up the CPU when it is in a deep C state, including C1E state.

    Determining whether a processor is affected by this erratum may have some corner cases and handling these cases is somewhat complicated. In the interest of simplicity we won't claim ARAT support on processor families below 0x12 and will go back to broadcasting timer when going idle.

    Signed-off-by: Boris Ostrovsky <ostr@amd64.org>
    Link: http://lkml.kernel.org/r/1306423192-19774-1-git-send-email-ostr@amd64.org
    Tested-by: Boris Petkov <borislav.petkov@amd.com>
    Cc: Hans Rosenfeld <Hans.Rosenfeld@amd.com>
    Cc: Andreas Herrmann <Andreas.Herrmann3@amd.com>
    Cc: Chuck Ebbert <cebbert@redhat.com>
    Cc: stable@kernel.org # 32.x, 38.x, 39.x
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
* x86, amd: Use _safe() msr access for GartTlbWlk disable code (Roedel, Joerg, 2011-05-20)

    The workaround for Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=33012 introduced a read and a write to the MC4 mask msr.

    Unfortunately this MSR is not emulated by the KVM hypervisor, so the kernel gets a #GP and crashes when applying this workaround while running inside KVM.

    This issue was reported as: https://bugzilla.kernel.org/show_bug.cgi?id=35132 and is fixed with this patch. The change just lets the kernel ignore any #GP it gets while accessing this MSR by using the _safe msr access methods.

    Reported-by: Török Edwin <edwintorok@gmail.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
    Cc: Rafael J. Wysocki <rjw@sisk.pl>
    Cc: Maciej Rutecki <maciej.rutecki@gmail.com>
    Cc: Avi Kivity <avi@redhat.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: <stable@kernel.org> # .39.x
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, AMD: Fix ARAT feature setting again (Borislav Petkov, 2011-05-17)

    Trying to enable the local APIC timer on early K8 revisions uncovers a number of other issues with it, in conjunction with the C1E enter path on AMD. Fixing those causes much more churn and troubles than the benefit of using that timer brings so don't enable it on K8 at all, falling back to the original functionality the kernel had wrt to that.

    Reported-and-bisected-by: Nick Bowler <nbowler@elliptictech.com>
    Cc: Boris Ostrovsky <Boris.Ostrovsky@amd.com>
    Cc: Andreas Herrmann <andreas.herrmann3@amd.com>
    Cc: Greg Kroah-Hartman <greg@kroah.com>
    Cc: Hans Rosenfeld <hans.rosenfeld@amd.com>
    Cc: Nick Bowler <nbowler@elliptictech.com>
    Cc: Joerg-Volker-Peetz <jvpeetz@web.de>
    Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
    Link: http://lkml.kernel.org/r/1305636919-31165-3-git-send-email-bp@amd64.org
    Signed-off-by: Ingo Molnar <mingo@elte.hu>