path: root/arch/x86/mm
Commit message    Author    Age
* Merge branch 'x86/fixmap' into x86/devel    Ingo Molnar    2008-07-08

    Conflicts:
        arch/x86/mm/init_64.c

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * x86: unify __set_fixmap, fix    Ingo Molnar    2008-06-20

    fix build failure:

        arch/x86/mm/pgtable.c:280: warning: ‘enum fixed_addresses’ declared inside parameter list
        arch/x86/mm/pgtable.c:280: warning: its scope is only this definition or declaration, which is probably not what you want
        arch/x86/mm/pgtable.c:280: error: parameter 1 (‘idx’) has incomplete type

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * x86/paravirt/xen: add set_fixmap pv_mmu_ops    Jeremy Fitzhardinge    2008-06-20

    Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
    Signed-off-by: Juan Quintela <quintela@redhat.com>
    Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
    Signed-off-by: Mark McLoughlin <markmc@redhat.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * x86: implement set_pte_vaddr    Jeremy Fitzhardinge    2008-06-20

    Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * x86: unify __set_fixmap    Jeremy Fitzhardinge    2008-06-20

    In both cases, I went with the 32-bit behaviour.

    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | Merge branch 'x86/numa' into x86/devel    Ingo Molnar    2008-07-08

    Conflicts:
        arch/x86/Kconfig
        arch/x86/kernel/e820.c
        arch/x86/kernel/efi_64.c
        arch/x86/kernel/mpparse.c
        arch/x86/kernel/setup.c
        arch/x86/kernel/setup_32.c
        arch/x86/mm/init_64.c
        include/asm-x86/proto.h

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | x86: add flags parameter to reserve_bootmem_generic()    Bernhard Walle    2008-07-08

    This patch adds a 'flags' parameter to reserve_bootmem_generic(), just as
    one has already been added to reserve_bootmem() in commit
    72a7fe3967dbf86cb34e24fbf1d957fe24d2f246.  It also changes all users to
    use BOOTMEM_DEFAULT, which doesn't effectively change the behaviour.

    Since the change is x86-specific, I don't think it's necessary to add a
    new API for migration.  There are only 4 users of that function.

    The change is necessary for the next patch, which uses
    reserve_bootmem_generic() for crashkernel reservation.

    Signed-off-by: Bernhard Walle <bwalle@suse.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
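
    A minimal sketch of the resulting call convention (the caller and its
    variable names are illustrative, not taken from the patch):

        #include <linux/bootmem.h>

        /* The extra 'flags' argument: BOOTMEM_DEFAULT keeps the old
         * behaviour, BOOTMEM_EXCLUSIVE would instead fail if the range is
         * already reserved. */
        static void __init sketch_reserve_crashkernel(unsigned long crash_base,
                                                      unsigned long crash_size)
        {
                reserve_bootmem_generic(crash_base, crash_size, BOOTMEM_DEFAULT);
        }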
| * | x86: numa_64.c fix shadowed variable    Thomas Gleixner    2008-07-08

    sparse mutters:

        arch/x86/mm/numa_64.c:195:27: warning: symbol 'end_pfn' shadows an earlier one

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
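
    A minimal illustration (not the actual numa_64.c code) of the kind of
    shadowing sparse complains about: a file-scope end_pfn hidden by a local
    variable of the same name.

        unsigned long end_pfn;                  /* file-scope definition */

        static void sketch_node_range(unsigned long start_pfn)
        {
                /* this local silently shadows the file-scope end_pfn;
                 * every use below refers to the local only */
                unsigned long end_pfn = start_pfn + 1024;

                (void)end_pfn;
        }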
| * | x86: numa_64.c make local variables static    Thomas Gleixner    2008-07-08

    plat_node_bdata, cmdline, nodemap_addr, nodemap_size are local to
    numa_64.c.  Make them static.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | x86: remove the static 256k node_to_cpumask_map    Mike Travis    2008-07-08

    * Consolidate node_to_cpumask operations and remove the 256k byte
      node_to_cpumask_map.  This is done by allocating the node_to_cpumask_map
      array after the number of possible nodes (nr_node_ids) is known.

    * Debug printouts when CONFIG_DEBUG_PER_CPU_MAPS is active have been
      increased.  It now shows faults when calling node_to_cpumask() and
      node_to_cpumask_ptr().

    For inclusion into sched-devel/latest tree.

    Based on:
        git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
      + sched-devel/latest .../mingo/linux-2.6-sched-devel.git

    Signed-off-by: Mike Travis <travis@sgi.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * | x86: cleanup early per cpu variables/accesses v4    Mike Travis    2008-07-08

    * Introduce a new PER_CPU macro called "EARLY_PER_CPU".  This is used by
      some per_cpu variables that are initialized and accessed before there
      are per_cpu areas allocated.

      ["Early" in respect to per_cpu variables is "earlier than the per_cpu
      areas have been setup".]

      This patchset adds these new macros:

          DEFINE_EARLY_PER_CPU(_type, _name, _initvalue)
          EXPORT_EARLY_PER_CPU_SYMBOL(_name)
          DECLARE_EARLY_PER_CPU(_type, _name)

          early_per_cpu_ptr(_name)
          early_per_cpu_map(_name, _idx)
          early_per_cpu(_name, _cpu)

      The DEFINE macro defines the per_cpu variable as well as the early map
      and pointer.  It also initializes the per_cpu variable and map elements
      to "_initvalue".  The early_* macros provide access to the initial map
      (usually setup during system init) and the early pointer.  This pointer
      is initialized to point to the early map but is then NULL'ed when the
      actual per_cpu areas are setup.  After that the per_cpu variable is the
      correct access to the variable.

      The early_per_cpu() macro is not very efficient but does show how to
      access the variable if you have a function that can be called both
      "early" and "late".  It tests the early ptr to be NULL, and if not then
      it's still valid.  Otherwise, the per_cpu variable is used instead:

          #define early_per_cpu(_name, _cpu)                  \
                  (early_per_cpu_ptr(_name) ?                 \
                          early_per_cpu_ptr(_name)[_cpu] :    \
                          per_cpu(_name, _cpu))

      A better method is to actually check the pointer manually.  In the case
      below, numa_set_node can be called both "early" and "late":

          void __cpuinit numa_set_node(int cpu, int node)
          {
                  int *cpu_to_node_map = early_per_cpu_ptr(x86_cpu_to_node_map);

                  if (cpu_to_node_map)
                          cpu_to_node_map[cpu] = node;
                  else
                          per_cpu(x86_cpu_to_node_map, cpu) = node;
          }

    * Add a flag "arch_provides_topology_pointers" that indicates pointers to
      topology cpumask_t maps are available.  Otherwise, use the function
      returning the cpumask_t value.  This is useful if cpumask_t set size is
      very large to avoid copying data on to/off of the stack.

    * The coverage of CONFIG_DEBUG_PER_CPU_MAPS has been increased while the
      non-debug case has been optimized a bit.

    * Remove an unreferenced compiler warning in drivers/base/topology.c

    * Clean up #ifdef in setup.c

    For inclusion into sched-devel/latest tree.

    Based on:
        git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
      + sched-devel/latest .../mingo/linux-2.6-sched-devel.git

    Signed-off-by: Mike Travis <travis@sgi.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* | | Merge branch 'x86/mpparse' into x86/devel    Ingo Molnar    2008-07-08

    Conflicts:
        arch/x86/Kconfig
        arch/x86/kernel/io_apic_32.c
        arch/x86/kernel/setup_64.c
        arch/x86/mm/init_32.c

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86: fix compiling when CONFIG_X86_MPPARSE is not set    Yinghai Lu    2008-07-08

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86: merge setup_memory_map with e820    Yinghai Lu    2008-07-08

    ... and kill e820_32/64.c and e820_32/64.h

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86: kill bad_ppro    Yinghai Lu    2008-07-08

    so we don't punish all other CPUs that don't have that problem when
    initializing highmem

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86, mm: use add_highpages_with_active_regions() for high pages init v2    Yinghai Lu    2008-07-08

    use early_node_map to init high pages, so we can remove page_is_ram() and
    page_is_reserved_early() in the big loop with add_one_highpage

    also remove page_is_reserved_early(), it is not needed anymore.

    v2: fix the build of other platforms

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86: replace shrink_active_range() with remove_active_range()    Yinghai Lu    2008-07-08

    in case we have kva before ramdisk on a node, we still need to use those
    ranges.

    v2: reserve_early the kva RAM area, in case there are holes in highmem,
    to avoid those areas being treated as free high pages.

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86: clean up reserve_bootmem_generic() and port it to 32-bit    Yinghai Lu    2008-07-08

    1. add reserve_bootmem_generic for 32bit
    2. change len to unsigned long
    3. make early_res_to_bootmem use it

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86: add flags parameter to reserve_bootmem_generic()    Bernhard Walle    2008-07-08

    This patch adds a 'flags' parameter to reserve_bootmem_generic(), just as
    one has already been added to reserve_bootmem() in commit
    72a7fe3967dbf86cb34e24fbf1d957fe24d2f246.  It also changes all users to
    use BOOTMEM_DEFAULT, which doesn't effectively change the behaviour.

    Since the change is x86-specific, I don't think it's necessary to add a
    new API for migration.  There are only 4 users of that function.

    The change is necessary for the next patch, which uses
    reserve_bootmem_generic() for crashkernel reservation.

    Signed-off-by: Bernhard Walle <bwalle@suse.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | Merge branch 'linus' into tmp.x86.mpparse.new    Ingo Molnar    2008-07-08
| * | | x86, numa, 32-bit: use find_e820_area() to find KVA RAM on node    Yinghai Lu    2008-06-10

    don't assume we can use RAM near the end of every node.  Especially
    systems that have little memory could have the kva address and kva RAM
    all below max_low_pfn.

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | mm, x86: shrink_active_range() should check all    Yinghai Lu    2008-06-10

    Now we are using register_e820_active_regions() instead of
    add_active_range() directly, so end_pfn could differ between the value in
    early_node_map and node_end_pfn.  So we need to make shrink_active_range()
    smarter.

    shrink_active_range() is a generic MM function in mm/page_alloc.c but it
    is only used on 32-bit x86.  Should we move it back to some file in
    arch/x86?

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86: reserve highmem pages via reserve_early    Huang, Ying    2008-06-05

    This patch makes early reserved highmem pages become reserved pages.  This
    can be used for highmem pages allocated by the bootloader, such as the EFI
    memory map, the linked list of setup_data, etc.

    Signed-off-by: Huang Ying <ying.huang@intel.com>
    Cc: andi@firstfloor.org
    Cc: mingo@redhat.com
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * | | x86: make 32-bit use e820_register_active_regions()    Yinghai Lu    2008-06-04

    this way 32-bit is more similar to 64-bit, and gets smarter e820 and numa
    handling.

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86, numa, 32-bit: make sure we get kva space    Yinghai Lu    2008-06-04

    when a 1/3 user/kernel split is used and less memory is installed, or if
    we have a big hole below 4g, max_low_pfn is still using 3g-128m.  Try to
    go down from max_low_pfn until we get it, otherwise we will panic.

    need to make the 32-bit code use register_e820_active_regions ... later.

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86: clean up max_pfn_mapped usage - 32-bit    Yinghai Lu    2008-06-03

    on 32-bit, after the initial page table is done in head_32.S, we get an
    initial max_pfn_mapped, and then kernel_physical_mapping_init will give
    us a final one.

    We need to use that to make sure find_e820_area will get valid addresses
    for boot_map and for NODE_DATA(0) on numa32.

    XEN PV and lguest may need to assign max_pfn_mapped too.

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86, numa, 32-bit: avoid clash between ramdisk and kva    Yinghai Lu    2008-06-03

    use find_e820_area to get address space...

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86, numa, 32-bit: print out debug info on all kvas    Yinghai Lu    2008-06-03

    also fix the print out of node_remap_end_vaddr

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86, 32-bit: change propagate_e820_map() back to find_max_pfn()    Yinghai Lu    2008-06-03

    we don't need to call memory_present that early.  numa and sparse will
    call memory_present later and might even fail; they will call
    memory_present for the full range.

    also for sparse it will call alloc_bootmem ... before we set up bootmem.

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86: set node_remap_size[0] in fallback path    Yinghai Lu    2008-06-03

    ... otherwise alloc_remap will not get node_mem_map from the kva area, and
    alloc_node_mem_map has to alloc_bootmem_node to get mem_map.  It will use
    two low address copies ...

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86, numa, 32-bit: increase max_elements to 1024    Yinghai Lu    2008-06-03

    so every element will represent 64M instead of 256M.  AMD Opteron could
    have HW memory hole remapping, so node 0 could cover [0, 8g + 64M).
    Reduce the element size to 64M to keep that range on node 0.

    Later we need to use find_e820_area() to allocate memory_node_map like on
    64-bit.  But we need to move memory_present out of populate_mem_map...

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86: extend e820 early_res support 32bit -fix #5    Yinghai Lu    2008-05-31

    reserve early numa kva, so it will not clash with new RAMDISK

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86: extend e820 early_res support 32bit -fix #4    Yinghai Lu    2008-05-31

    reserve_early pgdata for 32bit numa

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86 boot: remove some unused extern function declarations    Paul Jackson    2008-05-25

    Remove three extern declarations for routines that don't exist.

    Fix a typo in a comment.

    Signed-off-by: Paul Jackson <pj@sgi.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
*-----. Merge branches 'x86/mmio', 'x86/delay', 'x86/idle', 'x86/oprofile', 'x86/debug', 'x86/ptrace' and 'x86/amd-iommu' into x86/devel    Ingo Molnar    2008-07-08
| | | * | | x86: small unifications of address printing    Vegard Nossum    2008-07-01

    'man 3 printf' tells me that %p should be printed as if by %#x, but this
    is not true for the kernel, which does not use the '0x' prefix for the %p
    conversion specifier.

    A small cast to (void *) is also prettier than #ifdef/#else/#endif.

    Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
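
    A small sketch of the pattern being unified (the message text and the
    function are made up, not the lines touched by the patch): one format
    string with a (void *) cast replaces an #ifdef'ed "%08lx"/"%016lx" pair,
    and the kernel's %p prints the address without a "0x" prefix.

        #include <linux/kernel.h>

        static void sketch_report_fault(unsigned long address)
        {
                /* one format string for both 32-bit and 64-bit */
                printk(KERN_ALERT "unable to handle paging request at %p\n",
                       (void *)address);
        }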
*-------------. Merge branches 'x86/numa-fixes', 'x86/apic', 'x86/apm', 'x86/bitops', 'x86/build', 'x86/cleanups', 'x86/cpa', 'x86/cpu', 'x86/defconfig', 'x86/gart', 'x86/i8259', 'x86/intel', 'x86/irqstats', 'x86/kconfig', 'x86/ldt', 'x86/mce', 'x86/memtest', 'x86/pat', 'x86/ptemask', 'x86/resumetrace', 'x86/threadinfo', 'x86/timers', 'x86/vdso' and 'x86/xen' into x86/devel    Ingo Molnar    2008-07-08
| | | | | | | | * | x86: pat.c final cleanup of loop body in reserve_memtype    Andreas Herrmann    2008-06-24

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
    Cc: Suresh B Siddha <suresh.b.siddha@intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | | | | | | * | x86: pat.c introduce function to check for conflicts with existing memtypes    Andreas Herrmann    2008-06-24

    ... to strip down the loop body in reserve_memtype.

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
    Cc: Suresh B Siddha <suresh.b.siddha@intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | | | | | | * | x86: pat.c consolidate list_add handling in reserve_memtype    Andreas Herrmann    2008-06-24

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
    Cc: Suresh B Siddha <suresh.b.siddha@intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | | | | | | * | x86: pat.c consolidate error/debug messages in reserve_memtype    Andreas Herrmann    2008-06-24

    ... and move the last debug message out of the locked section.

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
    Cc: Suresh B Siddha <suresh.b.siddha@intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | | | | | | * | x86: pat.c more trivial changes    Andreas Herrmann    2008-06-24

    - add BUG statement to catch invalid start and end parameters
    - No need to track the actual type in both req_type and actual_type --
      keep req_type unchanged.
    - removed (IMHO) superfluous comments

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
    Cc: Suresh B Siddha <suresh.b.siddha@intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
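
    The "BUG statement" mentioned above is presumably a range sanity check of
    this shape (sketch only, not quoted from the patch):

        BUG_ON(start >= end);   /* reject an empty or inverted range */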
| | | | | | | | * | x86: pat.c choose more crisp variable names    Andreas Herrmann    2008-06-24

    - parse/ml => entry (within list_for_each and friends)
    - new_entry => new
    - ret_type => new_type (to avoid confusion with req_type)

    (... to make it more readable ...)

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
    Cc: Suresh B Siddha <suresh.b.siddha@intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | | | | | | * | x86: introduce macro to check whether an address range is in the ISA range    Andreas Herrmann    2008-06-24

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
    Cc: Suresh B Siddha <suresh.b.siddha@intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
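
    A sketch of what such a check can look like (the macro name and the
    constant values are assumptions here, not quoted from the patch):

        #define ISA_START_ADDRESS       0xa0000
        #define ISA_END_ADDRESS         0x100000

        /* true if [s, e) lies entirely within the legacy ISA range */
        #define is_ISA_range(s, e) \
                ((s) >= ISA_START_ADDRESS && (e) <= ISA_END_ADDRESS)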
| | | | | | | | * | x86: shrink pat_x_mtrr_type to its essentials    Andreas Herrmann    2008-06-19

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
    Cc: Suresh B Siddha <suresh.b.siddha@intel.com>
    Cc: Hugh Dickins <hugh@veritas.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | | | | | | * | x86: PAT: make pat_x_mtrr_type() more readable    Hugh Dickins    2008-06-18

    Clean up over-complications in pat_x_mtrr_type().

    And if reserve_memtype() ignores stray req_type bits when pat_enabled,
    it's better to mask them off also when it is not.

    Signed-off-by: Hugh Dickins <hugh@veritas.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | | | | | | * | Merge branch 'linus' into x86/pat    Ingo Molnar    2008-06-16
| | | | | | | | * | x86: rename pat_wc_enabled to pat_enabled    Andreas Herrmann    2008-06-12

    BTW, what does pat_wc_enabled stand for?  Does it mean "write-combining"?

    Currently it is used to globally switch on or off PAT support.  Thus I
    renamed it to pat_enabled.  I think this increases readability (and hope
    that I didn't miss something).

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | | | | | | * | x86: PAT: fixed checkpatch errors (and whitespaces)    Andreas Herrmann    2008-06-12

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | | | | | | * | x86: PAT: fix ambiguous paranoia check in pat_init()    Andreas Herrmann    2008-06-12

    Starting with commit 8d4a4300854f3971502e81dacd930704cb88f606 (x86:
    cleanup PAT cpu validation) the PAT CPU feature flag is not cleared
    anymore.

    Now the error message "PAT enabled, but CPU feature cleared" in pat_init()
    is misleading.

    Furthermore the current code does not check for existence of the PAT CPU
    feature flag if a CPU is whitelisted in validate_pat_support.

    This patch clears pat_wc_enabled if boot CPU has no PAT feature flag and
    adapts the paranoia check.

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>