* x86: use __KERNEL_DS as SS when returning to a kernel thread (Jeremy Fitzhardinge, 2008-07-08)
  This is needed when the kernel is running on RING3, such as under Xen. x86_64 has a weird feature that makes it #GP on iret when SS is a null descriptor. This needs to be tested on bare metal to make sure it doesn't cause any problems. AMD specs say SS is always ignored (except on iret?).
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: save %fs and %gs before load_TLS() and arch_leave_lazy_cpu_mode() (Jeremy Fitzhardinge, 2008-07-08)
  We must do this because load_TLS() may need to clear %fs and %gs (e.g. under Xen).
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, 64-bit: __switch_to(): move arch_leave_lazy_cpu_mode() to the right place (Jeremy Fitzhardinge, 2008-07-08)
  We must leave lazy mode before switching the %fs and %gs selectors.
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, 64-bit: split set_pte_vaddr() (Eduardo Habkost, 2008-07-08)
  We will need to set a pte on l3_user_pgt. Extract set_pte_vaddr_pud() from set_pte_vaddr(); it accepts the l3 page table as a parameter. This change should be a no-op for existing code.
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
  Signed-off-by: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
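  A minimal sketch of the split, assuming the standard x86-64 page-table helpers and eliding allocation of missing pmd/pte pages; see arch/x86/mm/init_64.c for the real code:
    /* Illustrative only: walk from a given l3 (PUD) page instead of always
     * starting from the kernel pgd, so Xen setup can reuse the walk for its
     * user-visible l3 page table. */
    static void set_pte_vaddr_pud(pud_t *pud_page, unsigned long vaddr, pte_t new_pte)
    {
        pud_t *pud = pud_page + pud_index(vaddr);
        pmd_t *pmd = pmd_offset(pud, vaddr);        /* assumed present */
        pte_t *pte = pte_offset_kernel(pmd, vaddr);

        set_pte(pte, new_pte);
        __flush_tlb_one(vaddr);                     /* flush the stale mapping */
    }

    void set_pte_vaddr(unsigned long vaddr, pte_t pteval)
    {
        pgd_t *pgd = pgd_offset_k(vaddr);
        pud_t *pud_page = (pud_t *)pgd_page_vaddr(*pgd);

        /* unchanged behaviour for existing callers */
        set_pte_vaddr_pud(pud_page, vaddr, pteval);
    }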
* x86, 64-bit: PSE no longer a hard requirement (Jeremy Fitzhardinge, 2008-07-08)
  Because Xen doesn't support PSE mappings in guests, all code which assumed the presence of PSE has been changed to fall back to smaller mappings if necessary. As a result, PSE is optional rather than required (though still used wherever possible).
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, 64-bit: create small vmemmap mappings if PSE not available (Jeremy Fitzhardinge, 2008-07-08)
  If PSE is not available, then fall back to 4k page mappings for the vmemmap area.
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
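  A sketch of the fallback decision, assuming the cpu_has_pse test and the generic vmemmap helpers from mm/sparse-vmemmap.c; the real loop in vmemmap_populate() walks whole ranges and NUMA nodes, and the helper name below is illustrative only:
    static int vmemmap_populate_one_pmd(pmd_t *pmd, unsigned long addr,
                                        unsigned long phys, int node)
    {
        if (cpu_has_pse) {
            /* PSE available: cover 2M with a single large-page pmd entry */
            set_pmd(pmd, __pmd(pte_val(pfn_pte(phys >> PAGE_SHIFT,
                                               PAGE_KERNEL_LARGE))));
        } else {
            /* no PSE (e.g. a Xen guest): populate a page of 4k ptes instead */
            if (!vmemmap_pte_populate(pmd, addr, node))
                return -ENOMEM;
        }
        return 0;
    }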
* x86, 64-bit: adjust mapping of physical pagetables to work with Xen (Jeremy Fitzhardinge, 2008-07-08)
  This makes a few changes to the construction of the initial pagetables to work better with paravirt_ops/Xen. The main areas are:
  1. Support non-PSE mapping of memory, since Xen doesn't currently allow 2M pages to be mapped in guests.
  2. Make sure that the ioremap aliases of all pages are dropped before attaching the new page to the pagetable. This avoids having writable aliases of pagetable pages.
  3. Preserve existing pagetable entries, rather than overwriting. It's possible that a fair amount of pagetable has already been constructed, so reuse what's already in place rather than ignoring and overwriting it.
  The algorithm relies on the invariant that any page which is part of the kernel pagetable is itself mapped in the linear memory area. This way, it can avoid using ioremap on a pagetable page. The invariant holds because the code maps memory from low to high addresses, and also allocates memory from low to high. Each allocated page can map at least 2M of address space, so the mapped area will always progress much faster than the allocated area. The algorithm relies on the early boot code mapping enough pages to get started.
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
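  A condensed sketch of point 3 above (preserving existing entries), loosely modelled on the phys_pmd_init()-style loops in init_64.c; allocation of new pte pages is elided, and the _sketch name is not in the kernel:
    /* addr/end are physical addresses being added to the direct mapping */
    static unsigned long phys_pmd_init_sketch(pmd_t *pmd_page,
                                              unsigned long addr, unsigned long end)
    {
        int i = pmd_index(addr);

        for (; i < PTRS_PER_PMD && addr < end; i++, addr += PMD_SIZE) {
            pmd_t *pmd = pmd_page + i;

            if (pmd_val(*pmd))
                continue;       /* already mapped (early boot, Xen, ...): keep it */

            if (cpu_has_pse)
                /* install a 2M mapping directly */
                set_pte((pte_t *)pmd,
                        pfn_pte(addr >> PAGE_SHIFT, PAGE_KERNEL_LARGE));
            /* else: allocate a pte page via alloc_low_page() and fill it
             * with 4k entries (elided here) */
        }
        return addr;
    }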
* x86, 64-bit: split x86_64_start_kernel (Jeremy Fitzhardinge, 2008-07-08)
  Split x86_64_start_kernel() into two pieces:
  The first essentially cleans up after head_64.S: it clears the bss, zaps low identity mappings and sets up some early exception handlers.
  The second part preserves the boot data, reserves the kernel's text/data/bss, pagetables and ramdisk, and then starts the kernel proper.
  This split is so that Xen can call the second part to do the setup it needs; it doesn't need any of the first part, because it doesn't boot via head_64.S, and that work would be redundant or actively damaging.
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
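  A sketch of the resulting shape, assuming the helper names used in head64.c at the time (x86_64_start_reservations being the second half); the bodies are abbreviated:
    void __init x86_64_start_kernel(char *real_mode_data)
    {
        /* part 1: clean up after head_64.S (native boot only) */
        clear_bss();
        zap_identity_mappings();
        /* ... install early exception handlers ... */

        x86_64_start_reservations(real_mode_data);
    }

    void __init x86_64_start_reservations(char *real_mode_data)
    {
        /* part 2: also callable from Xen, which never runs head_64.S */
        copy_bootdata(__va(real_mode_data));
        reserve_early(__pa_symbol(&_text), __pa_symbol(&_end), "TEXT DATA BSS");
        /* ... reserve ramdisk and setup_data, then ... */
        start_kernel();
    }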
* x86, 64-bit: add FIX_PARAVIRT_BOOTMAP fixmap slot (Jeremy Fitzhardinge, 2008-07-08)
  This matches 32 bit.
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* paravirt/x86, 64-bit: move __PAGE_OFFSET to leave a space for hypervisor (Eduardo Habkost, 2008-07-08)
  Set __PAGE_OFFSET to the most negative possible address + 16*PGDIR_SIZE. The gap is to allow a space for a hypervisor to fit. The gap is more or less arbitrary, but it's what Xen needs.
  When booting native, kernel/head_64.S has a set of compile-time generated pagetables used at boot time. This patch removes their absolutely hard-coded layout, and makes it parameterised on __PAGE_OFFSET (and __START_KERNEL_map).
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86/paravirt: define PARA_INDIRECT for indirect asm calls (Jeremy Fitzhardinge, 2008-07-08)
  On 32-bit it's best to use a %cs: prefix to access memory where the other segments may not be set up properly yet. On 64-bit it's best to use a rip-relative addressing mode. Define PARA_INDIRECT() to abstract this and generate the proper addressing mode in each case.
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
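  A sketch of the macro (close to, though not guaranteed identical to, the asm-x86/paravirt.h version), plus how a paravirt asm site would use it:
    #ifdef CONFIG_X86_64
    /* 64-bit: rip-relative addressing works regardless of segment state */
    #define PARA_INDIRECT(addr)     *addr(%rip)
    #else
    /* 32-bit: %cs override, since %ds/%es may not be set up yet */
    #define PARA_INDIRECT(addr)     *%cs:addr
    #endif

    /* usage in paravirt asm, e.g.:
     *     jmp PARA_INDIRECT(pv_cpu_ops + PV_CPU_iret)
     */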
* x86/paravirt: add debugging for missing operations (Jeremy Fitzhardinge, 2008-07-08)
  Rather than just jumping to 0 when there's a missing operation, raise a BUG.
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: preallocate and prepopulate separately (Jeremy Fitzhardinge, 2008-07-08)
  Jan Beulich points out that vmalloc_sync_all() assumes that the kernel's pmds are always present in the pgd. The current pgd construction code will add the pgd to the pgd_list before its pmds have been pre-populated, thereby making it visible to vmalloc_sync_all(). However, because pgd_prepopulate_pmd also does the allocation, it may block and cannot be done under a spinlock.
  The solution is to preallocate the pmds outside the spinlock, then populate them while holding the pgd_list lock. This patch also pulls the pmd preallocation and mop-up functions out to be common, assuming that the compiler will generate no code for them when PREALLOCATED_PMDS is 0. Also, there's no need for pgd_ctor to clear the pgd again, since it's allocated as a zeroed page.
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Cc: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
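  A sketch of the resulting pgd_alloc() ordering, with error handling simplified; the real version lives in arch/x86/mm/pgtable.c:
    pgd_t *pgd_alloc(struct mm_struct *mm)
    {
        pgd_t *pgd;
        pmd_t *pmds[PREALLOCATED_PMDS];
        unsigned long flags;

        pgd = (pgd_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
        if (!pgd)
            return NULL;

        /* may sleep, so must happen before taking pgd_lock */
        if (preallocate_pmds(pmds) != 0) {
            free_page((unsigned long)pgd);
            return NULL;
        }

        spin_lock_irqsave(&pgd_lock, flags);
        pgd_ctor(pgd);                        /* adds pgd to pgd_list */
        pgd_prepopulate_pmd(mm, pgd, pmds);   /* no allocation in here any more */
        spin_unlock_irqrestore(&pgd_lock, flags);

        return pgd;
    }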
* x86/paravirt: add pgd_alloc/free hooks (Jeremy Fitzhardinge, 2008-07-08)
  Add hooks which are called at pgd_alloc/free time. The pgd_alloc hook may return an error code which, if non-zero, causes the pgd allocation to fail. The hooks may be used to allocate/free auxiliary per-pgd information.
  Also fix:
  > * Ingo Molnar <mingo@elte.hu> wrote:
  >
  > include/asm/pgalloc.h: In function 'paravirt_pgd_free':
  > include/asm/pgalloc.h:14: error: parameter name omitted
  > arch/x86/kernel/entry_64.S: In file included from
  > arch/x86/kernel/traps_64.c:51:include/asm/pgalloc.h: In function 'paravirt_pgd_free':
  > include/asm/pgalloc.h:14: error: parameter name omitted
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
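  A sketch of the hook shape, assuming the usual pv_mmu_ops / PVOP_CALL conventions from paravirt.h; treat the exact signatures as illustrative:
    struct pv_mmu_ops {
        /* ... */
        int  (*pgd_alloc)(struct mm_struct *mm);   /* non-zero return fails pgd_alloc() */
        void (*pgd_free)(struct mm_struct *mm, pgd_t *pgd);
        /* ... */
    };

    static inline int paravirt_pgd_alloc(struct mm_struct *mm)
    {
        return PVOP_CALL1(int, pv_mmu_ops.pgd_alloc, mm);
    }

    static inline void paravirt_pgd_free(struct mm_struct *mm, pgd_t *pgd)
    {
        PVOP_VCALL2(pv_mmu_ops.pgd_free, mm, pgd);
    }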
* x86: simplify vmalloc_sync_all (Jeremy Fitzhardinge, 2008-07-08)
  vmalloc_sync_all() is only called from register_die_notifier and alloc_vm_area. Neither is on any performance-critical paths, so vmalloc_sync_all() itself is not on any hot paths.
  Given that the optimisations in vmalloc_sync_all add a fair amount of code and complexity, and are fairly hard to evaluate for correctness, it's better to just remove them to simplify the code rather than worry about its absolute performance.
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, 64-bit: add sync_cmpxchg (Jeremy Fitzhardinge, 2008-07-08)
  Add sync_cmpxchg to match 32-bit's sync_cmpxchg.
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
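  For reference, a sketch of what the 4-byte case looks like (hedged: simplified from the cmpxchg family in asm-x86). The point is the unconditional lock prefix, so the operation stays atomic with respect to another bus master, such as a hypervisor, even on !CONFIG_SMP kernels:
    static inline unsigned long __sync_cmpxchg_u32(volatile void *ptr,
                                                   unsigned long old,
                                                   unsigned long new)
    {
        unsigned long prev;

        /* "lock" is always emitted, unlike plain cmpxchg on UP builds */
        asm volatile("lock; cmpxchgl %k1,%2"
                     : "=a" (prev)
                     : "r" (new), "m" (*(volatile u32 *)ptr), "0" (old)
                     : "memory");
        return prev;
    }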
* x86, 64-bit: add prototype for x86_64_start_kernel() (Jeremy Fitzhardinge, 2008-07-08)
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: build fix (Ingo Molnar, 2008-07-08)
  fix:
  In file included from arch/x86/kernel/setup.c:118:
  include/asm/highmem.h:64: error: expected identifier or '(' before 'do'
  include/asm/highmem.h:64: error: expected identifier or '(' before 'while'
  include/asm/highmem.h:67: error: expected identifier or '(' before 'do'
  include/asm/highmem.h:67: error: expected identifier or '(' before 'while'
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: remove extra newline from setup.c (Ingo Molnar, 2008-07-08)
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: we only have init_pg_tables_end for 32bit (Yinghai Lu, 2008-07-08)
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: change some functions in setup.c to static (Yinghai Lu, 2008-07-08)
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: make x86_find_smp_config depend on 64 bit too (Yinghai Lu, 2008-07-08)
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: move parse elfcorehdr back to setup.c (Yinghai Lu, 2008-07-08)
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: move reserve_standard_io_resources back to setup.c (Yinghai Lu, 2008-07-08)
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: move crashkernel back to setup.c (Yinghai Lu, 2008-07-08)
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: move parse_setup_data back to setup.c (Yinghai Lu, 2008-07-08)
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: move boot_params back to setup.c (Yinghai Lu, 2008-07-08)
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: rename setup_32.c to setup.c (Yinghai Lu, 2008-07-08)
  and let 64 bit use that instead of setup_64.c
  [ mingo@elte.hu ] x86: build fix
  fix:
  arch/x86/kernel/setup.c: In function 'setup_arch':
  arch/x86/kernel/setup.c:561: error: implicit declaration of function 'efi_reserve_early'
  and:
  arch/x86/kernel/setup.c:766: error: implicit declaration of function 'init_cpu_to_node'
  and:
  arch/x86/kernel/setup.c:676: warning: operation on 'max_pfn_mapped' may be undefined
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: space to tab in setup_arch (Yinghai Lu, 2008-07-08)
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: merge 64bit setup_arch into setup_32 (Yinghai Lu, 2008-07-08)
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: add extra includes for 64bit support (Yinghai Lu, 2008-07-08)
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: put global variables for 32bit all together (Yinghai Lu, 2008-07-08)
  Those variables are not needed by 64 bit.
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: update reserve_initrd to support 64bit (Yinghai Lu, 2008-07-08)
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: we can use full bootmem after init_memory_mapping (Yinghai Lu, 2008-07-08)
  So remove outdated comments.
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: rename setup.c to setup_percpu.c (Yinghai Lu, 2008-07-08)
  Some functions need to be moved to setup_numa.c after we merge setup_32/64.c, and some need to be moved back to setup.c.
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: fix memory setup bug (Yinghai Lu, 2008-07-08)
  Interesting...
  [ 0.000000] mapped low ram: 0 - 20000000
  [ 0.000000] low ram: 00000000 - 1fff0000
  [ 0.000000] bootmap 00002000 - 00006000
  max_pfn_mapped > max_low_pfn? It seems init_memory_mapping reveals an old bug. Please check the attached test patch.
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, crashdump, /proc/vmcore: remove CONFIG_EXPERIMENTAL from kdump (Bernhard Walle, 2008-07-08)
  I would suggest removing the "experimental" status from Kdump. Kdump has been in the kernel for a long time now and is used by enterprise distributions. I don't think "experimental" is true any more.
  Signed-off-by: Bernhard Walle <bwalle@suse.de>
  Cc: vgoyal@redhat.com
  Cc: kexec@lists.infradead.org
  Cc: Bernhard Walle <bwalle@suse.de>
  Cc: akpm@linux-foundation.org
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86 boot: only pick up additional EFI memmap if the add_efi_memmap flag is set (Paul Jackson, 2008-07-08)
  Applies on top of the previous patch: x86 boot: add code to add BIOS provided EFI memory entries to kernel
  Instead of always adding EFI memory map entries (if present) to the memory map after initially finding either E820 BIOS memory map entries and/or kernel command line memmap entries, only add such additional EFI memory map entries if the kernel boot option add_efi_memmap is specified.
  Requiring this 'add_efi_memmap' option is backward compatible with kernels that didn't load such additional EFI memory map entries in the first place, and it doesn't override a configuration that tries to replace all E820 or EFI BIOS memory map entries with ones given entirely on the kernel command line.
  Signed-off-by: Paul Jackson <pj@sgi.com>
  Cc: "Yinghai Lu" <yhlu.kernel@gmail.com>
  Cc: "Jack Steiner" <steiner@sgi.com>
  Cc: "Mike Travis" <travis@sgi.com>
  Cc: "Huang, Ying" <ying.huang@intel.com>
  Cc: "Andi Kleen" <andi@firstfloor.org>
  Cc: "Andrew Morton" <akpm@linux-foundation.org>
  Cc: Paul Jackson <pj@sgi.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
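  A sketch of the opt-in plumbing described above, assuming an early_param() hook and the e820_add_region() helper from this series; names and placement are from memory, so treat it as illustrative:
    static int __initdata add_efi_memmap;     /* stays 0 unless the option is given */

    static int __init setup_add_efi_memmap(char *arg)
    {
        add_efi_memmap = 1;
        return 0;
    }
    early_param("add_efi_memmap", setup_add_efi_memmap);

    /* later, while walking the EFI memory descriptors: */
    /*     if (add_efi_memmap)                               */
    /*         e820_add_region(start, size, e820_type);      */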
* mm, generic, x86 boot: more tweaks to hex prints of some pfn addresses (Paul Jackson, 2008-07-08)
  Fix some problems with (and apply on top of) a previous patch: x86 boot: show pfn addresses in hex not decimal in some kernel info printks
  Primarily, change the "0x%8lx" format, which displays a right-aligned, space-filled hex number (spaces between the "0x" prefix and the number), into the "%0#10lx" format, which zero-fills instead of space-fills and uses the printf flag '#' to request the "0x" prefix instead of hard-coding it.
  Also replace some other "0x%lx" formats with "%#lx", again making use of the '#' printf flag.
  Signed-off-by: Paul Jackson <pj@sgi.com>
  Cc: "Yinghai Lu" <yhlu.kernel@gmail.com>
  Cc: "Jack Steiner" <steiner@sgi.com>
  Cc: "Mike Travis" <travis@sgi.com>
  Cc: "Huang, Ying" <ying.huang@intel.com>
  Cc: "Andi Kleen" <andi@firstfloor.org>
  Cc: "Andrew Morton" <akpm@linux-foundation.org>
  Cc: Paul Jackson <pj@sgi.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
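  A standalone userspace demo (not kernel code) of the two formats being swapped, showing why "%0#10lx" prints the same width with zero fill and an automatic "0x" prefix:
    #include <stdio.h>

    int main(void)
    {
        unsigned long pfn_addr = 0x1fff0;

        printf("[0x%8lx]\n", pfn_addr);   /* old: space filled -> [0x   1fff0] */
        printf("[%0#10lx]\n", pfn_addr);  /* new: zero filled  -> [0x0001fff0] */
        printf("[%#lx]\n", pfn_addr);     /* '#' alone         -> [0x1fff0]    */
        return 0;
    }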
* x86: cleanup e820_setup_gap(), add e820_search_gap(), v2 (Alok Kataria, 2008-07-08)
  This is a preparatory patch for the next patch in the series. It moves some code from e820_setup_gap to a new function, e820_search_gap. This patch is part of a bug fix where we walk the ACPI table to calculate a gap for PCI optional devices.
  v1->v2: Patch on top of tip/master. Fixes a bug introduced in the last patch about the type of "last". Also, the new function e820_search_gap now returns whether we found a gap in e820_map.
  Signed-off-by: Alok N Kataria <akataria@vmware.com>
  Cc: lenb@kernel.org
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
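  A sketch of what the extracted helper does, reconstructed from the description above (walk the e820 map from the top down and remember the largest gap found below 4G); the real signature in arch/x86/kernel/e820.c may differ:
    static int __init e820_search_gap(unsigned long *gapstart,
                                      unsigned long *gapsize)
    {
        unsigned long long last = 0x100000000ull;    /* start the scan at 4G */
        int i = e820.nr_map;
        int found = 0;                               /* v2: report success to the caller */

        while (--i >= 0) {
            unsigned long long start = e820.map[i].addr;
            unsigned long long end = start + e820.map[i].size;

            /* gap between this entry's end and the lowest start seen so far? */
            if (last > end && last - end > *gapsize) {
                *gapsize = last - end;
                *gapstart = end;
                found = 1;
            }
            if (start < last)
                last = start;
        }
        return found;
    }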
* x86: remove end_pfn in 64bit (Yinghai Lu, 2008-07-08)
  and use max_pfn directly.
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: numa 32 using apicid_2_node to get node for logical_apicid (Yinghai Lu, 2008-07-08)
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: change size of e820_update/remove_range (Yinghai Lu, 2008-07-08)
  In case someone passes a crazy parameter when calling them.
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: add table_top check for alloc_low_page in 64 bit (Yinghai Lu, 2008-07-08)
  That range comes from find_e820_area, so don't try to use end_pfn to check whether it is out of bounds; use table_top instead to avoid possibly strange results when crossing the boundary.
  Also change early_printk to printk, because init_memory_mapping runs after early param parsing, and console=uart8250 is already working at that time.
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: get max_pfn_mapped in init_memory_mapping (Yinghai Lu, 2008-07-08)
  so we don't have to shift it in the loop.
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: fix e820_update_range size when overlapping (Yinghai Lu, 2008-07-08)
  Before this we relied on sanitize_e820_map to remove the overlap, but e820_update_range(,, E820_RESERVED, E820_RAM) will not work. This patch fixes that.
  Who is going to use this?
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: introduce init_memory_mapping for 32bit #3 (Yinghai Lu, 2008-07-08)
  Move the kva-related early setup back to initmem_init for numa32.
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: introduce init_memory_mapping for 32bit #2 (Yinghai Lu, 2008-07-08)
  Move relocate_initrd earlier.
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: introduce init_memory_mapping for 32bit #1 (Yinghai Lu, 2008-07-08)
  ... so we can use memory below max_low_pfn earlier.
  This allows us to move several functions earlier instead of waiting until after paging_init. That includes moving relocate_initrd() earlier in the bootup, and the kva-related early setup done in initmem_init (in follow-up patches).
  Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: unify mmu_context.h (Jeremy Fitzhardinge, 2008-07-08)
  Some amount of asm-x86/mmu_context.h can be unified, including the activate_mm paravirt hook.
  Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Cc: xen-devel <xen-devel@lists.xensource.com>
  Cc: Stephen Tweedie <sct@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>