author		Tejun Heo <tj@kernel.org>	2011-03-04 04:26:36 -0500
committer	Tejun Heo <tj@kernel.org>	2011-03-04 04:26:36 -0500
commit		f89112502805c1f6a6955f90ad158e538edb319d
tree		c8fff8bdf2a2297e92e78b8661c9e8b405a7b304	/arch/x86/mm/numa_64.c
parent		eb8c1e2c830fc25c93bc94e215ed387fe142a98d
x86-64, NUMA: Revert NUMA affine page table allocation
This patch reverts the NUMA-affine page table allocation added by
commit 1411e0ec31 (x86-64, numa: Put pgtable to local node memory).

That commit made an undocumented change whereby the kernel linear
mapping strictly follows the intersection of the e820 memory map and
the NUMA configuration. If the physical memory configuration has
holes or the NUMA nodes are not properly aligned, this forces the use
of unnecessarily small mapping sizes, which increases TLB pressure.
For details, see:

  http://thread.gmane.org/gmane.linux.kernel/1104672
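To make the effect concrete, here is a toy userspace sketch (not
kernel code; the constants and the tlb_entries() helper are invented
for illustration, and 1GiB pages are ignored) that counts the TLB
entries needed to map a range with 2MiB pages where alignment allows
and 4KiB pages at the unaligned edges:

/*
 * Toy illustration: splitting the linear mapping at every e820 hole
 * or NUMA node boundary creates more unaligned edges and therefore
 * many more 4KiB mappings.
 */
#include <stdio.h>
#include <stdint.h>

#define SZ_4K	4096ULL
#define SZ_2M	(2ULL << 20)

static uint64_t tlb_entries(uint64_t start, uint64_t end)
{
	/* Largest 2MiB-aligned region inside [start, end). */
	uint64_t big_start = (start + SZ_2M - 1) & ~(SZ_2M - 1);
	uint64_t big_end = end & ~(SZ_2M - 1);

	if (big_start >= big_end)		/* no room for 2MiB pages */
		return (end - start) / SZ_4K;

	return (big_start - start) / SZ_4K +	/* 4KiB head */
	       (big_end - big_start) / SZ_2M +	/* 2MiB body */
	       (end - big_end) / SZ_4K;		/* 4KiB tail */
}

int main(void)
{
	/* Hypothetical node boundary 4KiB past the 512MiB mark. */
	uint64_t split = (512ULL << 20) + SZ_4K;
	uint64_t whole = tlb_entries(0, 1ULL << 30);
	uint64_t parts = tlb_entries(0, split) +
			 tlb_entries(split, 1ULL << 30);

	printf("one mapping of 1GiB:     %llu entries\n",
	       (unsigned long long)whole);	/* 512 */
	printf("split at unaligned edge: %llu entries\n",
	       (unsigned long long)parts);	/* 1023 */
	return 0;
}

Mapping the 1GiB range in one piece needs 512 entries; splitting it at
one unaligned boundary roughly doubles that, because the edges around
the split fall back to 4KiB pages.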
Patches to fix the problem have been proposed, but the underlying
code needs more cleanup and the approach itself seems a bit
heavy-handed, so the feature is being reverted for now, to be
revisited in the next development cycle:

  http://thread.gmane.org/gmane.linux.kernel/1105959
As the init_memory_mapping_high() call sites have been consolidated
since the original commit, the revert is done manually. Also, the
RED-PEN comment in arch/x86/mm/init.c is not restored, as the problem
no longer exists with memblock-based top-down early memory allocation.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Diffstat (limited to 'arch/x86/mm/numa_64.c')
 arch/x86/mm/numa_64.c | 2 --
 1 file changed, 2 deletions(-)
diff --git a/arch/x86/mm/numa_64.c b/arch/x86/mm/numa_64.c
index 74064e8ae79..86491ba568d 100644
--- a/arch/x86/mm/numa_64.c
+++ b/arch/x86/mm/numa_64.c
@@ -543,8 +543,6 @@ static int __init numa_register_memblks(struct numa_meminfo *mi)
 	if (!numa_meminfo_cover_memory(mi))
 		return -EINVAL;
 
-	init_memory_mapping_high();
-
 	/* Finally register nodes. */
 	for_each_node_mask(nid, node_possible_map) {
 		u64 start = (u64)max_pfn << PAGE_SHIFT;