author     Yinghai Lu <yinghai@kernel.org>        2013-06-13 16:17:01 -0400
committer  H. Peter Anvin <hpa@linux.intel.com>   2013-06-18 12:32:02 -0400
commit     d8d386c10630d8f7837700f4c466443d49e12cc0 (patch)
tree       227534edaf07e34e7613c2847674fb0fd8b6c01a /arch/x86
parent     7d132055814ef17a6c7b69f342244c410a5e000f (diff)
x86, mtrr: Fix original mtrr range get for mtrr_cleanup
Joshua reported: Commit cd7b304dfaf1 (x86, range: fix missing merge
during add range) broke mtrr cleanup on his setup in 3.9.5.
The corresponding upstream commit is fbe06b7bae7c.
*BAD*gran_size: 64K chunk_size: 16M num_reg: 6 lose cover RAM: -0G
https://bugzilla.kernel.org/show_bug.cgi?id=59491
As a result, the new var mtrr layout is rejected.
It turns out there is a problem with the initial mtrr range retrieval.
The current sequence is:
x86_get_mtrr_mem_range
    ==> a bunch of add_range_with_merge calls
    ==> a bunch of subtract_range calls
    ==> clean_sort_range
add_range_with_merge for [0, 1M)
sort_range()
add_range_with_merge can leave blank slots behind, so sorting alone is
not enough: the final result ends up with an extra blank slot at the
head of the array.
So move the add_range_with_merge call for [0, 1M) in front of
x86_get_mtrr_mem_range; with that, the extra clean_sort_range call can
be avoided.
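To see why, here is a minimal stand-alone sketch of the failure mode
(not kernel code: the simplified struct range, the cmp_range helper and
the sample values are made up for illustration) showing that an array
which still contains an emptied slot must be cleaned, not merely sorted:

    /* blank_slot.c - build with: gcc -o blank_slot blank_slot.c */
    #include <stdio.h>
    #include <stdlib.h>

    struct range { unsigned long long start, end; };

    /* Order by start address, as a plain sort would. */
    static int cmp_range(const void *a, const void *b)
    {
            const struct range *ra = a, *rb = b;

            if (ra->start < rb->start)
                    return -1;
            if (ra->start > rb->start)
                    return 1;
            return 0;
    }

    int main(void)
    {
            /* One slot was emptied by a merge: start == end == 0. */
            struct range r[] = {
                    { 0x100000, 0x200000 }, /* [1M, 2M) */
                    { 0,        0        }, /* blank slot left behind */
                    { 0x200000, 0x400000 }, /* [2M, 4M) */
            };
            int i;

            qsort(r, 3, sizeof(r[0]), cmp_range);

            /*
             * The blank slot sorts to index 0 because its start is 0,
             * so the head of the array is no longer a real range;
             * this is why the slot has to be discarded, not just sorted.
             */
            for (i = 0; i < 3; i++)
                    printf("range[%d] = [%llx, %llx)\n",
                           i, r[i].start, r[i].end);
            return 0;
    }

With the reordered sequence, [0, 1M) is merged into a still-empty array
first, and the merges done inside x86_get_mtrr_mem_range are followed by
its own clean_sort_range, so no blank slot is left at the head.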
Reported-by: Joshua Covington <joshuacov@googlemail.com>
Tested-by: Joshua Covington <joshuacov@googlemail.com>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1371154622-8929-2-git-send-email-yinghai@kernel.org
Cc: <stable@vger.kernel.org> v3.9
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Diffstat (limited to 'arch/x86')
-rw-r--r--  arch/x86/kernel/cpu/mtrr/cleanup.c  8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/mtrr/cleanup.c b/arch/x86/kernel/cpu/mtrr/cleanup.c
index 35ffda5d0727..5f90b85ff22e 100644
--- a/arch/x86/kernel/cpu/mtrr/cleanup.c
+++ b/arch/x86/kernel/cpu/mtrr/cleanup.c
@@ -714,15 +714,15 @@ int __init mtrr_cleanup(unsigned address_bits)
 	if (mtrr_tom2)
 		x_remove_size = (mtrr_tom2 >> PAGE_SHIFT) - x_remove_base;
 
-	nr_range = x86_get_mtrr_mem_range(range, 0, x_remove_base, x_remove_size);
 	/*
 	 * [0, 1M) should always be covered by var mtrr with WB
 	 * and fixed mtrrs should take effect before var mtrr for it:
 	 */
-	nr_range = add_range_with_merge(range, RANGE_NUM, nr_range, 0,
+	nr_range = add_range_with_merge(range, RANGE_NUM, 0, 0,
 					1ULL<<(20 - PAGE_SHIFT));
-	/* Sort the ranges: */
-	sort_range(range, nr_range);
+	/* add from var mtrr at last */
+	nr_range = x86_get_mtrr_mem_range(range, nr_range,
+					  x_remove_base, x_remove_size);
 
 	range_sums = sum_ranges(range, nr_range);
 	printk(KERN_INFO "total RAM covered: %ldM\n",