author     Suresh Siddha <suresh.b.siddha@intel.com>    2009-09-16 17:28:03 -0400
committer  H. Peter Anvin <hpa@zytor.com>               2009-09-17 17:07:58 -0400
commit     dcb73bf402e0d5b28ce925dbbe4dab3b00b21eee (patch)
tree       954629665661e5dfa763e32436736ee91fe7ba21 /arch/x86/mm
parent     fa526d0d641b5365676a1fb821ce359e217c9b85 (diff)
x86, pat: don't use rb-tree based lookup in reserve_memtype()
The recent enhancement of the rb-tree based lookup exposed a bug in the lookup
mechanism in reserve_memtype(), which ensures that there are no conflicting
memtype requests for the memory range.
memtype_rb_search() returns an entry whose start address is <= the new start
address, and from there we traverse the linear linked list to check whether there
are any conflicts with the existing mappings. As the rbtree is keyed on the
start address of the memory range, it is quite possible that there are several
overlapping mappings whose start address is much less than the new requested start
but whose end is >= the new requested end. This allows conflicting memtype
mappings to go undetected.
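To make the failure mode concrete, here is a minimal user-space sketch (plain C
with hypothetical addresses, not the kernel code) of how a lookup that only walks
forward from the entry with the largest start <= the new start can miss an
overlapping range, while a scan of the whole list catches it:

#include <stdio.h>

struct range { unsigned long start, end; };

/* Existing tracked ranges, sorted by start (hypothetical values). */
static const struct range tracked[] = {
	{ 0x1000, 0x9000 },	/* long mapping that spans the new request */
	{ 0x2000, 0x3000 },
};
#define NR_TRACKED (sizeof(tracked) / sizeof(tracked[0]))

static int overlaps(const struct range *r, unsigned long s, unsigned long e)
{
	return r->start < e && s < r->end;
}

int main(void)
{
	unsigned long new_start = 0x4000, new_end = 0x5000;
	size_t i, first = 0;

	/* Predecessor-by-start lookup: last entry with start <= new_start. */
	for (i = 0; i < NR_TRACKED; i++)
		if (tracked[i].start <= new_start)
			first = i;

	/* Walking forward from that entry never revisits tracked[0] ... */
	for (i = first; i < NR_TRACKED; i++)
		if (overlaps(&tracked[i], new_start, new_end))
			printf("partial walk: conflict with entry %zu\n", i);

	/* ... while scanning the whole list (the fix below) finds the overlap. */
	for (i = 0; i < NR_TRACKED; i++)
		if (overlaps(&tracked[i], new_start, new_end))
			printf("full walk: conflict with entry %zu\n", i);

	return 0;
}

Here only the full walk reports the conflict with the [0x1000, 0x9000) entry.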
The same bug exists in the old code, which uses cached_entry as the point from
which we traverse the linear linked list. But the new rb-tree code exposes the
bug fairly easily.
For now, don't use memtype_rb_search() and always start the search from the
head of the linear linked list in reserve_memtype(). On most systems the linear
linked list grows to only a few tens of entries (as we track the memory type
of RAM pages using struct page), so we should be OK for now.
We still retain the rbtree and use it to speed up free_memtype(), which doesn't
have the same bug (as we know exactly what we are searching for
in free_memtype()).
Also use list_for_each_entry_from() in free_memtype() so that we start
the search from the rb-tree lookup result.
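As a side note on the iterator semantics assumed here (the list helpers in
include/linux/list.h): list_for_each_entry_from() starts the walk at the entry
it is handed, whereas list_for_each_entry_continue() starts at the entry after
it. Since the rb-tree lookup in free_memtype() can return the exact entry being
freed, the walk must include that entry, roughly as in this sketch (how 'entry'
is seeded is elided here):

	saved_entry = entry;		/* 'entry' seeded by the rb-tree lookup */
	list_for_each_entry_from(entry, &memtype_list, nd) {
		if (entry->start == start && entry->end == end) {
			rb_erase(&entry->rb, &memtype_rbroot);
			list_del(&entry->nd);
			/* ... free the entry and stop the walk ... */
		}
	}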
Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
LKML-Reference: <1253136483.4119.12.camel@sbs-t61.sc.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Diffstat (limited to 'arch/x86/mm')
-rw-r--r--  arch/x86/mm/pat.c  12
1 file changed, 2 insertions, 10 deletions
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index d2a72abc9de1..9b647f679389 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -424,17 +424,9 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type,
 
 	spin_lock(&memtype_lock);
 
-	entry = memtype_rb_search(&memtype_rbroot, new->start);
-	if (likely(entry != NULL)) {
-		/* To work correctly with list_for_each_entry_continue */
-		entry = list_entry(entry->nd.prev, struct memtype, nd);
-	} else {
-		entry = list_entry(&memtype_list, struct memtype, nd);
-	}
-
 	/* Search for existing mapping that overlaps the current range */
 	where = NULL;
-	list_for_each_entry_continue(entry, &memtype_list, nd) {
+	list_for_each_entry(entry, &memtype_list, nd) {
 		if (end <= entry->start) {
 			where = entry->nd.prev;
 			break;
@@ -532,7 +524,7 @@ int free_memtype(u64 start, u64 end)
 	 * in sorted start address
 	 */
 	saved_entry = entry;
-	list_for_each_entry(entry, &memtype_list, nd) {
+	list_for_each_entry_from(entry, &memtype_list, nd) {
 		if (entry->start == start && entry->end == end) {
 			rb_erase(&entry->rb, &memtype_rbroot);
 			list_del(&entry->nd);