author		Dan Williams <dan.j.williams@intel.com>		2016-03-09 17:08:10 -0500
committer	Linus Torvalds <torvalds@linux-foundation.org>	2016-03-09 18:43:42 -0500
commit		d77a117e6871ff78a06def46583d23752593de60 (patch)
tree		79bf0536897d69bcca3832dd77f4ac1ce6ec034e /kernel/memremap.c
parent		06b241f32c711d7ca868a0351dd97fe91fd8817b (diff)
list: kill list_force_poison()
Given that uninitialized list_heads are passed to list_add(), it is
inevitable that their leftover contents will occasionally match the poison
value and trigger a false warning, especially since a list_add() operation
seeds the stack with the poison value for later stack allocations to trip
over. A minimal sketch of this failure mode follows the two reports below.
For example, see these two false positive reports:
list_add attempted on force-poisoned entry
WARNING: at lib/list_debug.c:34
[..]
NIP [c00000000043c390] __list_add+0xb0/0x150
LR [c00000000043c38c] __list_add+0xac/0x150
Call Trace:
__list_add+0xac/0x150 (unreliable)
__down+0x4c/0xf8
down+0x68/0x70
xfs_buf_lock+0x4c/0x150 [xfs]
list_add attempted on force-poisoned entry(0000000000000500),
new->next == d0000000059ecdb0, new->prev == 0000000000000500
WARNING: at lib/list_debug.c:33
[..]
NIP [c00000000042db78] __list_add+0xa8/0x140
LR [c00000000042db74] __list_add+0xa4/0x140
Call Trace:
__list_add+0xa4/0x140 (unreliable)
rwsem_down_read_failed+0x6c/0x1a0
down_read+0x58/0x60
xfs_log_commit_cil+0x7c/0x600 [xfs]
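Both traces show the same pattern: __down() and rwsem_down_read_failed()
build a waiter on the stack and hand its never-initialized list_head straight
to list_add(). The userspace sketch below is illustrative only (struct waiter
and the simplified list_add() are stand-ins, not the kernel's own code); it
shows why a check on the incoming contents of "new" can only ever match
leftover stack garbage:

/*
 * Illustrative sketch: list_add() only writes new->next/new->prev, so callers
 * may legitimately pass an uninitialized entry, as the on-stack waiters in
 * the traces above do.
 */
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static void list_add(struct list_head *new, struct list_head *head)
{
	/* "new" is only written, never read */
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

struct waiter {
	struct list_head list;	/* deliberately left uninitialized */
	int task_id;
};

int main(void)
{
	struct list_head wait_list = LIST_HEAD_INIT(wait_list);
	struct waiter w;	/* on-stack, as in __down()/rwsem_down_read_failed() */

	w.task_id = 1;
	list_add(&w.list, &wait_list);	/* fine: w.list is never read, only overwritten */
	printf("queued: %d\n", wait_list.next == &w.list);
	return 0;
}

Since list_add() never inspects its argument, callers are under no obligation
to initialize it, which is why the force-poison check is dropped rather than
"fixed".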
Fixes: commit 5c2c2587b132 ("mm, dax, pmem: introduce {get|put}_dev_pagemap() for dax-gup")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reported-by: Eryu Guan <eguan@redhat.com>
Tested-by: Eryu Guan <eguan@redhat.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'kernel/memremap.c')
-rw-r--r--	kernel/memremap.c	| 9
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/kernel/memremap.c b/kernel/memremap.c
index b981a7b023f0..778191e3e887 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -351,8 +351,13 @@ void *devm_memremap_pages(struct device *dev, struct resource *res,
 	for_each_device_pfn(pfn, page_map) {
 		struct page *page = pfn_to_page(pfn);
 
-		/* ZONE_DEVICE pages must never appear on a slab lru */
-		list_force_poison(&page->lru);
+		/*
+		 * ZONE_DEVICE pages union ->lru with a ->pgmap back
+		 * pointer.  It is a bug if a ZONE_DEVICE page is ever
+		 * freed or placed on a driver-private list.  Seed the
+		 * storage with LIST_POISON* values.
+		 */
+		list_del(&page->lru);
 		page->pgmap = pgmap;
 	}
 	devres_add(dev, page_map);
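For reference, the replacement leans on the poisoning that an ordinary
list_del() already performs. The self-contained sketch below is modeled on
include/linux/list.h; the poison constants are stand-ins for
LIST_POISON1/LIST_POISON2 from include/linux/poison.h, and it assumes the
memmap initialization has already pointed the page's ->lru at itself (as
INIT_LIST_HEAD() does):

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

/* Stand-ins for the kernel's LIST_POISON1/LIST_POISON2. */
#define LIST_POISON1 ((struct list_head *)0x100)
#define LIST_POISON2 ((struct list_head *)0x200)

static void INIT_LIST_HEAD(struct list_head *list)
{
	list->next = list;
	list->prev = list;
}

/* Modeled on list_del(): unlink the entry, then poison both pointers. */
static void list_del(struct list_head *entry)
{
	entry->next->prev = entry->prev;
	entry->prev->next = entry->next;
	entry->next = LIST_POISON1;
	entry->prev = LIST_POISON2;
}

int main(void)
{
	struct list_head lru;	/* stands in for a ZONE_DEVICE page's ->lru */

	INIT_LIST_HEAD(&lru);	/* self-pointing, so the unlink below is a no-op */
	list_del(&lru);
	printf("next=%p prev=%p\n", (void *)lru.next, (void *)lru.prev);
	return 0;
}

The net effect is that the ->lru storage is seeded with LIST_POISON* values
before ->pgmap is written over the same union, so a later attempt to free the
page or place it on a driver-private list trips the regular list debugging
checks, with no need for a dedicated list_force_poison() helper.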