author		Paul Mundt <lethal@linux-sh.org>	2008-06-12 03:29:55 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2008-06-12 10:56:17 -0400
commit		5a1603be58f11edb1b30cb1e40cfbdd4439289d0
tree		822e2711385da42bd3b53472b74ee99963913915 /mm
parent		f969c5672b16b857e5231ad3c78f08d8ef3305aa
nommu: Correct kobjsize() page validity checks.
This implements a few changes on top of the recent kobjsize() refactoring
introduced by commit 6cfd53fc03670c7a544a56d441eb1a6cc800d72b.
As Christoph points out:
	virt_to_head_page cannot return NULL. virt_to_page also
	does not return NULL. pfn_valid() needs to be used to
	figure out if a page is valid. Otherwise the page struct
	reference that was returned may have PageReserved() set
	to indicate that it is not a valid page.
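To make the point concrete, here is a minimal sketch of the long-hand check being described; the objp_is_valid() helper is hypothetical, but pfn_valid(), pfn_to_page() and PageReserved() are the real interfaces in question:

	/*
	 * Hypothetical helper, for illustration only: virt_to_page()
	 * never returns NULL, so validity has to be established with
	 * pfn_valid() before the returned page struct can be trusted,
	 * and a PageReserved() page is not a valid object to size up.
	 */
	static int objp_is_valid(const void *objp)
	{
		unsigned long pfn = __pa(objp) >> PAGE_SHIFT;

		if (!pfn_valid(pfn))
			return 0;

		return !PageReserved(pfn_to_page(pfn));
	}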
As discussed further in the thread, virt_addr_valid() is the preferable
way to validate the object pointer in this case. In addition to fixing
up the reserved page case, it also has the benefit of encapsulating the
hack introduced by commit 4016a1390d07f15b267eecb20e76a48fd5c524ef on
the impacted platforms, allowing us to get rid of the extra checking in
kobjsize() for the platforms that don't perform this type of bizarre
memory_end abuse (every nommu platform that isn't blackfin). If blackfin
decides to get in line with every other platform and use PageReserved
for the DMA pages in question, kobjsize() will also continue to work
fine.
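To sketch what that encapsulation looks like (illustrative only, not copied from any arch header), a nommu platform can hide its own notion of a valid kernel address, memory_end oddities included, behind virt_addr_valid():

	/*
	 * Illustrative nommu-style definition: a platform that plays
	 * blackfin-style memory_end games can fold that test in here,
	 * so generic code like kobjsize() never has to know about it.
	 */
	#define virt_addr_valid(kaddr)				\
		((void *)(kaddr) >= (void *)PAGE_OFFSET &&	\
		 (void *)(kaddr) < (void *)memory_end)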
It also turns out that compound_order() returns an order of 0 for non-head
pages (and virt_to_head_page() hands us a non-head page for any non-compound
allocation), so we can get rid of the PageCompound() check and just use
compound_order() directly. Clean that up while we're at it.
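This falls out of how compound_order() is defined (paraphrased from include/linux/mm.h of the era; the leading PageHead() test is the part that matters here):

	static inline int compound_order(struct page *page)
	{
		if (!PageHead(page))	/* not a compound head: order 0 */
			return 0;
		return (unsigned long)page[1].lru.prev;
	}

So PAGE_SIZE << compound_order(page) collapses to PAGE_SIZE for an ordinary page, which is exactly the 0-order default the removed branch was open-coding.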
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Reviewed-by: Christoph Lameter <clameter@sgi.com>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--	mm/nommu.c	21
1 file changed, 3 insertions(+), 18 deletions(-)
diff --git a/mm/nommu.c b/mm/nommu.c
index 3abd0845bda..4462b6a3fcb 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -104,21 +104,15 @@ EXPORT_SYMBOL(vmtruncate);
 unsigned int kobjsize(const void *objp)
 {
 	struct page *page;
-	int order = 0;
 
 	/*
 	 * If the object we have should not have ksize performed on it,
 	 * return size of 0
 	 */
-	if (!objp)
-		return 0;
-
-	if ((unsigned long)objp >= memory_end)
+	if (!objp || !virt_addr_valid(objp))
 		return 0;
 
 	page = virt_to_head_page(objp);
-	if (!page)
-		return 0;
 
 	/*
 	 * If the allocator sets PageSlab, we know the pointer came from
@@ -129,18 +123,9 @@ unsigned int kobjsize(const void *objp)
 
 	/*
 	 * The ksize() function is only guaranteed to work for pointers
-	 * returned by kmalloc(). So handle arbitrary pointers, that we expect
-	 * always to be compound pages, here.
-	 */
-	if (PageCompound(page))
-		order = compound_order(page);
-
-	/*
-	 * Finally, handle arbitrary pointers that don't set PageSlab.
-	 * Default to 0-order in the case when we're unable to ksize()
-	 * the object.
+	 * returned by kmalloc(). So handle arbitrary pointers here.
 	 */
-	return PAGE_SIZE << order;
+	return PAGE_SIZE << compound_order(page);
 }
 
 /*