author    Daniel Mentz <danielmentz@google.com>            2016-10-27 20:46:59 -0400
committer Linus Torvalds <torvalds@linux-foundation.org>  2016-10-27 21:43:43 -0400
commit    62e931fac45b17c2a42549389879411572f75804 (patch)
tree      a66bc11e84beee712de9fd2d4711d722db0edd80
parent    89a2848381b5fcd9c4d9c0cd97680e3b28730e31 (diff)
lib/genalloc.c: start search from start of chunk
gen_pool_alloc_algo() iterates over the chunks of a pool trying to find
a contiguous block of memory that satisfies the allocation request.
The shortcut

	if (size > atomic_read(&chunk->avail))
		continue;

makes the loop skip over chunks that do not have enough bytes left to
fulfill the request. There are two situations, though, where an
allocation might still fail:
(1) The available memory is not contiguous, i.e. the request cannot
be fulfilled due to external fragmentation.
(2) A race condition. Another thread runs the same code concurrently
and is quicker to grab the available memory.
In those situations, the loop calls pool->algo() to search the entire
chunk, and pool->algo() returns some value that is >= end_bit to
indicate that the search failed. This return value is then assigned to
start_bit. The variables start_bit and end_bit describe the range that
should be searched, and this range should be reset for every chunk that
is searched. Today, however, the code fails to reset start_bit to 0. As a
result, prefixes of subsequent chunks are ignored, and memory allocations
might fail even though there is plenty of room left in those skipped
prefixes.
Fixes: 7f184275aa30 ("lib, Make gen_pool memory allocator lockless")
Link: http://lkml.kernel.org/r/1477420604-28918-1-git-send-email-danielmentz@google.com
Signed-off-by: Daniel Mentz <danielmentz@google.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
 lib/genalloc.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/lib/genalloc.c b/lib/genalloc.c
index 0a1139644d32..144fe6b1a03e 100644
--- a/lib/genalloc.c
+++ b/lib/genalloc.c
@@ -292,7 +292,7 @@ unsigned long gen_pool_alloc_algo(struct gen_pool *pool, size_t size,
 	struct gen_pool_chunk *chunk;
 	unsigned long addr = 0;
 	int order = pool->min_alloc_order;
-	int nbits, start_bit = 0, end_bit, remain;
+	int nbits, start_bit, end_bit, remain;
 
 #ifndef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
 	BUG_ON(in_nmi());
@@ -307,6 +307,7 @@ unsigned long gen_pool_alloc_algo(struct gen_pool *pool, size_t size,
 		if (size > atomic_read(&chunk->avail))
 			continue;
 
+		start_bit = 0;
 		end_bit = chunk_size(chunk) >> order;
 retry:
 		start_bit = algo(chunk->bits, end_bit, start_bit,