author     Sean O. Stalley <sean.stalley@intel.com>          2015-09-08 18:02:24 -0400
committer  Linus Torvalds <torvalds@linux-foundation.org>    2015-09-08 18:35:28 -0400
commit     fa23f56d90ed7bd760ae2aea6dfb2f501a099e90 (patch)
tree       3c747a07b47354302ada3df61943552ce09815f8
parent     c54839a722a02818677bcabe57e957f0ce4f841d (diff)
mm: add support for __GFP_ZERO flag to dma_pool_alloc()
Currently a call to dma_pool_alloc() with the __GFP_ZERO flag returns a
non-zeroed memory region.
This patchset adds support for the __GFP_ZERO flag to dma_pool_alloc(),
adds 2 wrapper functions for allocating zeroed memory from a pool, and
provides a coccinelle script for finding & replacing instances of
dma_pool_alloc() followed by memset(0) with a single dma_pool_zalloc()
call.
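For reference, the pattern the coccinelle script rewrites looks roughly
like this (buf, pool, dma_handle, and pool_size are illustrative names,
not taken from any particular driver):

    /* before: allocate from the pool, then zero by hand */
    buf = dma_pool_alloc(pool, GFP_KERNEL, &dma_handle);
    if (buf)
        memset(buf, 0, pool_size);

    /* after: a single call that returns zeroed memory */
    buf = dma_pool_zalloc(pool, GFP_KERNEL, &dma_handle);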
There was some concern that this approach always zeroes the memory with
memset(), instead of passing __GFP_ZERO down to the page allocator.
[https://lkml.org/lkml/2015/7/15/881]
I ran a test on my system to get an idea of how often dma_pool_alloc()
calls into pool_alloc_page().
After boot:           [   30.119863] alloc_calls:541, page_allocs:7
After an hour:        [ 3600.951031] alloc_calls:9566, page_allocs:12
After copying a 1GB file onto a USB drive:
                      [ 4260.657148] alloc_calls:17225, page_allocs:12
It doesn't look like dma_pool_alloc() calls down to the page allocator
very often (at least on my system).
This patch (of 4):
Currently the __GFP_ZERO flag is ignored by dma_pool_alloc().
Make dma_pool_alloc() zero the memory if this flag is set.
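With this change a caller can request a zeroed block directly. A minimal
sketch of the intended usage (pool, dma_handle, and buf are illustrative
names, not taken from any particular driver):

    /* the returned block is zeroed because __GFP_ZERO is set */
    buf = dma_pool_alloc(pool, GFP_KERNEL | __GFP_ZERO, &dma_handle);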
Signed-off-by: Sean O. Stalley <sean.stalley@intel.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Gilles Muller <Gilles.Muller@lip6.fr>
Cc: Nicolas Palix <nicolas.palix@imag.fr>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
 mm/dmapool.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 4b657099111f..71a8998cd03a 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -337,7 +337,7 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 	/* pool_alloc_page() might sleep, so temporarily drop &pool->lock */
 	spin_unlock_irqrestore(&pool->lock, flags);
 
-	page = pool_alloc_page(pool, mem_flags);
+	page = pool_alloc_page(pool, mem_flags & (~__GFP_ZERO));
 	if (!page)
 		return NULL;
 
@@ -375,9 +375,14 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 			break;
 		}
 	}
-	memset(retval, POOL_POISON_ALLOCATED, pool->size);
+	if (!(mem_flags & __GFP_ZERO))
+		memset(retval, POOL_POISON_ALLOCATED, pool->size);
 #endif
 	spin_unlock_irqrestore(&pool->lock, flags);
+
+	if (mem_flags & __GFP_ZERO)
+		memset(retval, 0, pool->size);
+
 	return retval;
 }
 EXPORT_SYMBOL(dma_pool_alloc);
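For context, the dma_pool_zalloc() wrapper added later in this series is
essentially a thin inline around dma_pool_alloc(); roughly:

    static inline void *dma_pool_zalloc(struct dma_pool *pool,
                                        gfp_t mem_flags, dma_addr_t *handle)
    {
        /* set __GFP_ZERO so dma_pool_alloc() returns zeroed memory */
        return dma_pool_alloc(pool, mem_flags | __GFP_ZERO, handle);
    }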