author     Linus Torvalds <torvalds@linux-foundation.org>   2013-07-14 18:14:29 -0400
committer  Linus Torvalds <torvalds@linux-foundation.org>   2013-07-14 18:14:29 -0400
commit     54be8200198ddfc6cb396720460c19881fac2d5a (patch)
tree       58ccab6e0cfb35b30e7e16804f15fe9c94628f12 /mm/slob.c
parent     41d9884c44237cd66e2bdbc412028b29196b344c (diff)
parent     c25f195e828f847735c7626b5693ddc3b853d245 (diff)
Merge branch 'slab/for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/linux
Pull slab update from Pekka Enberg:
"Highlights:
- Fix for boot-time problems on some architectures due to
init_lock_keys() not respecting kmalloc_caches boundaries
(Christoph Lameter)
- CONFIG_SLUB_CPU_PARTIAL requested by RT folks (Joonsoo Kim)
- Fix for excessive slab freelist draining (Wanpeng Li)
- SLUB and SLOB cleanups and fixes (various people)"
I ended up editing the branch; this avoids two commits at the end that
were immediately reverted, and I instead just applied the one-liner fix
in between myself.
* 'slab/for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/linux
slub: Check for page NULL before doing the node_match check
mm/slab: Give s_next and s_stop slab-specific names
slob: Check for NULL pointer before calling ctor()
slub: Make cpu partial slab support configurable
slab: add kmalloc() to kernel API documentation
slab: fix init_lock_keys
slob: use DIV_ROUND_UP where possible
slub: do not put a slab to cpu partial list when cpu_partial is 0
mm/slub: Use node_nr_slabs and node_nr_objs in get_slabinfo
mm/slub: Drop unnecessary nr_partials
mm/slab: Fix /proc/slabinfo unwriteable for slab
mm/slab: Sharing s_next and s_stop between slab and slub
mm/slab: Fix drain freelist excessively
slob: Rework #ifdeffery in slab.h
mm, slab: moved kmem_cache_alloc_node comment to correct place
Diffstat (limited to 'mm/slob.c')
-rw-r--r--   mm/slob.c   4
1 file changed, 2 insertions, 2 deletions
@@ -122,7 +122,7 @@ static inline void clear_slob_page_free(struct page *sp)
 }
 
 #define SLOB_UNIT sizeof(slob_t)
-#define SLOB_UNITS(size) (((size) + SLOB_UNIT - 1)/SLOB_UNIT)
+#define SLOB_UNITS(size) DIV_ROUND_UP(size, SLOB_UNIT)
 
 /*
  * struct slob_rcu is inserted at the tail of allocated slob blocks, which
@@ -554,7 +554,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 					    flags, node);
 	}
 
-	if (c->ctor)
+	if (b && c->ctor)
 		c->ctor(b);
 
 	kmemleak_alloc_recursive(b, c->size, 1, c->flags, flags);
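
The first hunk replaces the open-coded round-up in SLOB_UNITS with DIV_ROUND_UP, which the kernel defines in include/linux/kernel.h as the same ((n) + (d) - 1) / (d) expression, so the change is purely a readability cleanup. A minimal userspace sketch of the equivalence follows; the DIV_ROUND_UP definition is copied in so it builds outside the kernel, and SLOB_UNIT is hard-coded to 8 bytes only as an assumption for the demo (in SLOB it is sizeof(slob_t)).

#include <assert.h>
#include <stdio.h>

/* Same definition the kernel uses for DIV_ROUND_UP. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Demo assumption: pretend a slob_t is 8 bytes; in the kernel this is
 * sizeof(slob_t). */
#define SLOB_UNIT 8

/* SLOB_UNITS before and after the patch. */
#define SLOB_UNITS_OLD(size) (((size) + SLOB_UNIT - 1) / SLOB_UNIT)
#define SLOB_UNITS_NEW(size) DIV_ROUND_UP(size, SLOB_UNIT)

int main(void)
{
	unsigned long size;

	/* Both macros compute the same ceiling division for every size. */
	for (size = 1; size <= 64; size++)
		assert(SLOB_UNITS_OLD(size) == SLOB_UNITS_NEW(size));

	printf("SLOB_UNITS(13) = %lu\n", (unsigned long)SLOB_UNITS_NEW(13));
	return 0;
}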
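
The second hunk is the "slob: Check for NULL pointer before calling ctor()" change: the allocation above it can fail and leave b == NULL, and unconditionally calling c->ctor(b) would hand that NULL to a constructor that expects a valid object. The sketch below illustrates the guard with hypothetical names (my_cache, my_ctor, my_cache_alloc); it mimics the pattern and is not the kernel's code.

#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for struct kmem_cache and a constructor;
 * names and layout are illustrative only. */
struct my_cache {
	size_t size;
	void (*ctor)(void *obj);
};

static void my_ctor(void *obj)
{
	/* A typical constructor initializes the object in place and
	 * assumes obj is valid -- passing NULL here would crash. */
	memset(obj, 0, sizeof(long));
}

static void *my_cache_alloc(struct my_cache *c)
{
	void *b = malloc(c->size);	/* may fail and return NULL */

	/* Mirrors the fix: run the constructor only when the
	 * allocation actually succeeded. */
	if (b && c->ctor)
		c->ctor(b);

	return b;	/* caller still has to handle NULL */
}

int main(void)
{
	struct my_cache c = { .size = sizeof(long), .ctor = my_ctor };
	void *obj = my_cache_alloc(&c);

	free(obj);
	return 0;
}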