path: root/include/linux/slub_def.h
author		Christoph Lameter <clameter@sgi.com>	2008-02-14 17:28:01 -0500
committer	Christoph Lameter <christoph@stapp.engr.sgi.com>	2008-02-14 18:30:01 -0500
commit		71c7a06ff0a2ba0434ace4d7aa679537c4211d9d (patch)
tree		99f5a2a5e27eee88f9917d207e2849aac3ba7e62 /include/linux/slub_def.h
parent		b7a49f0d4c34166ae84089d9f145cfaae1b0eec5 (diff)
slub: Fallback to kmalloc_large for failing higher order allocs
Slub already has two ways of allocating an object. One is via its own logic
and the other is via the call to kmalloc_large to hand off object allocation
to the page allocator. kmalloc_large is typically used for objects >= PAGE_SIZE.

We can use that handoff to avoid failing if a higher order kmalloc slab
allocation cannot be satisfied by the page allocator. If we reach the out of
memory path then simply try a kmalloc_large(). kfree() can already handle the
case of an object that was allocated via the page allocator and so this will
work just fine (apart from object accounting...).

For any kmalloc slab that already requires higher order allocs (which makes
it impossible to use the page allocator fastpath!) we just use
PAGE_ALLOC_COSTLY_ORDER to get the largest number of objects in one go from
the page allocator slowpath.

On a 4k platform this patch will lead to the following use of higher order
pages for the following kmalloc slabs:

    8 ... 1024      order 0
    2048 .. 4096    order 3 (4k slab only after the next patch)

We may waste some space if fallback occurs on a 2k slab but we are always
able to fallback to an order 0 alloc.

Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Christoph Lameter <clameter@sgi.com>
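A minimal sketch of the fallback idea described above, not the actual hunk
from this commit: alloc_slab_object() is a hypothetical stand-in for the
slab's own higher order allocation path in mm/slub.c, while kmalloc_large()
and s->objsize are the real interfaces named in the message.

    /*
     * Illustrative sketch only. If the slab's own (possibly higher
     * order) allocation fails, hand the request to the page allocator
     * via kmalloc_large() instead of returning NULL. kfree() already
     * handles objects that came from the page allocator.
     */
    static void *slab_alloc_or_fallback(struct kmem_cache *s, gfp_t gfpflags)
    {
    	void *object = alloc_slab_object(s, gfpflags);	/* hypothetical helper */

    	if (object)
    		return object;

    	/*
    	 * Out of memory path: the object still fits into a single
    	 * page, so an order 0 allocation through kmalloc_large() can
    	 * succeed even when the higher order allocation did not.
    	 */
    	return kmalloc_large(s->objsize, gfpflags);
    }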
Diffstat (limited to 'include/linux/slub_def.h')
0 files changed, 0 insertions, 0 deletions