| Commit message | Author | Date |
* | SLUB: killed the unused "end" variable | Denis Cheng | 2007-11-12 |
* | SLUB: Fix memory leak by not reusing cpu_slab | Christoph Lameter | 2007-11-05 |
* | missing atomic_read_long() in slub.c | Al Viro | 2007-10-29 |
* | memory hotplug: make kmem_cache_node for SLUB on memory online avoid panic | Yasunori Goto | 2007-10-22 |
* | Slab API: remove useless ctor parameter and reorder parameters | Christoph Lameter | 2007-10-17 |
* | SLUB: simplify IRQ off handling | Christoph Lameter | 2007-10-17 |
* | slub: list_locations() can use GFP_TEMPORARY | Andrew Morton | 2007-10-16 |
* | SLUB: Optimize cacheline use for zeroing | Christoph Lameter | 2007-10-16 |
* | SLUB: Place kmem_cache_cpu structures in a NUMA aware way | Christoph Lameter | 2007-10-16 |
* | SLUB: Avoid touching page struct when freeing to per cpu slab | Christoph Lameter | 2007-10-16 |
* | SLUB: Move page->offset to kmem_cache_cpu->offset | Christoph Lameter | 2007-10-16 |
* | SLUB: Do not use page->mapping | Christoph Lameter | 2007-10-16 |
* | SLUB: Avoid page struct cacheline bouncing due to remote frees to cpu slab | Christoph Lameter | 2007-10-16 |
* | Group short-lived and reclaimable kernel allocations | Mel Gorman | 2007-10-16 |
* | Categorize GFP flags | Christoph Lameter | 2007-10-16 |
* | Memoryless nodes: SLUB support | Christoph Lameter | 2007-10-16 |
* | Slab allocators: fail if ksize is called with a NULL parameter | Christoph Lameter | 2007-10-16 |
* | {slub, slob}: use unlikely() for kfree(ZERO_OR_NULL_PTR) check | Satyam Sharma | 2007-10-16 |
* | SLUB: direct pass through of page size or higher kmalloc requests | Christoph Lameter | 2007-10-16 |
* | slub.c:early_kmem_cache_node_alloc() shouldn't be __init | Adrian Bunk | 2007-10-16 |
* | SLUB: accurately compare debug flags during slab cache merge | Christoph Lameter | 2007-09-11 |
* | slub: do not fail if we cannot register a slab with sysfs | Christoph Lameter | 2007-08-31 |
* | SLUB: do not fail on broken memory configurations | Christoph Lameter | 2007-08-22 |
* | SLUB: use atomic_long_read for atomic_long variables | Christoph Lameter | 2007-08-22 |
* | SLUB: Fix dynamic dma kmalloc cache creation | Christoph Lameter | 2007-08-10 |
* | SLUB: Remove checks for MAX_PARTIAL from kmem_cache_shrink | Christoph Lameter | 2007-08-10 |
* | slub: fix bug in slub debug support | Peter Zijlstra | 2007-07-30 |
* | slub: add lock debugging check | Peter Zijlstra | 2007-07-30 |
* | mm: Remove slab destructors from kmem_cache_create(). | Paul Mundt | 2007-07-19 |
* | slub: fix ksize() for zero-sized pointers | Linus Torvalds | 2007-07-19 |
* | SLUB: Fix CONFIG_SLUB_DEBUG use for CONFIG_NUMA | Christoph Lameter | 2007-07-17 |
* | SLUB: Move sysfs operations outside of slub_lock | Christoph Lameter | 2007-07-17 |
* | SLUB: Do not allocate object bit array on stack | Christoph Lameter | 2007-07-17 |
* | Slab allocators: Cleanup zeroing allocations | Christoph Lameter | 2007-07-17 |
* | SLUB: Do not use length parameter in slab_alloc() | Christoph Lameter | 2007-07-17 |
* | SLUB: Style fix up the loop to disable small slabs | Christoph Lameter | 2007-07-17 |
* | mm/slub.c: make code static | Adrian Bunk | 2007-07-17 |
* | SLUB: Simplify dma index -> size calculation | Christoph Lameter | 2007-07-17 |
* | SLUB: faster more efficient slab determination for __kmalloc | Christoph Lameter | 2007-07-17 |
* | SLUB: do proper locking during dma slab creation | Christoph Lameter | 2007-07-17 |
* | SLUB: extract dma_kmalloc_cache from get_cache. | Christoph Lameter | 2007-07-17 |
* | SLUB: add some more inlines and #ifdef CONFIG_SLUB_DEBUG | Christoph Lameter | 2007-07-17 |
* | Slab allocators: support __GFP_ZERO in all allocators | Christoph Lameter | 2007-07-17 |
* | Slab allocators: consistent ZERO_SIZE_PTR support and NULL result semantics | Christoph Lameter | 2007-07-17 |
* | Slab allocators: consolidate code for krealloc in mm/util.c | Christoph Lameter | 2007-07-17 |
* | SLUB Debug: fix initial object debug state of NUMA bootstrap objects | Christoph Lameter | 2007-07-17 |
* | SLUB: ensure that the number of objects per slab stays low for high orders | Christoph Lameter | 2007-07-17 |
* | SLUB slab validation: Move tracking information alloc outside of lock | Christoph Lameter | 2007-07-17 |
* | SLUB: use list_for_each_entry for loops over all slabs | Christoph Lameter | 2007-07-17 |
* | SLUB: change error reporting format to follow lockdep loosely | Christoph Lameter | 2007-07-17 |