author    Joonsoo Kim <js1304@gmail.com>      2012-05-16 11:13:02 -0400
committer Pekka Enberg <penberg@kernel.org>  2012-05-18 05:23:36 -0400
commit    02d7633fa567be7bf55a993b79d2a31b95ce2227 (patch)
tree      31df09df6c946f9b8edc909648e0a1c33dae794c /mm
parent    4053497d6a37715f4b20dcc180a52717b4c8ffba (diff)
slub: fix a memory leak in get_partial_node()
In the following case,
1. acquire a slab for the cpu partial list
2. a remote cpu frees an object to it
3. get_partial_node() writes back the stale snapshot: page->freelist = t
a memory leak occurs: the object freed in step 2 is dropped when the
snapshot taken before the remote free is stored over it in step 3.
Change acquire_slab() not to zap the freelist when it acquires a slab for
the cpu partial list; the unlocked page->freelist store in
get_partial_node() then becomes unnecessary and is removed. I think this
is a sufficient fix for the memory leak.
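To make the losing interleaving concrete, below is a minimal user-space
C model of the pre-patch behaviour. It is a sketch, not kernel code: the
fake_page struct, the variable names, and the sequential ordering stand
in for the real slub internals and the actual cmpxchg-based race window.

#include <stdio.h>

struct object { struct object *next; };
struct fake_page { struct object *freelist; };

int main(void)
{
	struct object remote_obj = { .next = NULL };
	struct fake_page page = { .freelist = NULL };

	/* acquire_slab() (pre-patch): snapshot the freelist, then zap it
	 * (done with __cmpxchg_double_slab() in the real code). */
	struct object *t = page.freelist;
	page.freelist = NULL;

	/* the race window: a remote cpu frees an object to the slab */
	remote_obj.next = page.freelist;
	page.freelist = &remote_obj;

	/* get_partial_node() (pre-patch): restore the stale snapshot,
	 * silently overwriting the remote free. */
	page.freelist = t;

	/* the remotely freed object is now unreachable: it has leaked */
	printf("freelist = %p, lost object at %p\n",
	       (void *)page.freelist, (void *)&remote_obj);
	return 0;
}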
Below is the output of 'slabinfo -r kmalloc-256'
after running './perf stat -r 30 hackbench 50 process 4000 > /dev/null'.
***Vanilla***
Sizes (bytes)     Slabs              Debug                Memory
------------------------------------------------------------------------
Object :     256  Total  :     468   Sanity Checks : Off  Total: 3833856
SlabObj:     256  Full   :     111   Redzoning     : Off  Used : 2004992
SlabSiz:    8192  Partial:     302   Poisoning     : Off  Loss : 1828864
Loss   :       0  CpuSlab:      55   Tracking      : Off  Lalig:       0
Align  :       8  Objects:      32   Tracing       : Off  Lpadd:       0
***Patched***
Sizes (bytes)     Slabs              Debug                Memory
------------------------------------------------------------------------
Object :     256  Total  :     300   Sanity Checks : Off  Total: 2457600
SlabObj:     256  Full   :     204   Redzoning     : Off  Used : 2348800
SlabSiz:    8192  Partial:      33   Poisoning     : Off  Loss :  108800
Loss   :       0  CpuSlab:      63   Tracking      : Off  Lalig:       0
Align  :       8  Objects:      32   Tracing       : Off  Lpadd:       0
The Total and Loss numbers show the impact of this patch: total memory
drops from 3833856 to 2457600 bytes, and Loss from 1828864 to 108800 bytes.
Cc: <stable@vger.kernel.org>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Joonsoo Kim <js1304@gmail.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
Diffstat (limited to 'mm')
 mm/slub.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1514,15 +1514,19 @@ static inline void *acquire_slab(struct kmem_cache *s,
 		freelist = page->freelist;
 		counters = page->counters;
 		new.counters = counters;
-		if (mode)
+		if (mode) {
 			new.inuse = page->objects;
+			new.freelist = NULL;
+		} else {
+			new.freelist = freelist;
+		}
 
 		VM_BUG_ON(new.frozen);
 		new.frozen = 1;
 
 	} while (!__cmpxchg_double_slab(s, page,
 			freelist, counters,
-			NULL, new.counters,
+			new.freelist, new.counters,
 			"lock and freeze"));
 
 	remove_partial(n, page);
@@ -1564,7 +1568,6 @@ static void *get_partial_node(struct kmem_cache *s,
 			object = t;
 			available = page->objects - page->inuse;
 		} else {
-			page->freelist = t;
 			available = put_cpu_partial(s, page, 0);
 			stat(s, CPU_PARTIAL_NODE);
 		}
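For reference, this is the acquire_slab() retry loop as it reads after
the patch, reassembled from the hunk above (declarations and the rest of
the function body are elided). new.freelist now carries either NULL (cpu
slab acquisition) or the untouched freelist (cpu partial acquisition)
into the cmpxchg, so the freelist is never zapped and later restored:

	do {
		freelist = page->freelist;
		counters = page->counters;
		new.counters = counters;
		if (mode) {
			/* cpu slab: take all objects, zap the freelist */
			new.inuse = page->objects;
			new.freelist = NULL;
		} else {
			/* cpu partial list: leave the freelist in place */
			new.freelist = freelist;
		}

		VM_BUG_ON(new.frozen);
		new.frozen = 1;

	} while (!__cmpxchg_double_slab(s, page,
			freelist, counters,
			new.freelist, new.counters,
			"lock and freeze"));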