author    Christoph Lameter <clameter@sgi.com>  2008-02-07 20:47:41 -0500
committer Christoph Lameter <christoph@stapp.engr.sgi.com>  2008-02-07 20:47:41 -0500
commit    8ff12cfc009a2a38d87fa7058226fe197bb2696f (patch)
tree      1358ed247d3c897d8790342a978dd5078354a207 /lib
parent    1f84260c8ce3b1ce26d4c1d6dedc2f33a3a29c0c (diff)
SLUB: Support for performance statistics
The statistics provided here allow the monitoring of allocator behavior at
the cost of a minimal loss of performance. Counters are placed in SLUB's
per cpu data structure. The statistics may grow the per cpu structure
beyond one cacheline, which will increase the cache footprint of SLUB.

There is a compile option to enable/disable the inclusion of the runtime
statistics; it is off by default.
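To make this concrete, here is a minimal sketch of how such per cpu
counters can be wired up under the compile option. CONFIG_SLUB_STATS is the
real option from the diff below; the other identifiers (enum stat_item, the
stat() helper, the field layout) are illustrative assumptions, not
necessarily the patch's exact code:

    /* Sketch only: counter names and helper are illustrative. */
    enum stat_item {
            ALLOC_FASTPATH,         /* allocation from the cpu slab */
            ALLOC_SLOWPATH,         /* allocation took the slow path */
            FREE_FASTPATH,          /* free to the cpu slab */
            FREE_SLOWPATH,          /* free took the slow path */
            NR_SLUB_STAT_ITEMS
    };

    struct kmem_cache_cpu {
            void **freelist;        /* ... existing per cpu state ... */
    #ifdef CONFIG_SLUB_STATS
            unsigned stat[NR_SLUB_STAT_ITEMS];
    #endif
    };

    static inline void stat(struct kmem_cache_cpu *c, enum stat_item si)
    {
    #ifdef CONFIG_SLUB_STATS
            c->stat[si]++;          /* cpu local, so no locking needed */
    #endif
    }

Because the counters are cpu local and the helper compiles to nothing when
the option is off, the cost when enabled is a few increments per
allocation/free, which matches the "minimal loss of performance" above.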
The slabinfo tool is enhanced to support these statistics via three options:

-D      Switches the line of information displayed for a slab from size
        mode to activity mode.

-A      Sorts the slabs displayed by activity. This allows the display of
        the slabs most important to the performance of a certain load.

-r      Reports detailed statistics for a slab (implied when a cache name
        is given on the command line).
Example (tbench load):
slabinfo -AD    -> shows the most active slabs

Name                 Objects     Alloc      Free  %Fast
skbuff_fclone_cache       33 111953835 111953835  99 99
:0000192                2666   5283688   5281047  99 99
:0001024                 849   5247230   5246389  83 83
vm_area_struct          1349    119642    118355  91 22
:0004096                  15     66753     66751  98 98
:0000064                2067     25297     23383  98 78
dentry                 10259     28635     18464  91 45
:0000080               11004     18950      8089  98 98
:0000096                1703     12358     10784  99 98
:0000128                 762     10582      9875  94 18
:0000512                 184      9807      9647  95 81
:0002048                 479      9669      9195  83 65
anon_vma                 777      9461      9002  99 71
kmalloc-8               6492      9981      5624  99 97
:0000768                 258      7174      6931  58 15
So skbuff_fclone_cache is of highest importance for the tbench load, and
the load on the 192-byte slab is pretty high as well. Look up the aliases
behind it:

slabinfo -a | grep 000192
:0000192 <- xfs_btree_cur filp kmalloc-192 uid_cache tw_sock_TCP
            request_sock_TCPv6 tw_sock_TCPv6 skbuff_head_cache xfs_ili

Likely skbuff_head_cache.
The statistics of skbuff_fclone_cache can be examined with:

slabinfo skbuff_fclone_cache    -> -r option implied when a cache name is given

.... usual output ....
Slab Perf Counter        Alloc      Free  %Al %Fr
--------------------------------------------------
Fastpath             111953360 111946981   99  99
Slowpath                  1044      7423    0   0
Page Alloc                 272       264    0   0
Add partial                 25       325    0   0
Remove partial              86       264    0   0
RemoteObj/SlabFrozen       350      4832    0   0
Total                111954404 111954404

Flushes 49 Refill 0
Deactivate Full=325(92%) Empty=0(0%) ToHead=24(6%) ToTail=1(0%)
Looks good because the fastpath is overwhelmingly taken.
skbuff_head_cache:
Slab Perf Counter        Alloc      Free  %Al %Fr
--------------------------------------------------
Fastpath               5297262   5259882   99  99
Slowpath                  4477     39586    0   0
Page Alloc                 937       824    0   0
Add partial                  0      2515    0   0
Remove partial            1691       824    0   0
RemoteObj/SlabFrozen      2621      9684    0   0
Total                  5301739   5299468

Deactivate Full=2620(100%) Empty=0(0%) ToHead=0(0%) ToTail=0(0%)
Descriptions of the output:

Total:           The total number of allocations and frees that occurred
                 for a slab.

Fastpath:        The number of allocations/frees that used the fastpath.

Slowpath:        Other allocations/frees.

Page Alloc:      Number of calls to the page allocator as a result of
                 slowpath processing.

Add Partial:     Number of slabs added to the partial list through free
                 or alloc (occurs during cpuslab flushes).

Remove Partial:  Number of slabs removed from the partial list as a result
                 of allocations retrieving a partial slab, or of a free
                 freeing the last object of a slab.

RemoteObj/SlabFrozen:
                 RemoteObj: how many times remotely freed objects were
                 encountered when a slab was about to be deactivated.
                 SlabFrozen: how many times a free could skip list
                 processing because the slab was in use as the cpuslab of
                 another processor.

Flushes:         Number of times the cpuslab was flushed on request
                 (kmem_cache_shrink; may result from races in
                 __slab_alloc).

Refill:          Number of times we were able to refill the cpuslab from
                 remotely freed objects for the same slab.

Deactivate:      Statistics on how slabs were deactivated, i.e. how they
                 were put onto the partial list.
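slabinfo obtains these numbers from sysfs. As a minimal standalone reader,
assuming a kernel built with the statistics option and per-cache files such
as /sys/kernel/slab/<cache>/alloc_fastpath (the exact file names are an
assumption of this sketch, mirroring the counter names above):

    /* Sketch: read SLUB statistics counters for one cache from sysfs. */
    #include <stdio.h>

    static long read_stat(const char *cache, const char *name)
    {
            char path[256];
            long val = -1;
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/kernel/slab/%s/%s", cache, name);
            f = fopen(path, "r");
            if (!f)
                    return -1;      /* no such file: stats likely off */
            if (fscanf(f, "%ld", &val) != 1)
                    val = -1;
            fclose(f);
            return val;
    }

    int main(void)
    {
            const char *cache = "skbuff_fclone_cache";

            printf("alloc_fastpath %ld\n", read_stat(cache, "alloc_fastpath"));
            printf("alloc_slowpath %ld\n", read_stat(cache, "alloc_slowpath"));
            return 0;
    }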
In general, a high fastpath share is very good. Slowpath activity without
partial list processing is also acceptable. Any touching of the partial
list uses node specific locks, which may potentially cause list lock
contention.
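The reason becomes visible if you sketch where the counters sit in the
allocation path: the fastpath increment happens while only cpu local state
is touched, whereas the partial list counters can only be reached after
taking the node's list_lock. All function and field names below are
illustrative stand-ins, not the patch's exact code:

    /* Sketch of counter placement in allocation (names illustrative). */
    static void *slab_alloc_sketch(struct kmem_cache *s, gfp_t gfpflags)
    {
            struct kmem_cache_cpu *c = get_cpu_slab(s, smp_processor_id());
            void **object = c->freelist;

            if (object) {
                    c->freelist = object[c->offset];  /* next free object */
                    stat(c, ALLOC_FASTPATH);          /* cpu local only */
                    return object;
            }

            stat(c, ALLOC_SLOWPATH);
            /*
             * The slowpath may refill from the node's partial list; only
             * there is list_lock taken, which is the contention point the
             * Add/Remove partial counters make visible.
             */
            return __slab_alloc_sketch(s, gfpflags, c);
    }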
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Diffstat (limited to 'lib')

-rw-r--r--  lib/Kconfig.debug  13

1 file changed, 13 insertions, 0 deletions
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 0d385be682db..4f4008fc73e4 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -205,6 +205,19 @@ config SLUB_DEBUG_ON
 	  off in a kernel built with CONFIG_SLUB_DEBUG_ON by specifying
 	  "slub_debug=-".
 
+config SLUB_STATS
+	default n
+	bool "Enable SLUB performance statistics"
+	depends on SLUB
+	help
+	  SLUB statistics are useful to debug SLUBs allocation behavior in
+	  order find ways to optimize the allocator. This should never be
+	  enabled for production use since keeping statistics slows down
+	  the allocator by a few percentage points. The slabinfo command
+	  supports the determination of the most active slabs to figure
+	  out which slabs are relevant to a particular load.
+	  Try running: slabinfo -DA
+
 config DEBUG_PREEMPT
 	bool "Debug preemptible kernel"
 	depends on DEBUG_KERNEL && PREEMPT && (TRACE_IRQFLAGS_SUPPORT || PPC64)