author     Andrea Bastoni <bastoni@cs.unc.edu>  2010-05-30 19:16:45 -0400
committer  Andrea Bastoni <bastoni@cs.unc.edu>  2010-05-30 19:16:45 -0400
commit     ada47b5fe13d89735805b566185f4885f5a3f750 (patch)
tree       644b88f8a71896307d71438e9b3af49126ffb22b /tools/perf/Documentation
parent     43e98717ad40a4ae64545b5ba047c7b86aa44f4f (diff)
parent     3280f21d43ee541f97f8cda5792150d2dbec20d5 (diff)

Merge branch 'wip-2.6.34' into old-private-master (archived-private-master)
Diffstat (limited to 'tools/perf/Documentation')
-rw-r--r--  tools/perf/Documentation/Makefile                   4
-rw-r--r--  tools/perf/Documentation/perf-archive.txt          22
-rw-r--r--  tools/perf/Documentation/perf-bench.txt           120
-rw-r--r--  tools/perf/Documentation/perf-buildid-cache.txt    33
-rw-r--r--  tools/perf/Documentation/perf-buildid-list.txt     34
-rw-r--r--  tools/perf/Documentation/perf-diff.txt             55
-rw-r--r--  tools/perf/Documentation/perf-kmem.txt             47
-rw-r--r--  tools/perf/Documentation/perf-lock.txt             29
-rw-r--r--  tools/perf/Documentation/perf-probe.txt           129
-rw-r--r--  tools/perf/Documentation/perf-record.txt           16
-rw-r--r--  tools/perf/Documentation/perf-report.txt           12
-rw-r--r--  tools/perf/Documentation/perf-timechart.txt         5
-rw-r--r--  tools/perf/Documentation/perf-top.txt               2
-rw-r--r--  tools/perf/Documentation/perf-trace-perl.txt      219
-rw-r--r--  tools/perf/Documentation/perf-trace-python.txt    625
-rw-r--r--  tools/perf/Documentation/perf-trace.txt            49
-rw-r--r--  tools/perf/Documentation/perf.txt                   2
17 files changed, 1389 insertions(+), 14 deletions(-)
diff --git a/tools/perf/Documentation/Makefile b/tools/perf/Documentation/Makefile
index bdd3b7ecad0a..bd498d496952 100644
--- a/tools/perf/Documentation/Makefile
+++ b/tools/perf/Documentation/Makefile
@@ -24,7 +24,10 @@ DOC_MAN1=$(patsubst %.txt,%.1,$(MAN1_TXT))
 DOC_MAN5=$(patsubst %.txt,%.5,$(MAN5_TXT))
 DOC_MAN7=$(patsubst %.txt,%.7,$(MAN7_TXT))
 
+# Make the path relative to DESTDIR, not prefix
+ifndef DESTDIR
 prefix?=$(HOME)
+endif
 bindir?=$(prefix)/bin
 htmldir?=$(prefix)/share/doc/perf-doc
 pdfdir?=$(prefix)/share/doc/perf-doc
@@ -32,7 +35,6 @@ mandir?=$(prefix)/share/man
 man1dir=$(mandir)/man1
 man5dir=$(mandir)/man5
 man7dir=$(mandir)/man7
-# DESTDIR=
 
 ASCIIDOC=asciidoc
 ASCIIDOC_EXTRA = --unsafe
diff --git a/tools/perf/Documentation/perf-archive.txt b/tools/perf/Documentation/perf-archive.txt
new file mode 100644
index 000000000000..fae174dc7d01
--- /dev/null
+++ b/tools/perf/Documentation/perf-archive.txt
@@ -0,0 +1,22 @@
1perf-archive(1)
2===============
3
4NAME
5----
6perf-archive - Create an archive of object files with build-ids found in a perf.data file
7
8SYNOPSIS
9--------
10[verse]
11'perf archive' [file]
12
13DESCRIPTION
14-----------
15This command runs perf-buildid-list --with-hits, and collects the files
16with the build-ids found so that analysis of perf.data contents is possible
17on another machine.
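
A typical workflow might look like the following (the recording command is only an
example, and the exact name of the archive that 'perf archive' writes can vary
between perf versions; it is normally unpacked into ~/.debug on the target machine):

----
# perf record -a sleep 10        # record a system-wide profile into perf.data
# perf archive perf.data         # bundle the DSOs whose build-ids were hit
# copy perf.data plus the generated archive to the other machine,
# unpack the archive there, then run 'perf report' as usual
----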
18
19
20SEE ALSO
21--------
22linkperf:perf-record[1], linkperf:perf-buildid-list[1], linkperf:perf-report[1]
diff --git a/tools/perf/Documentation/perf-bench.txt b/tools/perf/Documentation/perf-bench.txt
new file mode 100644
index 000000000000..ae525ac5a2ce
--- /dev/null
+++ b/tools/perf/Documentation/perf-bench.txt
@@ -0,0 +1,120 @@
1perf-bench(1)
2============
3
4NAME
5----
6perf-bench - General framework for benchmark suites
7
8SYNOPSIS
9--------
10[verse]
11'perf bench' [<common options>] <subsystem> <suite> [<options>]
12
13DESCRIPTION
14-----------
15This 'perf bench' command is a general framework for benchmark suites.
16
17COMMON OPTIONS
18--------------
19-f::
20--format=::
21Specify format style.
22Currently available format styles are:
23
24'default'::
25Default style. This is mainly for human reading.
26---------------------
27% perf bench sched pipe # with no style specified
28(executing 1000000 pipe operations between two tasks)
29 Total time:5.855 sec
30 5.855061 usecs/op
31 170792 ops/sec
32---------------------
33
34'simple'::
35This simple style is friendly for automated
36processing by scripts.
37---------------------
38% perf bench --format=simple sched pipe # specified simple
395.988
40---------------------
41
42SUBSYSTEM
43---------
44
45'sched'::
46 Scheduler and IPC mechanisms.
47
48SUITES FOR 'sched'
49~~~~~~~~~~~~~~~~~~
50*messaging*::
51Suite for evaluating performance of scheduler and IPC mechanisms.
52Based on hackbench by Rusty Russell.
53
54Options of *messaging*
55^^^^^^^^^^^^^^^^^^^^^^
56-p::
57--pipe::
58Use pipe() instead of socketpair()
59
60-t::
61--thread::
62Be multi-threaded instead of multi-process
63
64-g::
65--group=::
66Specify number of groups
67
68-l::
69--loop=::
70Specify number of loops
71
72Example of *messaging*
73^^^^^^^^^^^^^^^^^^^^^^
74
75---------------------
76% perf bench sched messaging # run with default
77options (20 sender and receiver processes per group)
78(10 groups == 400 processes run)
79
80 Total time:0.308 sec
81
82% perf bench sched messaging -t -g 20 # be multi-thread, with 20 groups
83(20 sender and receiver threads per group)
84(20 groups == 800 threads run)
85
86 Total time:0.582 sec
87---------------------
88
89*pipe*::
90Suite for pipe() system call.
91Based on pipe-test-1m.c by Ingo Molnar.
92
93Options of *pipe*
94^^^^^^^^^^^^^^^^^
95-l::
96--loop=::
97Specify number of loops.
98
99Example of *pipe*
100^^^^^^^^^^^^^^^^^
101
102---------------------
103% perf bench sched pipe
104(executing 1000000 pipe operations between two tasks)
105
106 Total time:8.091 sec
107 8.091833 usecs/op
108 123581 ops/sec
109
110% perf bench sched pipe -l 1000 # loop 1000
111(executing 1000 pipe operations between two tasks)
112
113 Total time:0.016 sec
114 16.948000 usecs/op
115 59004 ops/sec
116---------------------
117
118SEE ALSO
119--------
120linkperf:perf[1]
diff --git a/tools/perf/Documentation/perf-buildid-cache.txt b/tools/perf/Documentation/perf-buildid-cache.txt
new file mode 100644
index 000000000000..88bc3b519746
--- /dev/null
+++ b/tools/perf/Documentation/perf-buildid-cache.txt
@@ -0,0 +1,33 @@
1perf-buildid-cache(1)
2=====================
3
4NAME
5----
6perf-buildid-cache - Manage build-id cache.
7
8SYNOPSIS
9--------
10[verse]
11'perf buildid-cache <options>'
12
13DESCRIPTION
14-----------
15This command manages the build-id cache. It can add files to and remove files
16from the cache. In the future it should also purge older entries, set upper
17limits for the space used by the cache, etc.
18
19OPTIONS
20-------
21-a::
22--add=::
23 Add specified file to the cache.
24-r::
25--remove=::
26 Remove specified file from the cache.
27-v::
28--verbose::
29 Be more verbose.
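
For example, a binary can be added to and later removed from the cache like this
(the path is only a placeholder for whatever DSO you want cached):

----
# perf buildid-cache --add /path/to/my-binary
# perf buildid-cache --remove /path/to/my-binary
----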
30
31SEE ALSO
32--------
33linkperf:perf-record[1], linkperf:perf-report[1]
diff --git a/tools/perf/Documentation/perf-buildid-list.txt b/tools/perf/Documentation/perf-buildid-list.txt
new file mode 100644
index 000000000000..01b642c0bf8f
--- /dev/null
+++ b/tools/perf/Documentation/perf-buildid-list.txt
@@ -0,0 +1,34 @@
1perf-buildid-list(1)
2====================
3
4NAME
5----
6perf-buildid-list - List the buildids in a perf.data file
7
8SYNOPSIS
9--------
10[verse]
11'perf buildid-list <options>'
12
13DESCRIPTION
14-----------
15This command displays the buildids found in a perf.data file, so that other
16tools can be used to fetch packages with matching symbol tables for use by
17perf report.
18
19OPTIONS
20-------
21-i::
22--input=::
23 Input file name. (default: perf.data)
24-f::
25--force::
26 Don't do ownership validation.
27-v::
28--verbose::
29 Be more verbose.
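
For instance, to list the build-ids recorded in the default perf.data file (each
output line pairs a build-id with the DSO it was found in):

----
# perf buildid-list -i perf.data
----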
30
31SEE ALSO
32--------
33linkperf:perf-record[1], linkperf:perf-top[1],
34linkperf:perf-report[1]
diff --git a/tools/perf/Documentation/perf-diff.txt b/tools/perf/Documentation/perf-diff.txt
new file mode 100644
index 000000000000..8974e208cba6
--- /dev/null
+++ b/tools/perf/Documentation/perf-diff.txt
@@ -0,0 +1,55 @@
1perf-diff(1)
2==============
3
4NAME
5----
6perf-diff - Read two perf.data files and display the differential profile
7
8SYNOPSIS
9--------
10[verse]
11'perf diff' [oldfile] [newfile]
12
13DESCRIPTION
14-----------
15This command displays the performance difference between two perf.data files
16captured via perf record.
17
18If no parameters are passed it will assume perf.data.old and perf.data.
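
For example, a before/after comparison could be captured and displayed as follows
(./myworkload is only a placeholder for the command being profiled):

----
# perf record ./myworkload       # baseline run, writes perf.data
# mv perf.data perf.data.old
# perf record ./myworkload       # run again after making changes
# perf diff                      # compares perf.data.old with perf.data
----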
19
20OPTIONS
21-------
22-d::
23--dsos=::
24 Only consider symbols in these dsos. CSV that understands
25 file://filename entries.
26
27-C::
28--comms=::
29 Only consider symbols in these comms. CSV that understands
30 file://filename entries.
31
32-S::
33--symbols=::
34 Only consider these symbols. CSV that understands
35 file://filename entries.
36
37-s::
38--sort=::
39 Sort by key(s): pid, comm, dso, symbol.
40
41-t::
42--field-separator=::
43
44 Use a special separator character and don't pad with spaces, replacing
45 all occurrences of this separator in symbol names (and other output)
46 with a '.' character, so that it is the only non-valid separator.
47
48-v::
49--verbose::
50 Be verbose, for instance, show the raw counts in addition to the
51 diff.
52
53SEE ALSO
54--------
55linkperf:perf-record[1]
diff --git a/tools/perf/Documentation/perf-kmem.txt b/tools/perf/Documentation/perf-kmem.txt
new file mode 100644
index 000000000000..eac4d852e7cd
--- /dev/null
+++ b/tools/perf/Documentation/perf-kmem.txt
@@ -0,0 +1,47 @@
1perf-kmem(1)
2==============
3
4NAME
5----
6perf-kmem - Tool to trace/measure kernel memory (slab) properties
7
8SYNOPSIS
9--------
10[verse]
11'perf kmem' {record|stat} [<options>]
12
13DESCRIPTION
14-----------
15There are two variants of perf kmem:
16
17 'perf kmem record <command>' to record the kmem events
18 of an arbitrary workload.
19
20 'perf kmem stat' to report kernel memory statistics.
21
22OPTIONS
23-------
24-i <file>::
25--input=<file>::
26 Select the input file (default: perf.data)
27
28--caller::
29 Show per-callsite statistics
30
31--alloc::
32 Show per-allocation statistics
33
34-s <key[,key2...]>::
35--sort=<key[,key2...]>::
36 Sort the output (default: frag,hit,bytes)
37
38-l <num>::
39--line=<num>::
40 Print n lines only
41
42--raw-ip::
43 Print raw ip instead of symbol
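
As an example (./myworkload is only a placeholder), kmem events can be recorded
for one run of a command and then summarized per call site:

----
# perf kmem record ./myworkload
# perf kmem stat --caller -l 20   # top 20 call sites (default sort: frag,hit,bytes)
----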
44
45SEE ALSO
46--------
47linkperf:perf-record[1]
diff --git a/tools/perf/Documentation/perf-lock.txt b/tools/perf/Documentation/perf-lock.txt
new file mode 100644
index 000000000000..b317102138c8
--- /dev/null
+++ b/tools/perf/Documentation/perf-lock.txt
@@ -0,0 +1,29 @@
1perf-lock(1)
2============
3
4NAME
5----
6perf-lock - Analyze lock events
7
8SYNOPSIS
9--------
10[verse]
11'perf lock' {record|report|trace}
12
13DESCRIPTION
14-----------
15You can analyze various lock behaviours
16and statistics with this 'perf lock' command.
17
18 'perf lock record <command>' records lock events
19 between start and end <command>. And this command
20 produces the file "perf.data" which contains tracing
21 results of lock events.
22
23 'perf lock trace' shows raw lock events.
24
25 'perf lock report' reports statistical data.
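
A minimal end-to-end example (./myworkload is only a placeholder command):

----
# perf lock record ./myworkload      # writes the lock events to perf.data
# perf lock report                   # summarize the recorded lock statistics
# perf lock trace                    # or dump the raw lock events
----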
26
27SEE ALSO
28--------
29linkperf:perf[1]
diff --git a/tools/perf/Documentation/perf-probe.txt b/tools/perf/Documentation/perf-probe.txt
new file mode 100644
index 000000000000..34202b1be0bb
--- /dev/null
+++ b/tools/perf/Documentation/perf-probe.txt
@@ -0,0 +1,129 @@
1perf-probe(1)
2=============
3
4NAME
5----
6perf-probe - Define new dynamic tracepoints
7
8SYNOPSIS
9--------
10[verse]
11'perf probe' [options] --add='PROBE' [...]
12or
13'perf probe' [options] PROBE
14or
15'perf probe' [options] --del='[GROUP:]EVENT' [...]
16or
17'perf probe' --list
18or
19'perf probe' --line='FUNC[:RLN[+NUM|:RLN2]]|SRC:ALN[+NUM|:ALN2]'
20
21DESCRIPTION
22-----------
23This command defines dynamic tracepoint events, by symbol and registers
24without debuginfo, or by C expressions (C line numbers, C function names,
25and C local variables) with debuginfo.
26
27
28OPTIONS
29-------
30-k::
31--vmlinux=PATH::
32 Specify vmlinux path which has debuginfo (Dwarf binary).
33
34-v::
35--verbose::
36 Be more verbose (show parsed arguments, etc).
37
38-a::
39--add=::
40 Define a probe event (see PROBE SYNTAX for detail).
41
42-d::
43--del=::
44 Delete probe events. This accepts glob wildcards ('*', '?') and character
45 classes (e.g. [a-z], [!A-Z]).
46
47-l::
48--list::
49 List current probe events.
50
51-L::
52--line=::
53 Show source code lines which can be probed. This needs an argument
54 which specifies a range of the source code. (see LINE SYNTAX for detail)
55
56-f::
57--force::
58 Forcibly add events with existing name.
59
60PROBE SYNTAX
61------------
62Probe points are defined by the following syntax:
63
64 1) Define event based on function name
65 [EVENT=]FUNC[@SRC][:RLN|+OFFS|%return|;PTN] [ARG ...]
66
67 2) Define event based on source file with line number
68 [EVENT=]SRC:ALN [ARG ...]
69
70 3) Define event based on source file with lazy pattern
71 [EVENT=]SRC;PTN [ARG ...]
72
73
74'EVENT' specifies the name of the new event; if omitted, it will be set to the name of the probed function. Currently, the event group name is set to 'probe'.
75'FUNC' specifies a probed function name, and it may have one of the following options; '+OFFS' is the offset from function entry address in bytes, ':RLN' is the relative-line number from function entry line, and '%return' means that it probes function return. And ';PTN' means lazy matching pattern (see LAZY MATCHING). Note that ';PTN' must be the end of the probe point definition. In addition, '@SRC' specifies a source file which has that function.
76It is also possible to specify a probe point by the source line number or lazy matching by using 'SRC:ALN' or 'SRC;PTN' syntax, where 'SRC' is the source file path, ':ALN' is the line number and ';PTN' is the lazy matching pattern.
77'ARG' specifies the arguments of this probe point. You can use the name of local variable, or kprobe-tracer argument format (e.g. $retval, %ax, etc).
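
For illustration, here are a few forms built from the syntax above, reusing the
schedule() function from the EXAMPLES section below (the event name 'myprobe' and
the byte offset are arbitrary):

 ./perf probe --add='myprobe=schedule'          # explicit event name
 ./perf probe --add='schedule+8'                # 8 bytes past the function entry
 ./perf probe --add='schedule%return'           # probe the function return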
78
79LINE SYNTAX
80-----------
81A line range is described by the following syntax:
82
83 "FUNC[:RLN[+NUM|:RLN2]]|SRC:ALN[+NUM|:ALN2]"
84
85FUNC specifies the function whose lines should be shown. 'RLN' is the start line
86number relative to the function entry line, and 'RLN2' is the end line number. As in the
87probe syntax, 'SRC' means the source file path, 'ALN' is the start line number,
88and 'ALN2' is the end line number in the file. It is also possible to specify how
89many lines to show by using 'NUM'.
90So, "source.c:100-120" shows lines from the 100th to the 120th in source.c, and "func:10+20" shows 20 lines starting at the 10th line of the func function.
91
92LAZY MATCHING
93-------------
94 The lazy line matching is similar to glob matching, but it ignores spaces in both the pattern and the target. So this accepts wildcards ('*', '?') and character classes (e.g. [a-z], [!A-Z]).
95
96e.g.
97 'a=*' can match 'a=b', 'a = b', 'a == b' and so on.
98
99This provides some flexibility and robustness to probe point definitions against minor code changes. For example, the actual 10th line of schedule() can easily move when schedule() is modified, but a line matching 'rq=cpu_rq*' may still exist in the function.
100
101
102EXAMPLES
103--------
104Display which lines in schedule() can be probed:
105
106 ./perf probe --line schedule
107
108Add a probe on the 12th line of the schedule() function, recording the 'cpu' local variable:
109
110 ./perf probe schedule:12 cpu
111 or
112 ./perf probe --add='schedule:12 cpu'
113
114 This will add one or more probes whose names start with "schedule".
115
116Add probes on lines in the schedule() function which call update_rq_clock():
117
118 ./perf probe 'schedule;update_rq_clock*'
119 or
120 ./perf probe --add='schedule;update_rq_clock*'
121
122Delete all probes on schedule().
123
124 ./perf probe --del='schedule*'
125
126
127SEE ALSO
128--------
129linkperf:perf-trace[1], linkperf:perf-record[1]
diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
index 0ff23de9e453..fc46c0b40f6e 100644
--- a/tools/perf/Documentation/perf-record.txt
+++ b/tools/perf/Documentation/perf-record.txt
@@ -26,11 +26,19 @@ OPTIONS
 
 -e::
 --event=::
-        Select the PMU event. Selection can be a symbolic event name
-        (use 'perf list' to list all events) or a raw PMU
-        event (eventsel+umask) in the form of rNNN where NNN is a
-        hexadecimal event descriptor.
+        Select the PMU event. Selection can be:
 
+        - a symbolic event name (use 'perf list' to list all events)
+
+        - a raw PMU event (eventsel+umask) in the form of rNNN where NNN is a
+          hexadecimal event descriptor.
+
+        - a hardware breakpoint event in the form of '\mem:addr[:access]'
+          where addr is the address in memory you want to break in.
+          Access is the memory access type (read, write, execute); it can
+          be passed as follows: '\mem:addr[:[r][w][x]]'.
+          If you want to profile read-write accesses at 0x1000, just set
+          'mem:0x1000:rw'.
 -a::
         System-wide collection.
 
diff --git a/tools/perf/Documentation/perf-report.txt b/tools/perf/Documentation/perf-report.txt
index 59f0b846cd71..abfabe9147a4 100644
--- a/tools/perf/Documentation/perf-report.txt
+++ b/tools/perf/Documentation/perf-report.txt
@@ -24,11 +24,11 @@ OPTIONS
 --dsos=::
         Only consider symbols in these dsos. CSV that understands
         file://filename entries.
--n
---show-nr-samples
+-n::
+--show-nr-samples::
         Show the number of samples for each symbol
--T
---threads
+-T::
+--threads::
         Show per-thread event counters
 -C::
 --comms=::
@@ -39,6 +39,10 @@ OPTIONS
         Only consider these symbols. CSV that understands
         file://filename entries.
 
+-s::
+--sort=::
+        Sort by key(s): pid, comm, dso, symbol, parent.
+
 -w::
 --field-width=::
         Force each column width to the provided list, for large terminal
diff --git a/tools/perf/Documentation/perf-timechart.txt b/tools/perf/Documentation/perf-timechart.txt
index a7910099d6fd..4b1788355eca 100644
--- a/tools/perf/Documentation/perf-timechart.txt
+++ b/tools/perf/Documentation/perf-timechart.txt
@@ -31,9 +31,12 @@ OPTIONS
 -w::
 --width=::
         Select the width of the SVG file (default: 1000)
--p::
+-P::
 --power-only::
         Only output the CPU power section of the diagram
+-p::
+--process::
+        Select the processes to display, by name or PID
 
 
 SEE ALSO
diff --git a/tools/perf/Documentation/perf-top.txt b/tools/perf/Documentation/perf-top.txt
index 4a7d558dc309..785b9fc32a46 100644
--- a/tools/perf/Documentation/perf-top.txt
+++ b/tools/perf/Documentation/perf-top.txt
@@ -74,7 +74,7 @@ OPTIONS
 
 -s <symbol>::
 --sym-annotate=<symbol>::
-        Annotate this symbol. Requires -k option.
+        Annotate this symbol.
 
 -v::
 --verbose::
diff --git a/tools/perf/Documentation/perf-trace-perl.txt b/tools/perf/Documentation/perf-trace-perl.txt
new file mode 100644
index 000000000000..d729cee8d987
--- /dev/null
+++ b/tools/perf/Documentation/perf-trace-perl.txt
@@ -0,0 +1,219 @@
1perf-trace-perl(1)
2==================
3
4NAME
5----
6perf-trace-perl - Process trace data with a Perl script
7
8SYNOPSIS
9--------
10[verse]
11'perf trace' [-s [Perl]:script[.pl] ]
12
13DESCRIPTION
14-----------
15
16This perf trace option is used to process perf trace data using perf's
17built-in Perl interpreter. It reads and processes the input file and
18displays the results of the trace analysis implemented in the given
19Perl script, if any.
20
21STARTER SCRIPTS
22---------------
23
24You can avoid reading the rest of this document by running 'perf trace
25-g perl' in the same directory as an existing perf.data trace file.
26That will generate a starter script containing a handler for each of
27the event types in the trace file; it simply prints every available
28field for each event in the trace file.
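
For example, assuming a perf.data file is already present in the current
directory, generating and then running such a starter script boils down to:

----
# perf trace -g perl             # should write a perf-trace.pl starter script here
# perf trace -s perf-trace.pl    # run it against the same perf.data
----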
29
30You can also look at the existing scripts in
31~/libexec/perf-core/scripts/perl for typical examples showing how to
32do basic things like aggregate event data, print results, etc. Also,
33the check-perf-trace.pl script, while not interesting for its results,
34attempts to exercise all of the main scripting features.
35
36EVENT HANDLERS
37--------------
38
39When perf trace is invoked using a trace script, a user-defined
40'handler function' is called for each event in the trace. If there's
41no handler function defined for a given event type, the event is
42ignored (or passed to a 'trace_handled' function, see below) and the
43next event is processed.
44
45Most of the event's field values are passed as arguments to the
46handler function; some of the less common ones aren't - those are
47available as calls back into the perf executable (see below).
48
49As an example, the following perf record command can be used to record
50all sched_wakeup events in the system:
51
52 # perf record -c 1 -f -a -M -R -e sched:sched_wakeup
53
54Traces meant to be processed using a script should be recorded with
55the above options: -c 1 says to sample every event, -a to enable
56system-wide collection, -M to multiplex the output, and -R to collect
57raw samples.
58
59The format file for the sched_wakeup event defines the following fields
60(see /sys/kernel/debug/tracing/events/sched/sched_wakeup/format):
61
62----
63 format:
64 field:unsigned short common_type;
65 field:unsigned char common_flags;
66 field:unsigned char common_preempt_count;
67 field:int common_pid;
68 field:int common_lock_depth;
69
70 field:char comm[TASK_COMM_LEN];
71 field:pid_t pid;
72 field:int prio;
73 field:int success;
74 field:int target_cpu;
75----
76
77The handler function for this event would be defined as:
78
79----
80sub sched::sched_wakeup
81{
82 my ($event_name, $context, $common_cpu, $common_secs,
83 $common_nsecs, $common_pid, $common_comm,
84 $comm, $pid, $prio, $success, $target_cpu) = @_;
85}
86----
87
88The handler function takes the form subsystem::event_name.
89
90The $common_* arguments in the handler's argument list are the set of
91arguments passed to all event handlers; some of the fields correspond
92to the common_* fields in the format file, but some are synthesized,
93and some of the common_* fields aren't common enough to to be passed
94to every event as arguments but are available as library functions.
95
96Here's a brief description of each of the invariant event args:
97
98 $event_name the name of the event as text
99 $context an opaque 'cookie' used in calls back into perf
100 $common_cpu the cpu the event occurred on
101 $common_secs the secs portion of the event timestamp
102 $common_nsecs the nsecs portion of the event timestamp
103 $common_pid the pid of the current task
104 $common_comm the name of the current process
105
106All of the remaining fields in the event's format file have
107counterparts as handler function arguments of the same name, as can be
108seen in the example above.
109
110The above provides the basics needed to directly access every field of
111every event in a trace, which covers 90% of what you need to know to
112write a useful trace script. The sections below cover the rest.
113
114SCRIPT LAYOUT
115-------------
116
117Every perf trace Perl script should start by setting up a Perl module
118search path and 'use'ing a few support modules (see module
119descriptions below):
120
121----
122 use lib "$ENV{'PERF_EXEC_PATH'}/scripts/perl/Perf-Trace-Util/lib";
123 use lib "./Perf-Trace-Util/lib";
124 use Perf::Trace::Core;
125 use Perf::Trace::Context;
126 use Perf::Trace::Util;
127----
128
129The rest of the script can contain handler functions and support
130functions in any order.
131
132Aside from the event handler functions discussed above, every script
133can implement a set of optional functions:
134
135*trace_begin*, if defined, is called before any event is processed and
136gives scripts a chance to do setup tasks:
137
138----
139 sub trace_begin
140 {
141 }
142----
143
144*trace_end*, if defined, is called after all events have been
145 processed and gives scripts a chance to do end-of-script tasks, such
146 as display results:
147
148----
149sub trace_end
150{
151}
152----
153
154*trace_unhandled*, if defined, is called for any event that
155 doesn't have a handler explicitly defined for it. The standard set
156 of common arguments is passed into it:
157
158----
159sub trace_unhandled
160{
161 my ($event_name, $context, $common_cpu, $common_secs,
162 $common_nsecs, $common_pid, $common_comm) = @_;
163}
164----
165
166The remaining sections provide descriptions of each of the available
167built-in perf trace Perl modules and their associated functions.
168
169AVAILABLE MODULES AND FUNCTIONS
170-------------------------------
171
172The following sections describe the functions and variables available
173via the various Perf::Trace::* Perl modules. To use the functions and
174variables from the given module, add the corresponding 'use
175Perf::Trace::XXX' line to your perf trace script.
176
177Perf::Trace::Core Module
178~~~~~~~~~~~~~~~~~~~~~~~~
179
180These functions provide some essential functions to user scripts.
181
182The *flag_str* and *symbol_str* functions provide human-readable
183strings for flag and symbolic fields. These correspond to the strings
184and values parsed from the 'print fmt' fields of the event format
185files:
186
187 flag_str($event_name, $field_name, $field_value) - returns the string representation corresponding to $field_value for the flag field $field_name of event $event_name
188 symbol_str($event_name, $field_name, $field_value) - returns the string representation corresponding to $field_value for the symbolic field $field_name of event $event_name
189
190Perf::Trace::Context Module
191~~~~~~~~~~~~~~~~~~~~~~~~~~~
192
193Some of the 'common' fields in the event format file aren't all that
194common, but need to be made accessible to user scripts nonetheless.
195
196Perf::Trace::Context defines a set of functions that can be used to
197access this data in the context of the current event. Each of these
198functions expects a $context variable, which is the same as the
199$context variable passed into every event handler as the second
200argument.
201
202 common_pc($context) - returns common_preempt_count for the current event
203 common_flags($context) - returns common_flags for the current event
204 common_lock_depth($context) - returns common_lock_depth for the current event
205
206Perf::Trace::Util Module
207~~~~~~~~~~~~~~~~~~~~~~~~
208
209Various utility functions for use with perf trace:
210
211 nsecs($secs, $nsecs) - returns total nsecs given secs/nsecs pair
212 nsecs_secs($nsecs) - returns whole secs portion given nsecs
213 nsecs_nsecs($nsecs) - returns nsecs remainder given nsecs
214 nsecs_str($nsecs) - returns printable string in the form secs.nsecs
215 avg($total, $n) - returns average given a sum and a total number of values
216
217SEE ALSO
218--------
219linkperf:perf-trace[1]
diff --git a/tools/perf/Documentation/perf-trace-python.txt b/tools/perf/Documentation/perf-trace-python.txt
new file mode 100644
index 000000000000..a241aca77184
--- /dev/null
+++ b/tools/perf/Documentation/perf-trace-python.txt
@@ -0,0 +1,625 @@
1perf-trace-python(1)
2==================
3
4NAME
5----
6perf-trace-python - Process trace data with a Python script
7
8SYNOPSIS
9--------
10[verse]
11'perf trace' [-s [Python]:script[.py] ]
12
13DESCRIPTION
14-----------
15
16This perf trace option is used to process perf trace data using perf's
17built-in Python interpreter. It reads and processes the input file and
18displays the results of the trace analysis implemented in the given
19Python script, if any.
20
21A QUICK EXAMPLE
22---------------
23
24This section shows the process, start to finish, of creating a working
25Python script that aggregates and extracts useful information from a
26raw perf trace stream. You can avoid reading the rest of this
27document if an example is enough for you; the rest of the document
28provides more details on each step and lists the library functions
29available to script writers.
30
31This example actually details the steps that were used to create the
32'syscall-counts' script you see when you list the available perf trace
33scripts via 'perf trace -l'. As such, this script also shows how to
34integrate your script into the list of general-purpose 'perf trace'
35scripts listed by that command.
36
37The syscall-counts script is a simple script, but demonstrates all the
38basic ideas necessary to create a useful script. Here's an example
39of its output (syscall names are not yet supported, they will appear
40as numbers):
41
42----
43syscall events:
44
45event count
46---------------------------------------- -----------
47sys_write 455067
48sys_getdents 4072
49sys_close 3037
50sys_swapoff 1769
51sys_read 923
52sys_sched_setparam 826
53sys_open 331
54sys_newfstat 326
55sys_mmap 217
56sys_munmap 216
57sys_futex 141
58sys_select 102
59sys_poll 84
60sys_setitimer 12
61sys_writev 8
6215 8
63sys_lseek 7
64sys_rt_sigprocmask 6
65sys_wait4 3
66sys_ioctl 3
67sys_set_robust_list 1
68sys_exit 1
6956 1
70sys_access 1
71----
72
73Basically our task is to keep a per-syscall tally that gets updated
74every time a system call occurs in the system. Our script will do
75that, but first we need to record the data that will be processed by
76that script. Theoretically, there are a couple of ways we could do
77that:
78
79- we could enable every event under the tracing/events/syscalls
80 directory, but this is over 600 syscalls, well beyond the number
81 allowable by perf. These individual syscall events will however be
82 useful if we want to later use the guidance we get from the
83 general-purpose scripts to drill down and get more detail about
84 individual syscalls of interest.
85
86- we can enable the sys_enter and/or sys_exit syscalls found under
87 tracing/events/raw_syscalls. These are called for all syscalls; the
88 'id' field can be used to distinguish between individual syscall
89 numbers.
90
91For this script, we only need to know that a syscall was entered; we
92don't care how it exited, so we'll use 'perf record' to record only
93the sys_enter events:
94
95----
96# perf record -c 1 -f -a -M -R -e raw_syscalls:sys_enter
97
98^C[ perf record: Woken up 1 times to write data ]
99[ perf record: Captured and wrote 56.545 MB perf.data (~2470503 samples) ]
100----
101
102The options basically say to collect data for every syscall event
103system-wide and multiplex the per-cpu output into a single stream.
104That single stream will be recorded in a file in the current directory
105called perf.data.
106
107Once we have a perf.data file containing our data, we can use the -g
108'perf trace' option to generate a Python script that will contain a
109callback handler for each event type found in the perf.data trace
110stream (for more details, see the STARTER SCRIPTS section).
111
112----
113# perf trace -g python
114generated Python script: perf-trace.py
115
116The output file, also created in the current directory, is named
117perf-trace.py. Here's the file in its entirety:
118
119# perf trace event handlers, generated by perf trace -g python
120# Licensed under the terms of the GNU GPL License version 2
121
122# The common_* event handler fields are the most useful fields common to
123# all events. They don't necessarily correspond to the 'common_*' fields
124# in the format files. Those fields not available as handler params can
125# be retrieved using Python functions of the form common_*(context).
126# See the perf-trace-python Documentation for the list of available functions.
127
128import os
129import sys
130
131sys.path.append(os.environ['PERF_EXEC_PATH'] + \
132 '/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
133
134from perf_trace_context import *
135from Core import *
136
137def trace_begin():
138 print "in trace_begin"
139
140def trace_end():
141 print "in trace_end"
142
143def raw_syscalls__sys_enter(event_name, context, common_cpu,
144 common_secs, common_nsecs, common_pid, common_comm,
145 id, args):
146 print_header(event_name, common_cpu, common_secs, common_nsecs,
147 common_pid, common_comm)
148
149 print "id=%d, args=%s\n" % \
150 (id, args),
151
152def trace_unhandled(event_name, context, common_cpu, common_secs, common_nsecs,
153 common_pid, common_comm):
154 print_header(event_name, common_cpu, common_secs, common_nsecs,
155 common_pid, common_comm)
156
157def print_header(event_name, cpu, secs, nsecs, pid, comm):
158 print "%-20s %5u %05u.%09u %8u %-20s " % \
159 (event_name, cpu, secs, nsecs, pid, comm),
160----
161
162At the top is a comment block followed by some import statements and a
163path append which every perf trace script should include.
164
165Following that are a couple of generated functions, trace_begin() and
166trace_end(), which are called at the beginning and the end of the
167script respectively (for more details, see the SCRIPT LAYOUT section
168below).
169
170Following those are the 'event handler' functions generated one for
171every event in the 'perf record' output. The handler functions take
172the form subsystem__event_name, and contain named parameters, one for
173each field in the event; in this case, there's only one event,
174raw_syscalls__sys_enter(). (see the EVENT HANDLERS section below for
175more info on event handlers).
176
177The final couple of functions are, like the begin and end functions,
178generated for every script. The first, trace_unhandled(), is called
179every time the script finds an event in the perf.data file that
180doesn't correspond to any event handler in the script. This could
181mean either that the record step recorded event types that it wasn't
182really interested in, or the script was run against a trace file that
183doesn't correspond to the script.
184
185The script generated by the -g option simply prints a line for each
186event found in the trace stream i.e. it basically just dumps the event
187and its parameter values to stdout. The print_header() function is
188simply a utility function used for that purpose. Let's rename the
189script and run it to see the default output:
190
191----
192# mv perf-trace.py syscall-counts.py
193# perf trace -s syscall-counts.py
194
195raw_syscalls__sys_enter 1 00840.847582083 7506 perf id=1, args=
196raw_syscalls__sys_enter 1 00840.847595764 7506 perf id=1, args=
197raw_syscalls__sys_enter 1 00840.847620860 7506 perf id=1, args=
198raw_syscalls__sys_enter 1 00840.847710478 6533 npviewer.bin id=78, args=
199raw_syscalls__sys_enter 1 00840.847719204 6533 npviewer.bin id=142, args=
200raw_syscalls__sys_enter 1 00840.847755445 6533 npviewer.bin id=3, args=
201raw_syscalls__sys_enter 1 00840.847775601 6533 npviewer.bin id=3, args=
202raw_syscalls__sys_enter 1 00840.847781820 6533 npviewer.bin id=3, args=
203.
204.
205.
206----
207
208Of course, for this script, we're not interested in printing every
209trace event, but rather aggregating it in a useful way. So we'll get
210rid of everything to do with printing as well as the trace_begin() and
211trace_unhandled() functions, which we won't be using. That leaves us
212with this minimalistic skeleton:
213
214----
215import os
216import sys
217
218sys.path.append(os.environ['PERF_EXEC_PATH'] + \
219 '/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
220
221from perf_trace_context import *
222from Core import *
223
224def trace_end():
225 print "in trace_end"
226
227def raw_syscalls__sys_enter(event_name, context, common_cpu,
228 common_secs, common_nsecs, common_pid, common_comm,
229 id, args):
230----
231
232In trace_end(), we'll simply print the results, but first we need to
233generate some results to print. To do that we need to have our
234sys_enter() handler do the necessary tallying until all events have
235been counted. A hash table indexed by syscall id is a good way to
236store that information; every time the sys_enter() handler is called,
237we simply increment a count associated with that hash entry indexed by
238that syscall id:
239
240----
241 syscalls = autodict()
242
243 try:
244 syscalls[id] += 1
245 except TypeError:
246 syscalls[id] = 1
247----
248
249The syscalls 'autodict' object is a special kind of Python dictionary
250(implemented in Core.py) that implements Perl's 'autovivifying' hashes
251in Python i.e. with autovivifying hashes, you can assign nested hash
252values without having to go to the trouble of creating intermediate
253levels if they don't exist e.g syscalls[comm][pid][id] = 1 will create
254the intermediate hash levels and finally assign the value 1 to the
255hash entry for 'id' (because the value being assigned isn't a hash
256object itself, the initial value is assigned in the TypeError
257exception. Well, there may be a better way to do this in Python but
258that's what works for now).
259
260Putting that code into the raw_syscalls__sys_enter() handler, we
261effectively end up with a single-level dictionary keyed on syscall id
262and having the counts we've tallied as values.
263
264The print_syscall_totals() function iterates over the entries in the
265dictionary and displays a line for each entry containing the syscall
266name (the dictionary keys contain the syscall ids, which are passed to
267the Util function syscall_name(), which translates the raw syscall
268numbers to the corresponding syscall name strings). The output is
269displayed after all the events in the trace have been processed, by
270calling the print_syscall_totals() function from the trace_end()
271handler called at the end of script processing.
272
273The final script producing the output shown above is shown in its
274entirety below (syscall_name() helper is not yet available, you can
275only deal with id's for now):
276
277----
278import os
279import sys
280
281sys.path.append(os.environ['PERF_EXEC_PATH'] + \
282 '/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
283
284from perf_trace_context import *
285from Core import *
286from Util import *
287
288syscalls = autodict()
289
290def trace_end():
291 print_syscall_totals()
292
293def raw_syscalls__sys_enter(event_name, context, common_cpu,
294 common_secs, common_nsecs, common_pid, common_comm,
295 id, args):
296 try:
297 syscalls[id] += 1
298 except TypeError:
299 syscalls[id] = 1
300
301def print_syscall_totals():
302 if for_comm is not None:
303 print "\nsyscall events for %s:\n\n" % (for_comm),
304 else:
305 print "\nsyscall events:\n\n",
306
307 print "%-40s %10s\n" % ("event", "count"),
308 print "%-40s %10s\n" % ("----------------------------------------", \
309 "-----------"),
310
311 for id, val in sorted(syscalls.iteritems(), key = lambda(k, v): (v, k), \
312 reverse = True):
313 print "%-40s %10d\n" % (syscall_name(id), val),
314----
315
316The script can be run just as before:
317
318 # perf trace -s syscall-counts.py
319
320So those are the essential steps in writing and running a script. The
321process can be generalized to any tracepoint or set of tracepoints
322you're interested in - basically find the tracepoint(s) you're
323interested in by looking at the list of available events shown by
324'perf list' and/or look in /sys/kernel/debug/tracing events for
325detailed event and field info, record the corresponding trace data
326using 'perf record', passing it the list of interesting events,
327generate a skeleton script using 'perf trace -g python' and modify the
328code to aggregate and display it for your particular needs.
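
Condensed into commands, and reusing the raw_syscalls:sys_enter event and the
script name from the walkthrough above (substitute your own events and name),
that flow looks roughly like this:

----
# perf list                                   # find interesting tracepoints
# perf record -c 1 -f -a -M -R -e raw_syscalls:sys_enter
# perf trace -g python                        # generates perf-trace.py
# mv perf-trace.py syscall-counts.py          # rename and edit the skeleton
# perf trace -s syscall-counts.py             # run the finished script
----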
329
330After you've done that you may end up with a general-purpose script
331that you want to keep around and have available for future use. By
332writing a couple of very simple shell scripts and putting them in the
333right place, you can have your script listed alongside the other
334scripts listed by the 'perf trace -l' command e.g.:
335
336----
337root@tropicana:~# perf trace -l
338List of available trace scripts:
339 workqueue-stats workqueue stats (ins/exe/create/destroy)
340 wakeup-latency system-wide min/max/avg wakeup latency
341 rw-by-file <comm> r/w activity for a program, by file
342 rw-by-pid system-wide r/w activity
343----
344
345A nice side effect of doing this is that you also then capture the
346probably lengthy 'perf record' command needed to record the events for
347the script.
348
349To have the script appear as a 'built-in' script, you write two simple
350scripts, one for recording and one for 'reporting'.
351
352The 'record' script is a shell script with the same base name as your
353script, but with -record appended. The shell script should be put
354into the perf/scripts/python/bin directory in the kernel source tree.
355In that script, you write the 'perf record' command-line needed for
356your script:
357
358----
359# cat kernel-source/tools/perf/scripts/python/bin/syscall-counts-record
360
361#!/bin/bash
362perf record -c 1 -f -a -M -R -e raw_syscalls:sys_enter
363----
364
365The 'report' script is also a shell script with the same base name as
366your script, but with -report appended. It should also be located in
367the perf/scripts/python/bin directory. In that script, you write the
368'perf trace -s' command-line needed for running your script:
369
370----
371# cat kernel-source/tools/perf/scripts/python/bin/syscall-counts-report
372
373#!/bin/bash
374# description: system-wide syscall counts
375perf trace -s ~/libexec/perf-core/scripts/python/syscall-counts.py
376----
377
378Note that the location of the Python script given in the shell script
379is in the libexec/perf-core/scripts/python directory - this is where
380the script will be copied by 'make install' when you install perf.
381For the installation to install your script there, your script needs
382to be located in the perf/scripts/python directory in the kernel
383source tree:
384
385----
386# ls -al kernel-source/tools/perf/scripts/python
387
388root@tropicana:/home/trz/src/tip# ls -al tools/perf/scripts/python
389total 32
390drwxr-xr-x 4 trz trz 4096 2010-01-26 22:30 .
391drwxr-xr-x 4 trz trz 4096 2010-01-26 22:29 ..
392drwxr-xr-x 2 trz trz 4096 2010-01-26 22:29 bin
393-rw-r--r-- 1 trz trz 2548 2010-01-26 22:29 check-perf-trace.py
394drwxr-xr-x 3 trz trz 4096 2010-01-26 22:49 Perf-Trace-Util
395-rw-r--r-- 1 trz trz 1462 2010-01-26 22:30 syscall-counts.py
396----
397
398Once you've done that (don't forget to do a new 'make install',
399otherwise your script won't show up at run-time), 'perf trace -l'
400should show a new entry for your script:
401
402----
403root@tropicana:~# perf trace -l
404List of available trace scripts:
405 workqueue-stats workqueue stats (ins/exe/create/destroy)
406 wakeup-latency system-wide min/max/avg wakeup latency
407 rw-by-file <comm> r/w activity for a program, by file
408 rw-by-pid system-wide r/w activity
409 syscall-counts system-wide syscall counts
410----
411
412You can now perform the record step via 'perf trace record':
413
414 # perf trace record syscall-counts
415
416and display the output using 'perf trace report':
417
418 # perf trace report syscall-counts
419
420STARTER SCRIPTS
421---------------
422
423You can quickly get started writing a script for a particular set of
424trace data by generating a skeleton script using 'perf trace -g
425python' in the same directory as an existing perf.data trace file.
426That will generate a starter script containing a handler for each of
427the event types in the trace file; it simply prints every available
428field for each event in the trace file.
429
430You can also look at the existing scripts in
431~/libexec/perf-core/scripts/python for typical examples showing how to
432do basic things like aggregate event data, print results, etc. Also,
433the check-perf-trace.py script, while not interesting for its results,
434attempts to exercise all of the main scripting features.
435
436EVENT HANDLERS
437--------------
438
439When perf trace is invoked using a trace script, a user-defined
440'handler function' is called for each event in the trace. If there's
441no handler function defined for a given event type, the event is
442ignored (or passed to a 'trace_handled' function, see below) and the
443next event is processed.
444
445Most of the event's field values are passed as arguments to the
446handler function; some of the less common ones aren't - those are
447available as calls back into the perf executable (see below).
448
449As an example, the following perf record command can be used to record
450all sched_wakeup events in the system:
451
452 # perf record -c 1 -f -a -M -R -e sched:sched_wakeup
453
454Traces meant to be processed using a script should be recorded with
455the above options: -c 1 says to sample every event, -a to enable
456system-wide collection, -M to multiplex the output, and -R to collect
457raw samples.
458
459The format file for the sched_wakeup event defines the following fields
460(see /sys/kernel/debug/tracing/events/sched/sched_wakeup/format):
461
462----
463 format:
464 field:unsigned short common_type;
465 field:unsigned char common_flags;
466 field:unsigned char common_preempt_count;
467 field:int common_pid;
468 field:int common_lock_depth;
469
470 field:char comm[TASK_COMM_LEN];
471 field:pid_t pid;
472 field:int prio;
473 field:int success;
474 field:int target_cpu;
475----
476
477The handler function for this event would be defined as:
478
479----
480def sched__sched_wakeup(event_name, context, common_cpu, common_secs,
481 common_nsecs, common_pid, common_comm,
482 comm, pid, prio, success, target_cpu):
483 pass
484----
485
486The handler function takes the form subsystem__event_name.
487
488The common_* arguments in the handler's argument list are the set of
489arguments passed to all event handlers; some of the fields correspond
490to the common_* fields in the format file, but some are synthesized,
491and some of the common_* fields aren't common enough to be passed
492to every event as arguments but are available as library functions.
493
494Here's a brief description of each of the invariant event args:
495
496 event_name the name of the event as text
497 context an opaque 'cookie' used in calls back into perf
498 common_cpu the cpu the event occurred on
499 common_secs the secs portion of the event timestamp
500 common_nsecs the nsecs portion of the event timestamp
501 common_pid the pid of the current task
502 common_comm the name of the current process
503
504All of the remaining fields in the event's format file have
505counterparts as handler function arguments of the same name, as can be
506seen in the example above.
507
508The above provides the basics needed to directly access every field of
509every event in a trace, which covers 90% of what you need to know to
510write a useful trace script. The sections below cover the rest.
511
512SCRIPT LAYOUT
513-------------
514
515Every perf trace Python script should start by setting up a Python
516module search path and 'import'ing a few support modules (see module
517descriptions below):
518
519----
520 import os
521 import sys
522
523 sys.path.append(os.environ['PERF_EXEC_PATH'] + \
524 '/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
525
526 from perf_trace_context import *
527 from Core import *
528----
529
530The rest of the script can contain handler functions and support
531functions in any order.
532
533Aside from the event handler functions discussed above, every script
534can implement a set of optional functions:
535
536*trace_begin*, if defined, is called before any event is processed and
537gives scripts a chance to do setup tasks:
538
539----
540def trace_begin():
541 pass
542----
543
544*trace_end*, if defined, is called after all events have been
545 processed and gives scripts a chance to do end-of-script tasks, such
546 as display results:
547
548----
549def trace_end():
550 pass
551----
552
553*trace_unhandled*, if defined, is called for any event that
554 doesn't have a handler explicitly defined for it. The standard set
555 of common arguments is passed into it:
556
557----
558def trace_unhandled(event_name, context, common_cpu, common_secs,
559 common_nsecs, common_pid, common_comm):
560 pass
561----
562
563The remaining sections provide descriptions of each of the available
564built-in perf trace Python modules and their associated functions.
565
566AVAILABLE MODULES AND FUNCTIONS
567-------------------------------
568
569The following sections describe the functions and variables available
570via the various perf trace Python modules. To use the functions and
571variables from the given module, add the corresponding 'from XXXX
572import' line to your perf trace script.
573
574Core.py Module
575~~~~~~~~~~~~~~
576
577These functions provide some essential functions to user scripts.
578
579The *flag_str* and *symbol_str* functions provide human-readable
580strings for flag and symbolic fields. These correspond to the strings
581and values parsed from the 'print fmt' fields of the event format
582files:
583
584 flag_str(event_name, field_name, field_value) - returns the string representation corresponding to field_value for the flag field field_name of event event_name
585 symbol_str(event_name, field_name, field_value) - returns the string representation corresponding to field_value for the symbolic field field_name of event event_name
586
587The *autodict* function returns a special kind of Python
588dictionary that implements Perl's 'autovivifying' hashes in Python
589i.e. with autovivifying hashes, you can assign nested hash values
590without having to go to the trouble of creating intermediate levels if
591they don't exist.
592
593 autodict() - returns an autovivifying dictionary instance
594
595
596perf_trace_context Module
597~~~~~~~~~~~~~~~~~~~~~~~~~
598
599Some of the 'common' fields in the event format file aren't all that
600common, but need to be made accessible to user scripts nonetheless.
601
602perf_trace_context defines a set of functions that can be used to
603access this data in the context of the current event. Each of these
604functions expects a context variable, which is the same as the
605context variable passed into every event handler as the second
606argument.
607
608 common_pc(context) - returns common_preempt_count for the current event
609 common_flags(context) - returns common_flags for the current event
610 common_lock_depth(context) - returns common_lock_depth for the current event
611
612Util.py Module
613~~~~~~~~~~~~~~
614
615Various utility functions for use with perf trace:
616
617 nsecs(secs, nsecs) - returns total nsecs given secs/nsecs pair
618 nsecs_secs(nsecs) - returns whole secs portion given nsecs
619 nsecs_nsecs(nsecs) - returns nsecs remainder given nsecs
620 nsecs_str(nsecs) - returns printable string in the form secs.nsecs
621 avg(total, n) - returns average given a sum and a total number of values
622
623SEE ALSO
624--------
625linkperf:perf-trace[1]
diff --git a/tools/perf/Documentation/perf-trace.txt b/tools/perf/Documentation/perf-trace.txt
index 41ed75398ca9..8879299cd9df 100644
--- a/tools/perf/Documentation/perf-trace.txt
+++ b/tools/perf/Documentation/perf-trace.txt
@@ -8,18 +8,63 @@ perf-trace - Read perf.data (created by perf record) and display trace output
 SYNOPSIS
 --------
 [verse]
-'perf trace' [-i <file> | --input=file] symbol_name
+'perf trace' {record <script> | report <script> [args] }
 
 DESCRIPTION
 -----------
 This command reads the input file and displays the trace recorded.
 
+There are several variants of perf trace:
+
+  'perf trace' to see a detailed trace of the workload that was
+  recorded.
+
+  You can also run a set of pre-canned scripts that aggregate and
+  summarize the raw trace data in various ways (the list of scripts is
+  available via 'perf trace -l'). The following variants allow you to
+  record and run those scripts:
+
+  'perf trace record <script>' to record the events required for 'perf
+  trace report'. <script> is the name displayed in the output of
+  'perf trace --list' i.e. the actual script name minus any language
+  extension.
+
+  'perf trace report <script>' to run and display the results of
+  <script>. <script> is the name displayed in the output of 'perf
+  trace --list' i.e. the actual script name minus any language
+  extension. The perf.data output from a previous run of 'perf trace
+  record <script>' is used and should be present for this command to
+  succeed.
+
+  See the 'SEE ALSO' section for links to language-specific
+  information on how to write and run your own trace scripts.
+
 OPTIONS
 -------
 -D::
 --dump-raw-trace=::
         Display verbose dump of the trace data.
 
+-L::
+--Latency=::
+        Show latency attributes (irqs/preemption disabled, etc).
+
+-l::
+--list=::
+        Display a list of available trace scripts.
+
+-s ['lang']::
+--script=::
+        Process trace data with the given script ([lang]:script[.ext]).
+        If the string 'lang' is specified in place of a script name, a
+        list of supported languages will be displayed instead.
+
+-g::
+--gen-script=::
+        Generate perf-trace.[ext] starter script for given language,
+        using current perf.data.
+
 SEE ALSO
 --------
-linkperf:perf-record[1]
+linkperf:perf-record[1], linkperf:perf-trace-perl[1],
+linkperf:perf-trace-python[1]
diff --git a/tools/perf/Documentation/perf.txt b/tools/perf/Documentation/perf.txt
index 69c832557199..0eeb247dc7d2 100644
--- a/tools/perf/Documentation/perf.txt
+++ b/tools/perf/Documentation/perf.txt
@@ -12,7 +12,7 @@ SYNOPSIS
 
 DESCRIPTION
 -----------
-Performance counters for Linux are are a new kernel-based subsystem
+Performance counters for Linux are a new kernel-based subsystem
 that provide a framework for all things performance analysis. It
 covers hardware level (CPU/PMU, Performance Monitoring Unit) features
 and software features (software counters, tracepoints) as well.