author    Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>    2014-09-29 08:02:11 -0400
committer Steven Rostedt <rostedt@goodmis.org>                  2014-10-03 16:44:02 -0400
commit    915de2adb584acea89f3f654a6c9b329f682100f (patch)
tree      14bdac4e5f087e2dc0642ce3a73d3f97864a759f
parent    2909ef28b1d385210d4fef551499debc914f30e4 (diff)
ftracetest: Add POSIX.3 standard and XFAIL result codes
Add XFAIL and the POSIX 1003.3 standard codes (UNRESOLVED/UNTESTED/
UNSUPPORTED) as result codes. These are used when a test case is expected
to fail, or depends on an unsupported feature (e.g. one disabled by the
kernel config).

To return these result codes, this introduces the exit_unresolved,
exit_untested, exit_unsupported and exit_xfail functions, which use
real-time signals to notify the result code to ftracetest. This also sets
the "errexit" option for the testcases, so that the tests don't need to
exit explicitly.

Note that if a test returns UNRESOLVED/UNSUPPORTED/FAIL, its test log,
including the executed commands, is shown on the console and in the main
logfile, as below.

------
# ./ftracetest samples/
=== Ftrace unit tests ===
[1] failure-case example	[FAIL]
execute: /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/fail.tc
+ . /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/fail.tc
++ cat non-exist-file
cat: non-exist-file: No such file or directory
[2] pass-case example	[PASS]
[3] unresolved-case example	[UNRESOLVED]
execute: /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/unresolved.tc
+ . /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/unresolved.tc
++ trap exit_unresolved INT
++ kill -INT 29324
+++ exit_unresolved
+++ kill -s 38 29265
+++ exit 0
[4] unsupported-case example	[UNSUPPORTED]
execute: /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/unsupported.tc
+ . /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/unsupported.tc
++ exit_unsupported
++ kill -s 40 29265
++ exit 0
[5] untested-case example	[UNTESTED]
[6] xfail-case example	[XFAIL]

# of passed:  1
# of failed:  1
# of unresolved:  1
# of untested:  1
# of unsupported:  1
# of xfailed:  1
# of undefined(test bug):  0
------

Link: http://lkml.kernel.org/p/20140929120211.30203.99510.stgit@kbuild-f20.novalocal

Acked-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
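The real-time-signal handshake described above can be sketched in one self-contained script. The names mirror the patch (SIG_BASE, SIG_RESULT, exit_unsupported), but note this runs harness and "test" in a single file purely for illustration; in ftracetest they are separate processes:

```shell
#!/bin/sh
# Sketch of the result-code reporting scheme: the harness installs a trap
# for a realtime signal per result code, and the test's exit_* helper
# sends that signal back to the harness PID before exiting cleanly.
UNSUPPORTED=4
SIG_BASE=36                                  # realtime signal base, as in the patch
SIG_PID=$$                                   # PID the test will signal

SIG_UNSUPPORTED=$((SIG_BASE + UNSUPPORTED))  # 40, matching "kill -s 40" in the log
SIG_RESULT=0
trap 'SIG_RESULT=$UNSUPPORTED' $SIG_UNSUPPORTED

exit_unsupported() {
  kill -s $SIG_UNSUPPORTED $SIG_PID          # notify the harness of the result code
  exit 0                                     # then leave the test subshell cleanly
}

# A "testcase" runs in a subshell; under errexit it could not otherwise
# communicate anything richer than a zero/nonzero exit status.
( exit_unsupported )
echo "result code: $SIG_RESULT"
```

The trap simply records which code was signaled; the harness then combines it with the subshell's exit status when classifying the result.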
-rw-r--r--  tools/testing/selftests/ftrace/README                            |  37
-rwxr-xr-x  tools/testing/selftests/ftrace/ftracetest                        | 124
-rw-r--r--  tools/testing/selftests/ftrace/samples/fail.tc                   |   4
-rw-r--r--  tools/testing/selftests/ftrace/samples/pass.tc                   |   3
-rw-r--r--  tools/testing/selftests/ftrace/samples/unresolved.tc             |   4
-rw-r--r--  tools/testing/selftests/ftrace/samples/unsupported.tc            |   3
-rw-r--r--  tools/testing/selftests/ftrace/samples/untested.tc               |   3
-rw-r--r--  tools/testing/selftests/ftrace/samples/xfail.tc                  |   3
-rw-r--r--  tools/testing/selftests/ftrace/test.d/00basic/basic2.tc          |   3
-rw-r--r--  tools/testing/selftests/ftrace/test.d/00basic/basic3.tc          |   6
-rw-r--r--  tools/testing/selftests/ftrace/test.d/kprobe/add_and_remove.tc   |  12
-rw-r--r--  tools/testing/selftests/ftrace/test.d/kprobe/busy_check.tc       |  15
-rw-r--r--  tools/testing/selftests/ftrace/test.d/template                   |   5
13 files changed, 189 insertions, 33 deletions
diff --git a/tools/testing/selftests/ftrace/README b/tools/testing/selftests/ftrace/README
index b8631f03e754..182e76fa4b82 100644
--- a/tools/testing/selftests/ftrace/README
+++ b/tools/testing/selftests/ftrace/README
@@ -38,6 +38,43 @@ extension) and rewrite the test description line.
  * The test cases should run on dash (busybox shell) for testing on
    minimal cross-build environments.
 
+ * Note that the tests are run with the "set -e" (errexit) option. If any
+   command fails, the test will be terminated immediately.
+
+ * The tests can return additional result codes instead of pass or fail by
+   using exit_unresolved, exit_untested, exit_unsupported and exit_xfail.
+
+Result code
+===========
+
+Ftracetest supports the following result codes:
+
+ * PASS: The test succeeded as expected. A test which exits with 0 is
+   counted as a passed test.
+
+ * FAIL: The test failed, but was expected to succeed. A test which exits
+   with !0 is counted as a failed test.
+
+ * UNRESOLVED: The test produced unclear or intermediate results; for
+   example, the test was interrupted, the test depends on a previous test
+   which failed, or the test was set up incorrectly. A test in any of
+   these situations must call exit_unresolved.
+
+ * UNTESTED: The test was not run; it is currently just a placeholder.
+   In this case, the test must call exit_untested.
+
+ * UNSUPPORTED: The test failed because a required feature is missing
+   (e.g. disabled in the kernel config). In this case, the test must call
+   exit_unsupported.
+
+ * XFAIL: The test failed, and was expected to fail.
+   To return XFAIL, call exit_xfail from the test.
+
+There are some sample test scripts for these result codes under samples/.
+You can run the samples as below:
+
+  # ./ftracetest samples/
+
 TODO
 ====
 
diff --git a/tools/testing/selftests/ftrace/ftracetest b/tools/testing/selftests/ftrace/ftracetest
index 4c6c2fad2cda..a8f81c782856 100755
--- a/tools/testing/selftests/ftrace/ftracetest
+++ b/tools/testing/selftests/ftrace/ftracetest
@@ -114,22 +114,106 @@ prlog "=== Ftrace unit tests ==="
 
 
 # Testcase management
+# Test result codes - Dejagnu extended code
+PASS=0	# The test succeeded.
+FAIL=1	# The test failed, but was expected to succeed.
+UNRESOLVED=2	# The test produced indeterminate results. (e.g. interrupted)
+UNTESTED=3	# The test was not run, currently just a placeholder.
+UNSUPPORTED=4	# The test failed because of lack of feature.
+XFAIL=5	# The test failed, and was expected to fail.
+
+# Accumulations
 PASSED_CASES=
 FAILED_CASES=
+UNRESOLVED_CASES=
+UNTESTED_CASES=
+UNSUPPORTED_CASES=
+XFAILED_CASES=
+UNDEFINED_CASES=
+TOTAL_RESULT=0
+
 CASENO=0
 testcase() { # testfile
   CASENO=$((CASENO+1))
   prlog -n "[$CASENO]"`grep "^#[ \t]*description:" $1 | cut -f2 -d:`
 }
-failed() {
-  prlog " [FAIL]"
-  FAILED_CASES="$FAILED_CASES $CASENO"
+
+eval_result() { # retval sigval
+  local retval=$2
+  if [ $2 -eq 0 ]; then
+    test $1 -ne 0 && retval=$FAIL
+  fi
+  case $retval in
+    $PASS)
+      prlog " [PASS]"
+      PASSED_CASES="$PASSED_CASES $CASENO"
+      return 0
+    ;;
+    $FAIL)
+      prlog " [FAIL]"
+      FAILED_CASES="$FAILED_CASES $CASENO"
+      return 1 # this is a bug.
+    ;;
+    $UNRESOLVED)
+      prlog " [UNRESOLVED]"
+      UNRESOLVED_CASES="$UNRESOLVED_CASES $CASENO"
+      return 1 # this is a kind of bug.. something happened.
+    ;;
+    $UNTESTED)
+      prlog " [UNTESTED]"
+      UNTESTED_CASES="$UNTESTED_CASES $CASENO"
+      return 0
+    ;;
+    $UNSUPPORTED)
+      prlog " [UNSUPPORTED]"
+      UNSUPPORTED_CASES="$UNSUPPORTED_CASES $CASENO"
+      return 1 # this is not a bug, but the result should be reported.
+    ;;
+    $XFAIL)
+      prlog " [XFAIL]"
+      XFAILED_CASES="$XFAILED_CASES $CASENO"
+      return 0
+    ;;
+    *)
+      prlog " [UNDEFINED]"
+      UNDEFINED_CASES="$UNDEFINED_CASES $CASENO"
+      return 1 # this must be a test bug
+    ;;
+  esac
+}
+
+# Signal handling for result codes
+SIG_RESULT=
+SIG_BASE=36	# Use realtime signals
+SIG_PID=$$
+
+SIG_UNRESOLVED=$((SIG_BASE + UNRESOLVED))
+exit_unresolved () {
+  kill -s $SIG_UNRESOLVED $SIG_PID
+  exit 0
+}
+trap 'SIG_RESULT=$UNRESOLVED' $SIG_UNRESOLVED
+
+SIG_UNTESTED=$((SIG_BASE + UNTESTED))
+exit_untested () {
+  kill -s $SIG_UNTESTED $SIG_PID
+  exit 0
 }
-passed() {
-  prlog " [PASS]"
-  PASSED_CASES="$PASSED_CASES $CASENO"
+trap 'SIG_RESULT=$UNTESTED' $SIG_UNTESTED
+
+SIG_UNSUPPORTED=$((SIG_BASE + UNSUPPORTED))
+exit_unsupported () {
+  kill -s $SIG_UNSUPPORTED $SIG_PID
+  exit 0
 }
+trap 'SIG_RESULT=$UNSUPPORTED' $SIG_UNSUPPORTED
 
+SIG_XFAIL=$((SIG_BASE + XFAIL))
+exit_xfail () {
+  kill -s $SIG_XFAIL $SIG_PID
+  exit 0
+}
+trap 'SIG_RESULT=$XFAIL' $SIG_XFAIL
 
 # Run one test case
 run_test() { # testfile
@@ -137,14 +221,17 @@ run_test() { # testfile
   local testlog=`mktemp --tmpdir=$LOG_DIR ${testname}-XXXXXX.log`
   testcase $1
   echo "execute: "$1 > $testlog
-  (cd $TRACING_DIR; set -x ; . $1) >> $testlog 2>&1
-  ret=$?
-  if [ $ret -ne 0 ]; then
-    failed
-    catlog $testlog
-  else
-    passed
+  SIG_RESULT=0
+  # setup PID and PPID, $$ is not updated.
+  (cd $TRACING_DIR; read PID _ < /proc/self/stat ;
+   set -e; set -x; . $1) >> $testlog 2>&1
+  eval_result $? $SIG_RESULT
+  if [ $? -eq 0 ]; then
+    # Remove test log if the test was done as it was expected.
     [ $KEEP_LOG -eq 0 ] && rm $testlog
+  else
+    catlog $testlog
+    TOTAL_RESULT=1
   fi
 }
 
@@ -152,8 +239,15 @@ run_test() { # testfile
 for t in $TEST_CASES; do
   run_test $t
 done
+
 prlog ""
 prlog "# of passed: " `echo $PASSED_CASES | wc -w`
 prlog "# of failed: " `echo $FAILED_CASES | wc -w`
+prlog "# of unresolved: " `echo $UNRESOLVED_CASES | wc -w`
+prlog "# of untested: " `echo $UNTESTED_CASES | wc -w`
+prlog "# of unsupported: " `echo $UNSUPPORTED_CASES | wc -w`
+prlog "# of xfailed: " `echo $XFAILED_CASES | wc -w`
+prlog "# of undefined(test bug): " `echo $UNDEFINED_CASES | wc -w`
 
-test -z "$FAILED_CASES" # if no error, return 0
+# if no error, return 0
+exit $TOTAL_RESULT
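The retval/sigval precedence that eval_result implements can be restated in a minimal sketch (the function name `classify` is invented for this example): a nonzero signaled code always wins; otherwise a nonzero exit status degrades PASS to FAIL.

```shell
#!/bin/sh
# Simplified restatement of eval_result's decision rule.
PASS=0
FAIL=1

classify() { # retval sigval -> prints the final result code
  result=$2                        # the signaled code takes precedence
  if [ $2 -eq 0 ]; then
    [ $1 -ne 0 ] && result=$FAIL   # no signal + nonzero exit -> FAIL
  fi
  echo $result
}

classify 0 0    # clean exit, no signal   -> prints 0 (PASS)
classify 1 0    # errexit aborted test    -> prints 1 (FAIL)
classify 1 4    # exit_unsupported fired  -> prints 4 (UNSUPPORTED)
```

This is why a test that calls exit_unsupported can safely `exit 0` afterwards: the harness classifies by the signaled code, not the subshell's status.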
diff --git a/tools/testing/selftests/ftrace/samples/fail.tc b/tools/testing/selftests/ftrace/samples/fail.tc
new file mode 100644
index 000000000000..15e35b956e05
--- /dev/null
+++ b/tools/testing/selftests/ftrace/samples/fail.tc
@@ -0,0 +1,4 @@
+#!/bin/sh
+# description: failure-case example
+cat non-exist-file
+echo "this is not executed"
diff --git a/tools/testing/selftests/ftrace/samples/pass.tc b/tools/testing/selftests/ftrace/samples/pass.tc
new file mode 100644
index 000000000000..d01549370041
--- /dev/null
+++ b/tools/testing/selftests/ftrace/samples/pass.tc
@@ -0,0 +1,3 @@
+#!/bin/sh
+# description: pass-case example
+return 0
diff --git a/tools/testing/selftests/ftrace/samples/unresolved.tc b/tools/testing/selftests/ftrace/samples/unresolved.tc
new file mode 100644
index 000000000000..41e99d3358d1
--- /dev/null
+++ b/tools/testing/selftests/ftrace/samples/unresolved.tc
@@ -0,0 +1,4 @@
+#!/bin/sh
+# description: unresolved-case example
+trap exit_unresolved INT
+kill -INT $PID
diff --git a/tools/testing/selftests/ftrace/samples/unsupported.tc b/tools/testing/selftests/ftrace/samples/unsupported.tc
new file mode 100644
index 000000000000..45910ff13328
--- /dev/null
+++ b/tools/testing/selftests/ftrace/samples/unsupported.tc
@@ -0,0 +1,3 @@
+#!/bin/sh
+# description: unsupported-case example
+exit_unsupported
diff --git a/tools/testing/selftests/ftrace/samples/untested.tc b/tools/testing/selftests/ftrace/samples/untested.tc
new file mode 100644
index 000000000000..35a45946ec60
--- /dev/null
+++ b/tools/testing/selftests/ftrace/samples/untested.tc
@@ -0,0 +1,3 @@
+#!/bin/sh
+# description: untested-case example
+exit_untested
diff --git a/tools/testing/selftests/ftrace/samples/xfail.tc b/tools/testing/selftests/ftrace/samples/xfail.tc
new file mode 100644
index 000000000000..9dd395323259
--- /dev/null
+++ b/tools/testing/selftests/ftrace/samples/xfail.tc
@@ -0,0 +1,3 @@
+#!/bin/sh
+# description: xfail-case example
+cat non-exist-file || exit_xfail
diff --git a/tools/testing/selftests/ftrace/test.d/00basic/basic2.tc b/tools/testing/selftests/ftrace/test.d/00basic/basic2.tc
index b04f30df0db3..bf9a7b037924 100644
--- a/tools/testing/selftests/ftrace/test.d/00basic/basic2.tc
+++ b/tools/testing/selftests/ftrace/test.d/00basic/basic2.tc
@@ -1,6 +1,7 @@
 #!/bin/sh
 # description: Basic test for tracers
+test -f available_tracers
 for t in `cat available_tracers`; do
-  echo $t > current_tracer || exit 1
+  echo $t > current_tracer
 done
 echo nop > current_tracer
diff --git a/tools/testing/selftests/ftrace/test.d/00basic/basic3.tc b/tools/testing/selftests/ftrace/test.d/00basic/basic3.tc
index 0c1a3a207636..bde6625d9785 100644
--- a/tools/testing/selftests/ftrace/test.d/00basic/basic3.tc
+++ b/tools/testing/selftests/ftrace/test.d/00basic/basic3.tc
@@ -1,8 +1,8 @@
 #!/bin/sh
 # description: Basic trace clock test
-[ -f trace_clock ] || exit 1
+test -f trace_clock
 for c in `cat trace_clock | tr -d \[\]`; do
-  echo $c > trace_clock || exit 1
-  grep '\['$c'\]' trace_clock || exit 1
+  echo $c > trace_clock
+  grep '\['$c'\]' trace_clock
 done
 echo local > trace_clock
diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/add_and_remove.tc b/tools/testing/selftests/ftrace/test.d/kprobe/add_and_remove.tc
index 5ddfb476eceb..1b8b665ab2b3 100644
--- a/tools/testing/selftests/ftrace/test.d/kprobe/add_and_remove.tc
+++ b/tools/testing/selftests/ftrace/test.d/kprobe/add_and_remove.tc
@@ -1,11 +1,11 @@
 #!/bin/sh
 # description: Kprobe dynamic event - adding and removing
 
-[ -f kprobe_events ] || exit 1
+[ -f kprobe_events ] || exit_unsupported # this is configurable
 
-echo 0 > events/enable || exit 1
-echo > kprobe_events || exit 1
-echo p:myevent do_fork > kprobe_events || exit 1
-grep myevent kprobe_events || exit 1
-[ -d events/kprobes/myevent ] || exit 1
+echo 0 > events/enable
+echo > kprobe_events
+echo p:myevent do_fork > kprobe_events
+grep myevent kprobe_events
+test -d events/kprobes/myevent
 echo > kprobe_events
diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/busy_check.tc b/tools/testing/selftests/ftrace/test.d/kprobe/busy_check.tc
index 588fde97e93f..b55c84003587 100644
--- a/tools/testing/selftests/ftrace/test.d/kprobe/busy_check.tc
+++ b/tools/testing/selftests/ftrace/test.d/kprobe/busy_check.tc
@@ -1,14 +1,13 @@
 #!/bin/sh
 # description: Kprobe dynamic event - busy event check
 
-[ -f kprobe_events ] || exit 1
+[ -f kprobe_events ] || exit_unsupported
 
-echo 0 > events/enable || exit 1
-echo > kprobe_events || exit 1
-echo p:myevent do_fork > kprobe_events || exit 1
-[ -d events/kprobes/myevent ] || exit 1
-echo 1 > events/kprobes/myevent/enable || exit 1
+echo 0 > events/enable
+echo > kprobe_events
+echo p:myevent do_fork > kprobe_events
+test -d events/kprobes/myevent
+echo 1 > events/kprobes/myevent/enable
 echo > kprobe_events && exit 1 # this must fail
-echo 0 > events/kprobes/myevent/enable || exit 1
+echo 0 > events/kprobes/myevent/enable
 echo > kprobe_events # this must succeed
-
diff --git a/tools/testing/selftests/ftrace/test.d/template b/tools/testing/selftests/ftrace/test.d/template
index ce5f735b2e65..5448f7abad5f 100644
--- a/tools/testing/selftests/ftrace/test.d/template
+++ b/tools/testing/selftests/ftrace/test.d/template
@@ -1,4 +1,9 @@
 #!/bin/sh
 # description: %HERE DESCRIBE WHAT THIS DOES%
 # you have to add ".tc" extention for your testcase file
+# Note that all tests are run with "errexit" option.
+
 exit 0 # Return 0 if the test is passed, otherwise return !0
+# If the test could not run because of lack of feature, call exit_unsupported
+# If the test returned unclear results, call exit_unresolved
+# If the test is a dummy, or a placeholder, call exit_untested
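A hypothetical testcase following this template might look as below. The exit_unsupported stub is defined here only so the sketch is self-contained; in a real .tc file ftracetest provides that helper, and the feature check would target a tracing file rather than /proc/version:

```shell
#!/bin/sh
# description: hypothetical template instance (illustration only)
exit_unsupported() { exit 0; }   # stub; the real helper signals the harness

# Skip cleanly where the required feature is absent.
[ -f /proc/version ] || exit_unsupported

# Under errexit, any failing command here ends the test as FAIL.
grep -q Linux /proc/version

exit 0   # reaching this line means the test passed
```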