authorJonathan Herman <hermanjl@cs.unc.edu>2013-03-19 16:05:41 -0400
committerJonathan Herman <hermanjl@cs.unc.edu>2013-03-19 16:05:41 -0400
commit7875daab4c236841ec03322c130fc2b0927745de (patch)
tree99bb38a6cd1d2e6d6d73ccfcc32511f32ccc8f73
parentd1d41b7293feeb79c55bbc7abc7d0b59a29b9734 (diff)
Formatted first half of README.
-rw-r--r--README.md282
-rwxr-xr-xrun_exps.py11
2 files changed, 150 insertions, 143 deletions
diff --git a/README.md b/README.md
index e1a0815..57d9afa 100644
--- a/README.md
+++ b/README.md
@@ -1,118 +1,97 @@
1I. INTRODUCTION 1# About
2These scripts provide a common way for creating, running, parsing, and 2These Python scripts provide a common way for creating, running, parsing, and plotting experiments using [LITMUS^RT][litmus]. These scripts are:
3plotting experiments under LITMUS^RT. They are designed with the
4following principles in mind:
5
61. Little or no configuration: all scripts use certain parameters to
7configure behavior. However, if the user does not give these
8parameters, the scripts will examine the properties of the user's
9system to pick a suitable default. Requiring user input is a last
10resort.
11
122. Interruptability: the scripts save their work as they evaluate
13multiple directories. When the scripts are interrupted, or if new data
14is added to those directories, the scripts can be re-run and they will
15resume where they left off. This vastly decreases turnaround time for
16testing new features.
17
183. Maximum Safety: where possible, scripts save metadata in their output
19directories about the data contained. This metadata can be used by
20the other scripts to safely use the data later.
21
224. Independence / legacy support: none of these scripts assume their
23input was generated by another of these scripts. Three are designed to
24recognize generic input formats inspired by past LITMUS^RT
25experimental setups. (The exception to this is gen_exps.py, which
26has only user intput and creates output only for run_exps.py)
27
285. Save everything: all output and parameters (even from subprocesses)
29is saved for debugging / reproducability. This data is saved in tmp/
30directories while scripts are running in case scripts fail.
31
32These scripts require that the following repos are in the user's PATH:
331. liblitmus - for real-time executable simulation and task set release
342. feather-trace-tools - for recording and parsing overheads and
35 scheduling events
36
37Optionally, additional features will be enabled if these repos are
38present in the PATH:
391. rt-kernelshark - to record ftrace events for kernelshark visualization
402. sched_trace - to output a file containing scheduling events as
41strings
42
43Each of these scripts is designed to operate independently of the
44others. For example, the parse_exps.py will find any feather trace
45files resembling ft-xyz.bin or xyz.ft and print out overhead
46statistics for the records inside. However, the scripts provide the
47most features (especially safety) when their results are chained
48together, like so:
49 3
41. `gen_exps.py`: for creating sets of experiments
52. `run_exps.py`: for running and tracing experiments
63. `parse_exps.py`: for parsing LITMUS^RT trace data
74. `plot_exps.py`: for plotting directories of csv data
8
9They are designed with the following principles in mind:
10
111. Little or no configuration: all scripts use certain parameters to configure behavior. However, if the user does not give these parameters, the scripts will examine the properties of the user's system to pick a suitable default. Requiring user input is a last resort.
12
132. Interruptability: the scripts save their work as they evaluate multiple directories. When the scripts are interrupted, or if new data is added to those directories, the scripts can be re-run and they will resume where they left off. This vastly decreases turnaround time for testing new features.
14
153. Maximum Safety: where possible, scripts save metadata in their output directories about the data contained. This metadata can be used by the other scripts to safely use the data later.
16
174. Independence / legacy support: none of these scripts assume their input was generated by another of these scripts. Three are designed to recognize generic input formats inspired by past LITMUS^RT experimental setups. (The exception to this is `gen_exps.py`, which takes only user input and creates output only for `run_exps.py`.)
18
195. Save everything: all output and parameters (even from subprocesses) are saved for debugging / reproducibility. This data is saved in `tmp/` directories while scripts are running in case scripts fail.
20
21# Dependencies
22These scripts were tested using Python 2.7.2. They have not been tested using Python 3. The [Matplotlib][matplotlib] Python library is needed for plotting.
23
24The `run_exps.py` script should almost always be run using a LITMUS^RT kernel. In addition to the kernel, the following LITMUS-related repos must be in the user's `PATH`:
251. [liblitmus][liblitmus]: for real-time executable simulation and task set release
262. [feather-trace-tools][feather-trace-tools]: for recording and parsing overheads and scheduling events
27
28Additional features will be enabled if these repos are present in the `PATH`:
291. [rt-kernelshark][rt-kernelshark]: to record ftrace events for kernelshark visualization
392. sched_trace - to output a file containing scheduling events as 302. sched_trace ([UNC internal][rtunc]): to output a file containing scheduling events as strings
31
32# Details
33Each of these scripts is designed to operate independently of the others. For example, `parse_exps.py` will find any feather trace files resembling `ft-xyz.bin` or `xyz.ft` and print out overhead statistics for the records inside. However, the scripts provide the most features (especially safety) when their results are chained together, like so:
34
35```
50gen_exps.py --> [exps/*] --> run_exps.py --> [run-data/*] --. 36gen_exps.py --> [exps/*] --> run_exps.py --> [run-data/*] --.
51.------------------------------------------------------------' 37.------------------------------------------------------------'
52'--> parse_exps.py --> [parse-data/*] --> plot_exps.py --> [plot-data/*.pdf] 38'--> parse_exps.py --> [parse-data/*] --> plot_exps.py --> [plot-data/*.pdf]
39```
40
411. Create experiments with `gen_exps.py` or some other script.
422. Run experiments using `run_exps.py`, generating binary files in `run-data/`.
433. Parse binary data in `run-data/` using `parse_exps.py`, generating csv files in `parse-data/`.
444. Plot `parse-data` using `plot_exps.py`, generating pdfs in `plot-data/`.
45
46Each of these scripts is described below. `run_exps.py` is described first because the schedule files created by `gen_exps.py` depend on it.
47
53 48
540. Create experiments with gen_exps.py or some other script. 49## run_exps.py
551. Run experiments using run_exps.py, generating binary files in run-data/. 50*Usage*: `run_exps.py [OPTIONS] [SCHED_FILE]... [SCHED_DIR]...`
562. Parse binary data in run-data using parse_exps.py, generating csv 51
57 files in parse-data/. 52where a `SCHED_DIR` resembles:
583. Plot parse-data using plot_exps.py, generating pdfs in plot-data. 53```
59 54SCHED_DIR/
60Each of these scripts will be described. The run_exps.py script is 55 SCHED_FILE
61first because gen_exps.py creates schedule files which depend on run_exps.py. 56 PARAM_FILE
62 57```
63 58
64II. RUN_EXPS 59*Output*: `OUT_DIR/[files]` or `OUT_DIR/SCHED_DIR/[files]` or `OUT_DIR/SCHED_FILE/[files]` depending on input
65Usage: run_exps.py [OPTIONS] [SCHED_FILE]... [SCHED_DIR]... 60
66 where a SCHED_DIR resembles: 61If all features are enabled, these files are:
67 SCHED_DIR/ 62```
68 SCHED_FILE 63OUT_DIR/[SCHED_(FILE|DIR)/]
69 PARAM_FILE 64 trace.slog # LITMUS logging
70 65 st-[1..m].bin # sched_trace data
71Output: OUT_DIR/[files] or OUT_DIR/SCHED_DIR/[files] or 66 ft.bin # feather-trace overhead data
72 OUT_DIR/SCHED_FILE/[files] depending on input 67 trace.dat # ftrace data for kernelshark
73 If all features are enabled, these files are: 68 params.py # Schedule parameters
74 OUT_DIR/[.*/] 69 exec-out.txt # Standard out from schedule processes
75 trace.slog # LITMUS logging 70 exec-err.txt # Standard err from schedule processes
76 st-[1..m].bin # sched_trace data 71```
77 ft.bin # feather-trace overhead data 72
78 trace.dat # ftrace data for kernelshark 73*Defaults*: `SCHED_FILE = sched.py, PARAM_FILE = params.py, DURATION = 30, OUT_DIR = run-data/`
79 params.py # Schedule parameters 74
80 exec-out.txt # Standard out from schedule processes 75This script reads *schedule files* (described below) and executes real-time task systems, recording all overhead, logging, and trace data which is enabled in the system. For example, if trace logging is enabled and rt-kernelshark is found in the path, but feather-trace is disabled (the devices are not present), only trace logs and rt-kernelshark logs will be recorded.
81 exec-err.txt # Standard err ''' 76
82 77When `run_exps.py` is running a schedule file, temporary data is saved in a `tmp` directory in the same directory as the schedule file. When execution completes, this data is moved into a directory under the `run_exps.py` output directory (default: `run-data/`, can be changed with the `-o` option). When multiple schedules are run, each schedule's data is saved in a unique directory under the output directory.
83Defaults: SCHED_FILE = sched.py, PARAM_FILE = params.py, 78
84 DURATION = 30, OUT_DIR = run-data/ 79If a schedule has been run and its data is in the output directory, `run_exps.py` will not re-run the schedule unless the `-f` option is specified. This is useful if your system crashes midway through a set of experiments.
85
86The run_exps.py script reads schedule files and executes real-time
87task systems, recording all overhead, logging, and trace data which is
88enabled in the system. For example, if trace logging is enabled,
89rt-kernelshark is found in the path, but feather-trace is disabled
90(the devices are not present), only trace-logs and kernelshark logs
91will be recorded.
92
93When run_exps.py is running a schedule file, temporary data is saved
94in a 'tmp' directory in the same directory as the schedule file. When
95execution completes, this data is moved into a directory under the
96run_exps.py output directory (default: 'run-data/', can be changed with
97the -o option). When multiple schedules are run, each schedule's data
98is saved in a unique directory under the output directory.
99
100If a schedule has been run and it's data is in the output directory,
101run_exps.py will not re-run the schedule unless the -f option is
102specified. This is useful if your system crashes midway through a set
103of experiments.
104 80
105Schedule files have one of the following two formats: 81Schedule files have one of the following two formats:
106 82
107a) simple format 831. simple format
84```
108 path/to/proc{proc_value} 85 path/to/proc{proc_value}
109 ... 86 ...
110 path/to/proc{proc_value} 87 path/to/proc{proc_value}
111 [real_time_task: default rtspin] task_arguments... 88 [real_time_task: default rtspin] task_arguments...
112 ... 89 ...
113 [real_time_task] task_arguments... 90 [real_time_task] task_arguments...
91```
114 92
115b) python format 932. python format
94```python
116 {'proc':[ 95 {'proc':[
117 ('path/to/proc','proc_value'), 96 ('path/to/proc','proc_value'),
118 ..., 97 ...,
@@ -124,55 +103,67 @@ b) python format
124 ('real_time_task', 'task_arguments') 103 ('real_time_task', 'task_arguments')
125 ] 104 ]
126 } 105 }
106```
127 107
128The following creates a simple 3-task system with utilization 2.0, 108The following creates a simple 3-task system with utilization 2.0, which is then run under the `GSN-EDF` plugin:
129which is then run under the GSN-EDF plugin:
130 109
110```bash
131$ echo "10 20 111$ echo "10 20
13230 40 11230 40
13360 90" > test.sched 11360 90" > test.sched
134$ run_exps.py -s GSN-EDF test.sched 114$ run_exps.py -s GSN-EDF test.sched
135 115[Exp test/test.sched]: Enabling sched_trace
136The following will write a release master using 116...
137/proc/litmus/release_master: 117[Exp test/test.sched]: Switching to GSN-EDF
138 118[Exp test/test.sched]: Starting 3 tracers
119[Exp test/test.sched]: Starting the programs
120[Exp test/test.sched]: Sleeping until tasks are ready for release...
121[Exp test/test.sched]: Releasing 3 tasks
122[Exp test/test.sched]: Waiting for program to finish...
123[Exp test/test.sched]: Saving results in /root/schedules/test/run-data/test.sched
124[Exp test/test.sched]: Stopping tracers
125[Exp test/test.sched]: Switching to Linux scheduler
126[Exp test/test.sched]: Experiment done!
127Experiments run: 1
128 Successful: 1
129 Failed: 0
130 Already Done: 0
131 Invalid Environment: 0
132
133```
134
135The following will write a release master using `/proc/litmus/release_master`:
136
137```bash
139$ echo "release_master{2} 138$ echo "release_master{2}
14010 20" > test.sched && run_exps.py -s GSN-EDF test.sched 13910 20" > test.sched && run_exps.py -s GSN-EDF test.sched
140```
141 141
142A longer form can be used for proc entries not in /proc/litmus: 142A longer form can be used for proc entries not under `/proc/litmus`:
143 143
144```bash
144$ echo "/proc/sys/something{hello}" 145$ echo "/proc/sys/something{hello}"
14510 20" > test.sched 14610 20" > test.sched
147```
146 148
147You can specify your own spin programs to run as well instead of 149You can also specify your own spin programs to run instead of rtspin by putting their name at the beginning of the line. The following example also shows how you can reference files in the same directory as the schedule file on the command line.
148rtspin by putting their name at the beginning of the line.
149 150
151```bash
150$ echo "colorspin -f color1.csv 10 20" > test.sched 152$ echo "colorspin -f color1.csv 10 20" > test.sched
153```
151 154
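Putting these pieces together, one schedule file can mix a proc entry, default rtspin tasks, and a custom spin program. The file below is a hand-written, hypothetical example (not generated by gen_exps.py) that only reuses names already shown in this README:

```shell
# Hand-written simple-format schedule file combining the features shown
# above: a proc entry, two default rtspin tasks, and a custom spin
# program with a data file (colorspin/color1.csv, as in the example).
cat > example.sched <<'EOF'
release_master{2}
10 20
30 40
colorspin -f color1.csv 60 90
EOF
cat example.sched
```

Running `run_exps.py -s GSN-EDF example.sched` on a LITMUS^RT kernel would then write the release master, release the rtspin tasks, and launch colorspin.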
152This example also shows how you can reference files in the same 155You can specify parameters for an experiment in a file instead of on the command line using `params.py` (the `-p` option lets you choose the name of this file if `params.py` is not for you):
153directory as the schedule file on the command line.
154
155You can specify parameters for an experiment in a file instead of on
156the command line using params.py (the -p option lets you choose the
157name of this file if params.py is not for you):
158 156
157```bash
159$ echo "{'scheduler':'GSN-EDF', 'duration':10}" > params.py 158$ echo "{'scheduler':'GSN-EDF', 'duration':10}" > params.py
160$ run_exps.py test.sched 159$ run_exps.py test.sched
160```
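Since `params.py` holds a Python dictionary literal, a quoting mistake only surfaces once an experiment starts. A quick standalone sanity check (not a feature of these scripts; python3 is used here only for the check, while the scripts themselves target Python 2.7) is to parse the file by hand before launching a long batch:

```shell
# Recreate the params.py from the example above.
echo "{'scheduler':'GSN-EDF', 'duration':10}" > params.py

# Parse it as a Python literal, failing fast on quoting mistakes.
python3 - <<'EOF'
import ast
params = ast.literal_eval(open('params.py').read())
assert params == {'scheduler': 'GSN-EDF', 'duration': 10}
print('params.py parses cleanly')
EOF
```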
161 161
162You can also run multiple experiments with a single command, provided 162You can also run multiple experiments with a single command, provided a directory with a schedule file exists for each. By default, the program will look for `sched.py` for the schedule file and `params.py` for the parameter file, but this behavior can be changed using the `-p` and `-c` options.
163a directory with a schedule file exists for each. By default, the
164program will look for sched.py for the schedule file and params.py for
165the parameter file, but this behavior can be changed using the -p and
166-c options.
167
168You can include non-relevant parameters which run_exps.py does not
169understand in params.py. These parameters will be saved with the data
170output by run_exps.py. This is useful for tracking variations in
171system parameters versus experimental results.
172 163
173In the following example, multiple experiments are demonstrated and an 164You can include extra parameters in `params.py` which `run_exps.py` does not itself understand. These parameters will be saved with the data output by `run_exps.py`. This is useful for tracking variations in system parameters versus experimental results. In the following example, multiple experiments are demonstrated and an extra parameter `test-param` is included:
174extra parameter 'test-param' is included:
175 165
166```bash
176$ mkdir test1 167$ mkdir test1
177# The duration will default to 30 and need not be specified 168# The duration will default to 30 and need not be specified
178$ echo "{'scheduler':'C-EDF', 'test-param':1}" > test1/params.py 169$ echo "{'scheduler':'C-EDF', 'test-param':1}" > test1/params.py
@@ -180,31 +171,42 @@ $ echo "10 20" > test1/sched.py
180$ cp -r test1 test2 171$ cp -r test1 test2
181$ echo "{'scheduler':'GSN-EDF', 'test-param':2}"> test2/params.py 172$ echo "{'scheduler':'GSN-EDF', 'test-param':2}"> test2/params.py
182$ run_exps.py test* 173$ run_exps.py test*
174```
183 175
184Finally, you can specify system properties in params.py which the 176Finally, you can specify system properties in `params.py` which the environment must match for the experiment to run. These are useful if you have a large batch of experiments which must be run under different kernels or kernel configurations. The first property is a regular expression for the name of the kernel:
185environment must match for the experiment to run. These are useful if
186you have a large batch of experiments which must be run under
187different kernels. The first property is a regular expression for the
188uname of the system:
189 183
184```bash
190$ uname -r 185$ uname -r
1913.0.0-litmus 1863.0.0-litmus
192$ cp params.py old_params.py 187$ cp params.py old_params.py
193$ echo "{'uname': r'.*linux.*'}" >> params.py 188$ echo "{'uname': r'.*linux.*'}" >> params.py
194# run_exps.py will now complain of an invalid environment for this 189$ run_exps.py -s GSN-EDF test.sched
195experiment 190Invalid environment for experiment 'test.sched'
191Kernel name does not match '.*linux.*'.
192Experiments run: 1
193 Successful: 0
194 Failed: 0
195 Already Done: 0
196 Invalid Environment: 1
196$ cp old_params.py params.py 197$ cp old_params.py params.py
197$ echo "{'uname': r'.*litmus.*'}" >> params.py 198$ echo "{'uname': r'.*litmus.*'}" >> params.py
198# run_exps.py will now succeed 199# run_exps.py will now succeed
200```
199 201
200The second property are kernel configuration options. These assume the 202The second property is kernel configuration options. These assume the configuration is stored at ``/boot/config-`uname -r` ``. You can specify these like so:
201configuration is stored at /boot/config-`uname -r`. You can specify
202these like so:
203 203
204```bash
205# Only executes on ARM systems with the release master enabled
204$ echo "{'config-options':{ 206$ echo "{'config-options':{
205'RELEASE_MASTER' : 'y', 207'RELEASE_MASTER' : 'y',
206'ARM' : 'y'}}" >> params.py 208'ARM' : 'y'}}" >> params.py
207# Only executes on ARM systems with the release master enabled 209```
208 210
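The option check boils down to looking for `CONFIG_<NAME>=<value>` lines in the kernel config file. A self-contained sketch of that mechanism, using a throwaway file in place of the real ``/boot/config-`uname -r` `` (the exact matching logic inside run_exps.py may differ):

```shell
# Throwaway file standing in for /boot/config-$(uname -r).
cat > config-example <<'EOF'
CONFIG_RELEASE_MASTER=y
CONFIG_ARM=y
EOF

# Check each required option the way run_exps.py conceptually does for
# {'RELEASE_MASTER': 'y', 'ARM': 'y'}; a miss means "invalid environment".
for opt in RELEASE_MASTER=y ARM=y; do
    if grep -q "^CONFIG_$opt$" config-example; then
        echo "CONFIG_$opt: present"
    else
        echo "CONFIG_$opt: missing (invalid environment)"
    fi
done
```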
209 211
210III. GEN_EXPS 212III. GEN_EXPS
@@ -505,3 +507,9 @@ However, when a single directory of directories is given, the script
505assumes the experiments are related and can make line styles match in 507assumes the experiments are related and can make line styles match in
506different plots and more effectively parallelize the plotting. 508different plots and more effectively parallelize the plotting.
507 509
510[litmus]: https://github.com/LITMUS-RT/litmus-rt
511[liblitmus]: https://github.com/LITMUS-RT/liblitmus
512[rt-kernelshark]: https://github.com/LITMUS-RT/rt-kernelshark
513[feather-trace-tools]: https://github.com/LITMUS-RT/feather-trace-tools
514[rtunc]: http://www.cs.unc.edu/~anderson/real-time/
515[matplotlib]: http://matplotlib.org/ \ No newline at end of file
diff --git a/run_exps.py b/run_exps.py
index dc15701..6873877 100755
--- a/run_exps.py
+++ b/run_exps.py
@@ -15,12 +15,11 @@ from run.experiment import Experiment,ExperimentDone
15from run.proc_entry import ProcEntry 15from run.proc_entry import ProcEntry
16 16
17class InvalidKernel(Exception): 17class InvalidKernel(Exception):
18 def __init__(self, kernel, wanted): 18 def __init__(self, kernel):
19 self.kernel = kernel 19 self.kernel = kernel
20 self.wanted = wanted
21 20
22 def __str__(self): 21 def __str__(self):
23 return "Kernel '%s' does not match '%s'." % (self.kernel, self.wanted) 22 return "Kernel name does not match '%s'." % self.kernel
24 23
25ConfigResult = namedtuple('ConfigResult', ['param', 'wanted', 'actual']) 24ConfigResult = namedtuple('ConfigResult', ['param', 'wanted', 'actual'])
26class InvalidConfig(Exception): 25class InvalidConfig(Exception):
@@ -119,7 +118,7 @@ def load_experiment(sched_file, scheduler, duration, param_file, out_dir):
119 exp_name = os.path.split(dir_name)[1] + "/" + fname 118 exp_name = os.path.split(dir_name)[1] + "/" + fname
120 119
121 params = {} 120 params = {}
122 kernel = "" 121 kernel = copts = ""
123 122
124 param_file = param_file or \ 123 param_file = param_file or \
125 "%s/%s" % (dir_name, conf.DEFAULTS['params_file']) 124 "%s/%s" % (dir_name, conf.DEFAULTS['params_file'])
@@ -259,7 +258,7 @@ def main():
259 print("Experiment '%s' already completed at '%s'" % (exp, out_base)) 258 print("Experiment '%s' already completed at '%s'" % (exp, out_base))
260 except (InvalidKernel, InvalidConfig) as e: 259 except (InvalidKernel, InvalidConfig) as e:
261 invalid += 1 260 invalid += 1
262 print("Invalid environment for experiment '%s'") 261 print("Invalid environment for experiment '%s'" % exp)
263 print(e) 262 print(e)
264 except: 263 except:
265 print("Failed experiment %s" % exp) 264 print("Failed experiment %s" % exp)
@@ -273,7 +272,7 @@ def main():
273 print(" Successful:\t\t%d" % succ) 272 print(" Successful:\t\t%d" % succ)
274 print(" Failed:\t\t%d" % failed) 273 print(" Failed:\t\t%d" % failed)
275 print(" Already Done:\t\t%d" % done) 274 print(" Already Done:\t\t%d" % done)
276 print(" Invalid environment:\t\t%d" % invalid) 275 print(" Invalid Environment:\t%d" % invalid)
277 276
278 277
279if __name__ == '__main__': 278if __name__ == '__main__':