diff --git a/README.md b/README.md (index `57d9afa..a9975d9`): 1 file changed, 100 insertions(+), 163 deletions(-)
```
OUT_DIR/[SCHED_(FILE|DIR)/]
    ...
    exec-err.txt            # Standard err
```

*Defaults*: `SCHED_FILE = sched.py`, `PARAM_FILE = params.py`, `DURATION = 30`, `OUT_DIR = run-data/`

This script reads *schedule files* (described below) and executes real-time task systems, recording all overhead, logging, and trace data that is enabled in the system. For example, if trace logging is enabled and rt-kernelshark is found in the path, but feather-trace is disabled (the devices are not present), only trace logs and rt-kernelshark logs will be recorded.
Schedule files have one of the following two formats:

```
...
[real_time_task] task_arguments...
```

2. python format

```python
{'proc':[
    ('path/to/proc','proc_value'),
    ...
 ],
 ...}
```
A longer form can be used for proc entries not under `/proc/litmus`:

```bash
$ echo "/proc/sys/something{hello}
10 20" > test.sched
```
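
The short and long proc-entry forms above share an `entry{value}` shape. As a rough single-line illustration (this is not the actual `run_exps.py` parser; `parse_proc_entry` and the bare-name-to-`/proc/litmus` rule are assumptions), such entries could be parsed like this:

```python
import re

# Hypothetical parser for the proc-entry syntax shown above, e.g.
# "release_master{2}" or "/proc/sys/something{hello}". Names without
# a leading '/' are assumed to live under /proc/litmus. Multi-line
# values, as in the longer-form example, are not handled here.
ENTRY_RE = re.compile(r"^(?P<name>[\w/.-]+)\{(?P<value>[^}]*)\}$")

def parse_proc_entry(line):
    """Return a (path, value) pair for one proc-entry line."""
    match = ENTRY_RE.match(line.strip())
    if not match:
        raise ValueError("not a proc entry: %r" % line)
    name, value = match.group("name"), match.group("value")
    path = name if name.startswith("/") else "/proc/litmus/" + name
    return path, value

print(parse_proc_entry("release_master{2}"))
print(parse_proc_entry("/proc/sys/something{hello}"))
```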

You can specify your own spin programs to run as well, instead of rtspin, by putting them in your `PATH`:

```bash
$ echo "colorspin -f color1.csv 10 20" > test.sched
```

You can specify parameters for an experiment in a file instead of on the command line using params.py (the `-p` option lets you choose the name of this file if `params.py` is not for you):

```bash
$ echo "{'scheduler':'GSN-EDF', 'duration':10}" > params.py
```
You can include non-relevant parameters which `run_exps.py` does not understand:

```bash
$ mkdir test1
# The duration will default to 30 and need not be specified
$ echo "{'scheduler':'C-EDF', 'test-param':1}" > test1/params.py
$ echo "10 20" > test1/sched.py
$ cp -r test1 test2
$ echo "{'scheduler':'GSN-EDF', 'test-param':2}" > test2/params.py
$ run_exps.py test*
```
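
A params.py file like the ones above is just a python dictionary literal. As a minimal sketch of how it might be loaded (the `read_params` helper and `DEFAULTS` map are illustrative names, not the real `run_exps.py` internals):

```python
import ast

# DEFAULTS holds values used when a parameter is absent; the README
# states that duration defaults to 30.
DEFAULTS = {'duration': 30}

def read_params(text):
    """Parse a params.py dict literal and fill in default values."""
    params = dict(DEFAULTS)
    # literal_eval safely evaluates the dict without executing code.
    params.update(ast.literal_eval(text))
    return params

params = read_params("{'scheduler':'C-EDF', 'test-param':1}")
print(params['scheduler'], params['duration'])
```

Unknown keys such as `test-param` simply ride along in the dict, which is consistent with non-relevant parameters being allowed.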

Finally, you can specify system properties in `params.py` which the environment must match for the experiment to run. These are useful if you have a large batch of experiments which must be run under different kernels or kernel configurations. The first property is a regular expression for the name of the kernel:

```bash
$ uname -r
3.0.0-litmus
$ echo "{'uname': r'.*linux.*'}" > params.py
$ run_exps.py -s GSN-EDF test.sched
Invalid environment for experiment 'test.sched'
Kernel name does not match '.*linux.*'.
Experiments run: 1
  Successful: 0
  Failed: 0
  Already Done: 0
  Invalid Environment: 1
$ echo "{'uname': r'.*litmus.*'}" > params.py
# run_exps.py will now succeed
```

The second property is kernel configuration options. These assume the configuration is stored at `/boot/config-$(uname -r)`. You can specify these in `params.py` like so:

```python
# Only executes on ARM systems with the release master enabled
{'config-options':{
    'RELEASE_MASTER' : 'y',
    'ARM' : 'y'}
}
```
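
Put together, the environment check amounts to a regex match on the kernel name plus a lookup of `CONFIG_*` lines. A hedged sketch of both checks, with hypothetical helper names and a made-up config snippet (this is not `run_exps.py`'s actual code):

```python
import re

def uname_ok(params, kernel_name):
    """Check the kernel name against the 'uname' regex, if present."""
    pattern = params.get('uname')
    return pattern is None or re.search(pattern, kernel_name) is not None

def config_ok(params, config_text):
    """Check required options against /boot/config-* style contents."""
    required = params.get('config-options', {})
    lines = set(config_text.splitlines())
    return all("CONFIG_%s=%s" % (opt, val) in lines
               for opt, val in required.items())

params = {'uname': r'.*litmus.*',
          'config-options': {'RELEASE_MASTER': 'y', 'ARM': 'y'}}
# Invented stand-in for a real /boot/config-$(uname -r) file:
config = "CONFIG_ARM=y\nCONFIG_RELEASE_MASTER=y\nCONFIG_SMP=y"
print(uname_ok(params, "3.0.0-litmus"), config_ok(params, config))
```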

## gen_exps.py
*Usage*: `gen_exps.py [options] [files...] [generators...] [param=val[,val]...]`

*Output*: `OUT_DIR/EXP_DIRS`, which each contain `sched.py` and `params.py`

*Defaults*: `generators = G-EDF P-EDF C-EDF`, `OUT_DIR = exps/`

This script uses *generators*, one for each supported LITMUS scheduler, each of which has different properties that can be varied to generate different types of schedules. Each of these properties has a default value which can be modified on the command line for quick and easy experiment generation.

This script as written should be used to create debugging task sets, but not task sets for experiments shown in papers. That is because the safety features of `run_exps.py` described above (`uname`, `config-options`) are not used here. If you are creating experiments for a paper, you should create your own generator which outputs values for the `config-options` required by your plugin so that you cannot ruin your experiments at run time. Trust me, you will.

The `-l` option lists the supported generators:

```bash
$ gen_exps.py -l
G-EDF, P-EDF, C-EDF
```

The `-d` option describes the properties of a generator (or generators) and their default values. Note that some of these defaults vary depending on the system the script is run on; for example, the `cpus` parameter defaults to the number of CPUs on the current system, in this example 24.

```bash
$ gen_exps.py -d G-EDF,P-EDF
Generator GSN-EDF:
  num_tasks -- Number of tasks per experiment.
    ...
Generator PSN-EDF:
    ...
    Default: [24]
    Allowed: <type 'int'>
    ....
```

You create experiments by specifying a generator. The following will create 4 schedules, with 24, 48, 72, and 96 tasks, because the default value of `num_tasks` is an array of these values (see above).

```bash
$ gen_exps.py P-EDF
$ ls exps/
sched=PSN-EDF_num-tasks=24/  sched=PSN-EDF_num-tasks=48/
sched=PSN-EDF_num-tasks=72/  sched=PSN-EDF_num-tasks=96/
```

You can modify the default using a single value (the `-f` option deletes previous experiments in the output directory, which defaults to `exps/` and can be changed with `-o`):

```bash
$ gen_exps.py -f P-EDF num_tasks=24
$ ls exps/
sched=PSN-EDF_num-tasks=24/
```

Or with an array of values, specified as a comma-separated list:

```bash
$ gen_exps.py -f num_tasks=`seq -s, 24 2 30` P-EDF
sched=PSN-EDF_num-tasks=24/  sched=PSN-EDF_num-tasks=26/
sched=PSN-EDF_num-tasks=28/  sched=PSN-EDF_num-tasks=30/
```

The generator will create a different directory for each possible configuration of the parameters. Each parameter which is varied is included in the name of the schedule directory. For example, to vary the number of CPUs but not the number of tasks:

```bash
$ gen_exps.py -f num_tasks=24 cpus=3,6 P-EDF
$ ls exps
sched=PSN-EDF_cpus=3/  sched=PSN-EDF_cpus=6/
```
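
The naming scheme can be pictured as a cross product over the varying parameters. The sketch below reproduces the `sched=..._param=value` pattern seen above; `build_dir_names` is a hypothetical helper, not the script's real API:

```python
from itertools import product

def build_dir_names(params):
    """params maps a name to a list of values; returns one directory
    name per combination, naming only the parameters that vary."""
    varying = sorted(k for k, v in params.items() if len(v) > 1)
    fixed = {k: v[0] for k, v in params.items() if len(v) == 1}
    names = []
    for combo in product(*(params[k] for k in varying)):
        name = "sched=%s" % fixed.get('scheduler', '?')
        suffix = "_".join("%s=%s" % (k, v)
                          for k, v in zip(varying, combo))
        if suffix:
            name += "_" + suffix
        names.append(name)
    return sorted(names)

print(build_dir_names({'scheduler': ['PSN-EDF'],
                       'num_tasks': [24], 'cpus': [3, 6]}))
```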

The values of non-varying parameters are still saved in `params.py`. Continuing the example above:

```bash
$ cat exps/sched\=PSN-EDF_cpus\=3/params.py
{'periods': 'harmonic', 'release_master': False, 'duration': 30,
 'utils': 'uni-medium', 'scheduler': 'PSN-EDF', 'cpus': 3}
```

You can also have multiple schedules generated with the same configuration using the `-n` option:

```bash
$ gen_exps.py -f num_tasks=24 -n 5 P-EDF
$ ls exps/
sched=PSN-EDF_trial=0/  sched=PSN-EDF_trial=1/  sched=PSN-EDF_trial=2/
sched=PSN-EDF_trial=3/  sched=PSN-EDF_trial=4/
```

## parse_exps.py
*Usage*: `parse_exps.py [options] [data_dir1] [data_dir2]...`

where data_dirs contain feather-trace and sched-trace data, e.g. `ft.bin`, `mysched.ft`, or `st-*.bin`.

*Output*: all parsed data printed to the console, or `OUT_FILE`, where `OUT_FILE` is a python map of the data, or `OUT_DIR/[FIELD]*/[PARAM]/[TYPE]/[TYPE]/[LINE].csv`, depending on input.

The goal is to create csv files which record how varying `PARAM` changes the value of `FIELD`. Only `PARAM`s which vary are considered.

`FIELD` is a parsed value, e.g. 'RELEASE' overhead or 'miss-ratio'. `PARAM` is a parameter which we are going to vary, e.g. 'tasks'. A single `LINE` is created for every configuration of parameters other than `PARAM`.

`TYPE` is the type of measurement, i.e. Max, Min, Avg, or Var[iance]. The two `TYPE` levels in the output path differentiate between measurements across tasks in a single task set and measurements across all task sets. E.g. `miss-ratio/*/Max/Avg` is the maximum of all the average miss ratios for each task set, while `miss-ratio/*/Avg/Max` is the average of the maximum miss ratios for each task set.

*Defaults*: `OUT_DIR, OUT_FILE = parse-data`, `data_dir1 = .`

This script reads a directory or directories, parses the binary files inside for feather-trace or sched-trace data, then summarizes and organizes the results for output. The output can go to the console, to a python map, or to a directory tree of csvs (the default when enough data is available). The python map (using `-m`) can be used for schedulability tests. The directory tree can be used to look at how changing parameters affects certain measurements.

The script will use half of the current computer's CPUs to process data.

In the following example, too little data was found to create csv files, so the data is output to the console even though the user did not specify the `-v` option. This use is the easiest for quick overhead evaluation and debugging. Note that for overhead measurements like these, `parse_exps.py` will use the `clock-frequency` parameter saved in a `params.py` file by `run_exps.py` to calculate overhead measurements. If a param file is not present, as in this case, the current CPU's frequency will be used.

```bash
$ ls run-data/
taskset_scheduler=C-FL-split-L3_host=ludwig_n=10_idx=05_split=randsplit.ft
$ parse_exps.py run-data/
...
Too little data to make csv files.
  CXS:   Avg: 5.053   Max: 59.925   Min: 0.241
  SCHED: Avg: 4.410   Max: 39.350   Min: 0.357
  TICK:  Avg: 1.812   Max: 21.380   Min: 0.241
```

In the next example, because the value of num-tasks varies, csvs can be created. The varying parameters used to create the csvs were found by reading the `params.py` files under each `run-data` subdirectory.

```bash
$ ls run-data/
sched=C-EDF_num-tasks=4/   sched=GSN-EDF_num-tasks=4/
sched=C-EDF_num-tasks=8/   sched=GSN-EDF_num-tasks=8/
sched=C-EDF_num-tasks=16/  sched=GSN-EDF_num-tasks=16/
$ parse_exps.py run-data/*
$ ls parse-data/
avg-block/ avg-tard/ max-block/ max-tard/ miss-ratio/
```
377 340
378The varying parameters were found by reading the params.py files under 341You can use the `-v` option to print out the values measured even when csvs could be created.
379each run-data subdirectory.
380
381You can use the -v option to print out the values measured even when
382csvs could be created.

You can use the `-i` option to ignore variations in a certain parameter (or parameters, if a comma-separated list is given). In the following example, the user has decided the parameter `option` does not matter after viewing output. Note that the `trial` parameter, used by `gen_exps.py` to create multiple schedules with the same configuration, is always ignored.
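
The effect of `-i` on which parameters end up distinguishing csv lines can be sketched as follows (`varying_params` is an illustrative helper, not part of `parse_exps.py`):

```python
def varying_params(param_dicts, ignored=()):
    """Return the sorted parameter names whose values differ across
    experiments, skipping 'trial' and any explicitly ignored names."""
    skip = set(ignored) | {'trial'}   # 'trial' is always ignored
    keys = set().union(*param_dicts)
    return sorted(k for k in keys - skip
                  if len({d.get(k) for d in param_dicts}) > 1)

# Invented params.py contents matching the directory listing below:
exps = [{'scheduler': 'C-EDF', 'num-tasks': 4, 'option': 1},
        {'scheduler': 'C-EDF', 'num-tasks': 4, 'option': 2},
        {'scheduler': 'C-EDF', 'num-tasks': 8, 'option': 1},
        {'scheduler': 'C-EDF', 'num-tasks': 8, 'option': 2}]
print(varying_params(exps))                    # both parameters vary
print(varying_params(exps, ignored=['option']))  # as with -i option
```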

```bash
$ ls run-data/
sched=C-EDF_num-tasks=4_option=1/  sched=C-EDF_num-tasks=4_option=2/
sched=C-EDF_num-tasks=8_option=1/  sched=C-EDF_num-tasks=8_option=2/
...
line.csv
  4 .2
  8 .3
```

The second command will also have run faster than the first. This is because `parse_exps.py` saves the data it parses in `tmp/` directories before it attempts to sort it into csvs. Parsing takes far longer than sorting, so this saves a lot of time. The `-f` flag can be used to re-parse files and overwrite this saved data.

All output from the *feather-trace-tools* programs used to parse data is stored in the `tmp/` directories created in the input directories. If the *sched_trace* repo is found in the user's `PATH`, `st_show` will be used to create a human-readable version of the sched-trace data, which will also be stored there.

## plot_exps.py
*Usage*: `plot_exps.py [options] [csv_dir]...`

where a csv dir is a directory, or a directory of directories (and so on), containing csvs, like:

```
csv_dir/[subdirs/...]
    line1.csv
    line2.csv
    line3.csv
```

*Outputs*: `OUT_DIR/[csv_dir/]*[plot]*.pdf`

where a single plot exists for each directory of csvs, with a line for each csv file in that directory. If only a single csv_dir is specified, all plots are placed directly under `OUT_DIR`.

*Defaults*: `OUT_DIR = plot-data/`, `csv_dir = .`

This script takes directories of csvs (or directories formatted as specified below) and creates a pdf plot for each csv directory found. A line is created for each .csv file contained in a plot. [Matplotlib][matplotlib] is used to do the plotting. The script will use half of the current computer's CPUs to process data.

If the csv filenames are formatted like `param=value_param2=value2.csv`, the variation of these parameters will be used to color the lines in the most readable way. For instance, if there are three parameters, variations in one parameter will change line color, another the line style (dashes/dots/etc), and a third the line markers (triangles/circles/etc).
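
The channel assignment can be pictured as follows (a sketch: the style codes mirror matplotlib conventions, but `style_for` is a hypothetical helper, not the script's actual logic):

```python
# Each varying filename parameter gets its own visual channel:
COLORS  = ['red', 'blue', 'green']   # first parameter
DASHES  = ['-', '--', ':']           # second parameter
MARKERS = ['^', 'o', 's']            # third parameter

def style_for(param_indices):
    """param_indices gives, per varying parameter, which of its
    values this csv file uses; returns a (color, dash, marker) tuple."""
    channels = [COLORS, DASHES, MARKERS]
    return tuple(channel[i % len(channel)]
                 for channel, i in zip(channels, param_indices))

# Two lines differing only in the first parameter differ only in color:
print(style_for([0, 1, 1]))
print(style_for([1, 1, 1]))
```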

If a directory of directories is passed in, the script will assume the top level directory is the measured value and the next level is the variable, i.e. `value/variable/[..../]line.csv`, and will put a title on the plot of "Value by variable (...)". Otherwise, the name of the top level directory will be the title, like "Value".

A directory with some lines:

```bash
$ ls
line1.csv  line2.csv
$ plot_exps.py
$ ls plot-data/
plot.pdf
```

A directory with a few subdirectories:

```bash
$ ls test/
apples/  oranges/
$ ls test/apples/
line1.csv  line2.csv
$ plot_exps.py test/
$ ls plot-data/
apples.pdf  oranges.pdf
```

A directory with many subdirectories:

```bash
$ ls parse-data
avg-block/ avg-tard/ max-block/ max-tard/ miss-ratio/
$ ls parse-data/avg-block/tasks/Avg/Avg
...
avg-block_tasks_Max_Avg.pdf avg-block_tasks_Max_Max.pdf avg-block_tasks_Max_Min.pdf
avg-block_tasks_Min_Avg.pdf avg-block_tasks_Min_Max.pdf avg-block_tasks_Min_Min.pdf
avg-block_tasks_Var_Avg.pdf avg-block_tasks_Var_Max.pdf avg-block_tasks_Var_Min.pdf
.......
```

If you run the previous example directly on the subdirectories, subdirectories will be created in the output:

```bash
$ plot_exps.py parse-data/*
$ ls plot-data/
avg-block/ max-tard/ avg-tard/ miss-ratio/ max-block/
...
tasks_Avg_Avg.pdf tasks_Avg_Min.pdf tasks_Max_Max.pdf
tasks_Min_Avg.pdf tasks_Min_Min.pdf tasks_Var_Max.pdf
tasks_Avg_Max.pdf tasks_Max_Avg.pdf tasks_Max_Min.pdf
tasks_Min_Max.pdf tasks_Var_Avg.pdf tasks_Var_Min.pdf
```

However, when a single directory of directories is given, the script assumes the experiments are related, and can make line styles match across different plots and more effectively parallelize the plotting.

510[litmus]: https://github.com/LITMUS-RT/litmus-rt 447[litmus]: https://github.com/LITMUS-RT/litmus-rt
511[liblitmus]: https://github.com/LITMUS-RT/liblitmus 448[liblitmus]: https://github.com/LITMUS-RT/liblitmus