-rw-r--r--  README.md  34
1 file changed, 17 insertions, 17 deletions
diff --git a/README.md b/README.md
index a9975d9..dd6b417 100644
--- a/README.md
+++ b/README.md
@@ -191,7 +191,7 @@ $ echo "{'uname': r'.*litmus.*'}" > params.py
 # run_exps.py will now succeed
 ```
 
-The second property is kernel configuration options. These assume the configuration is stored at `/boot/config-$(uname -r)`. You can specify these in `params.py` like so:
+The second property is kernel configuration options. These assume the configuration is stored at `/boot/config-$(uname -r)`. You can specify these in `params.py`. In the following example, the experiment will only run on an ARM system with the release master enabled:
 
 ```python
 {'config-options':{
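For reference, a complete `config-options` entry of the kind the new paragraph describes might look like the sketch below. The option names (`RELEASE_MASTER`, `ARM`) are assumptions inferred from the description of "an ARM system with the release master enabled"; check them against your `/boot/config-$(uname -r)`:

```python
# Hypothetical params.py sketch: run_exps.py would only run this experiment
# on a kernel whose config sets both options to 'y' (option names assumed).
{'config-options': {
    'RELEASE_MASTER': 'y',
    'ARM': 'y',
}}
```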
@@ -204,13 +204,13 @@ The second property is kernel configuration options. These assume the configurat
 ## gen_exps.py
 *Usage*: `gen_exps.py [options] [files...] [generators...] [param=val[,val]...]`
 
-*Output*: OUT_DIR/EXP_DIRS which each contain sched.py and params.py
+*Output*: `OUT_DIR/EXP_DIRS` which each contain `sched.py` and `params.py`
 
 *Defaults*: `generators = G-EDF P-EDF C-EDF`, `OUT_DIR = exps/`
 
 This script uses *generators*, one for each LITMUS scheduler supported, which each have different properties which can be varied to generate different types of schedules. Each of these properties has a default value which can be modified on the command line for quick and easy experiment generation.
 
-This script as written should be used to create debugging task sets, but not for creating task sets for experiments shown in papers. That is because the safety features of `run_exps.py` described above (uname, config-options) are not used here. If you are creating experiments for a paper, you should create your own generator which outputs values for the `config-options` required for your plugin so that you cannot ruin your experiments at run time. Trust me, you will.
+This script as written should be used to create debugging task sets, but not for creating task sets for experiments shown in papers. That is because the safety features of `run_exps.py` described above (`uname`, `config-options`) are not used here. If you are creating experiments for a paper, you should create your own generator which outputs values for the `config-options` required for your plugin so that you cannot ruin your experiments at run time. Trust me, you will.
 
 The `-l` option lists the supported generators which can be specified:
 
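As an illustration of the `[generators...] [param=val[,val]...]` syntax in the usage line above, a quick debugging run might look like the sketch below; the property name `tasks` is an assumption (generators differ in which properties they expose), and output lands under the default `exps/` directory:

```bash
# Hypothetical invocation: generate G-EDF and P-EDF experiments, sweeping an
# assumed 'tasks' property over two values; results go under exps/ by default.
gen_exps.py G-EDF P-EDF tasks=8,16
```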
@@ -289,22 +289,22 @@ sched=PSN-EDF_trial=0/ sched=PSN-EDF_trial=1/ sched=PSN-EDF_trial=2/
 sched=PSN-EDF_trial=3/ sched=PSN-EDF_trial=4/
 ```
 
-IV. PARSE_EXPS
+## parse_exps.py
 *Usage*: `parse_exps.py [options] [data_dir1] [data_dir2]...`
 
-where data_dirs contain feather-trace and sched-trace data, e.g. `ft.bin`, `mysched.ft`, or `st-*.bin`.
+where each `data_dir` contains feather-trace and sched-trace data, e.g. `ft.bin`, `mysched.ft`, or `st-*.bin`.
 
 *Output*: print out all parsed data or `OUT_FILE` where `OUT_FILE` is a python map of the data or `OUT_DIR/[FIELD]*/[PARAM]/[TYPE]/[TYPE]/[LINE].csv`, depending on input.
 
-The goal is to create csv files which record how varying PARAM changes the value of FIELD. Only PARAMs which vary are considered.
+The goal is to create csv files which record how varying `PARAM` changes the value of `FIELD`. Only `PARAM`s which vary are considered.
 
-FIELD is a parsed value, e.g. 'RELEASE' overhead or 'miss-ratio'. `PARAM` is a parameter which we are going to vary, e.g. 'tasks' A single `LINE` is created for every configuration of parameters other than `PARAM`.
+`FIELD` is a parsed value, e.g. 'RELEASE' overhead or 'miss-ratio'. `PARAM` is a parameter which we are going to vary, e.g. 'tasks'. A single `LINE` is created for every configuration of parameters other than `PARAM`.
 
-`TYPE is the type of measurement, i.e. Max, Min, Avg, or Var[iance]. The two types are used to differentiate between measurements across tasks in a single taskset, and measurements across all tasksets. E.g. `miss-ratio/*/Max/Avg` is the maximum of all the average miss ratios for each task set, while `miss-ratio/*/Avg/Max` is the average of the maximum miss ratios for each task set.
+`TYPE` is the statistic of the measurement, i.e. Max, Min, Avg, or Var[iance]. The two types are used to differentiate between measurements across tasks in a single taskset, and measurements across all tasksets. E.g. `miss-ratio/*/Max/Avg` is the maximum of all the average miss ratios for each task set, while `miss-ratio/*/Avg/Max` is the average of the maximum miss ratios for each task set.
 
 *Defaults*: `OUT_DIR, OUT_FILE = parse-data`, `data_dir1 = .`
 
-This script reads a directory or directories, parses the binary files inside for feather-trace or sched-trace data, then summarizes and organizes the results for output. The output can be to the console, to a python map, or to a directory tree of csvs (the default, ish). The python map (using `-m`) can be used for schedulability tests. The directory tree can be used to look at how changing parameters affects certain measurements.
+This script reads a directory or directories, parses the binary files inside for feather-trace or sched-trace data, then summarizes and organizes the results for output. The output can be to the console, to a python map, or to a directory tree of csvs (default). The python map (using `-m`) can be used for schedulability tests. The directory tree can be used to look at how changing parameters affects certain measurements.
 
 The script will use half the current computers CPUs to process data.
 
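As a hypothetical illustration of the `OUT_DIR/[FIELD]/[PARAM]/[TYPE]/[TYPE]/[LINE].csv` layout this hunk describes, varying a `tasks` parameter could yield paths such as the following; the `scheduler=GSN-EDF` line name is an assumption:

```
parse-data/miss-ratio/tasks/Max/Avg/scheduler=GSN-EDF.csv   # max across tasksets of each taskset's average miss ratio
parse-data/miss-ratio/tasks/Avg/Max/scheduler=GSN-EDF.csv   # average across tasksets of each taskset's maximum miss ratio
```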
@@ -313,13 +313,13 @@ In the following example, too little data was found to create csv files, so the
 ```bash
 $ ls run-data/
 taskset_scheduler=C-FL-split-L3_host=ludwig_n=10_idx=05_split=randsplit.ft
-$ parse_exps.py run-data/
+$ parse_exps.py
 Loading experiments...
 Parsing data...
  0.00%
 Writing result...
 Too little data to make csv files.
-<ExpPoint-/home/hermanjl/tmp/run-data>
+<ExpPoint-/home/hermanjl/tmp>
   CXS: Avg: 5.053 Max: 59.925 Min: 0.241
   SCHED: Avg: 4.410 Max: 39.350 Min: 0.357
   TICK: Avg: 1.812 Max: 21.380 Min: 0.241
@@ -369,21 +369,21 @@ The second command will also have run faster than the first. This is because `pa
 All output from the *feather-trace-tools* programs used to parse data is stored in the `tmp/` directories created in the input directories. If the *sched_trace* repo is found in the users `PATH`, `st_show` will be used to create a human-readable version of the sched-trace data which will also be stored there.
 
 ## plot_exps.py
-*Usage*: `plot_exps.py [options] [csv_dir]...`
+*Usage*: `plot_exps.py [OPTIONS] [CSV_DIR]...`
 
-where a csv dir is a directory or directory of directories (and so on) containing csvs, like:
+where a `CSV_DIR` is a directory or directory of directories (and so on) containing csvs, like:
 ```
-csv_dir/[subdirs/...]
+CSV_DIR/[SUBDIR/...]
     line1.csv
     line2.csv
     line3.csv
 ```
 
-*Outputs*: `OUT_DIR/[csv_dir/]*[plot]*.pdf`
+*Outputs*: `OUT_DIR/[CSV_DIR/]*[PLOT]*.pdf`
 
-where a single plot exists for each directory of csvs, with a line for for each csv file in that directory. If only a single csv_dir is specified, all plots are placed directly under `OUT_DIR`.
+where a single plot exists for each directory of csvs, with a line for each csv file in that directory. If only a single `CSV_DIR` is specified, all plots are placed directly under `OUT_DIR`.
 
-*Defaults*: `OUT_DIR = plot-data/`, `csv_dir = .`
+*Defaults*: `OUT_DIR = plot-data/`, `CSV_DIR = .`
 
 This script takes directories of csvs (or directories formatted as specified below) and creates a pdf plot of each csv directory found. A line is created for each .csv file contained in a plot. [Matplotlib][matplotlib] is used to do the plotting. The script will use half the current computers CPUs to process data.
 
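Taken together with `parse_exps.py` above, a minimal end-to-end pass over trace data might look like the sketch below, relying only on the documented defaults (`parse-data/` and `plot-data/`); the `run-data/` directory name is just the example used earlier:

```bash
# Hypothetical pipeline: parse feather-trace/sched-trace data into csvs, then
# plot every csv directory that parsing produced.
parse_exps.py run-data/     # csvs are written under parse-data/ by default
plot_exps.py parse-data/    # one pdf per csv directory, under plot-data/ by default
```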