 README.md                    |  26
 common.py                    |  17
 config/config.py             |  13
 gen/edf_generators.py        |  24
 gen/generator.py             |  24
 gen/mc_generators.py         |  82
 gen_exps.py                  |  11
 parse/point.py               |   4
 parse/sched.py               | 354
 parse/tuple_table.py         |   6
 parse_exps.py                | 199
 plot_exps.py                 |  21
 run/executable/executable.py |   8
 run/experiment.py            | 185
 run/litmus_util.py           |  13
 run/tracer.py                |   4
 run_exps.py                  | 218
 17 files changed, 759 insertions, 450 deletions
diff --git a/README.md b/README.md
index 6200277..b074aa5 100644
--- a/README.md
+++ b/README.md
@@ -45,7 +45,7 @@ gen_exps.py --> [exps/*] --> run_exps.py --> [run-data/*] --.
 3. Parse binary data in `run-data/` using `parse_exps.py`, generating csv files in `parse-data/`.
 4. Plot `parse-data` using `plot_exps.py`, generating pdfs in `plot-data/`.
 
-Each of these scripts will be described. The `run_exps.py` script is first because `gen_exps.py` creates schedule files which depend on `run_exps.py`.
+Each of these scripts will be described. The `run_exps.py` script is first because `gen_exps.py` creates schedule files that depend on `run_exps.py`.
 
 
 ## run_exps.py
@@ -74,7 +74,7 @@ OUT_DIR/[SCHED_(FILE|DIR)/]
 
 *Defaults*: `SCHED_FILE = sched.py`, `PARAM_FILE = params.py`, `DURATION = 30`, `OUT_DIR = run-data/`
 
-This script reads *schedule files* (described below) and executes real-time task systems, recording all overhead, logging, and trace data which is enabled in the system (unless a specific set of tracers is specified in the parameter file, see below). For example, if trace logging is enabled, rt-kernelshark is found in the path, but feather-trace is disabled (the devices are not present), only trace logs and rt-kernelshark logs will be recorded.
+This script reads *schedule files* (described below) and executes real-time task systems, recording all overhead, logging, and trace data that is enabled in the system (unless a specific set of tracers is specified in the parameter file, see below). For example, if trace logging is enabled, rt-kernelshark is found in the path, but feather-trace is disabled (the devices are not present), only trace logs and rt-kernelshark logs will be recorded.
 
 When `run_exps.py` is running a schedule file, temporary data is saved in a `tmp` directory in the same directory as the schedule file. When execution completes, this data is moved into a directory under the `run_exps.py` output directory (default: `run-data/`, can be changed with the `-o` option). When multiple schedules are run, each schedule's data is saved in a unique directory under the output directory.
 
@@ -192,7 +192,7 @@ $ cat post-out.txt
 Experiment ends!
 ```
 
-Finally, you can specify system properties in `params.py` which the environment must match for the experiment to run. These are useful if you have a large batch of experiments which must be run under different kernels or kernel configurations. The first property is a regular expression for the name of the kernel:
+Finally, you can specify system properties in `params.py`, which the environment must match for the experiment to run. These are useful if you have a large batch of experiments that must be run under different kernels or kernel configurations. The first property is a regular expression for the name of the kernel:
 
 ```bash
 $ uname -r
@@ -219,7 +219,7 @@ The second property is kernel configuration options. These assume the configurat
 }
 ```
 
-The third property is required tracers. The `tracers` property lets the user specify only those tracers they want to run with an experiment, as opposed to starting every available tracer (the default). If any of these specified tracers cannot be enabled, e.g. the kernel was not compiled with feather-trace support, the experiment will not run. The following example gives an experiment which will not run unless all four tracers are enabled:
+The third property is required tracers. The `tracers` property lets the user specify only those tracers they want to run with an experiment, as opposed to starting every available tracer (the default). If any of these specified tracers cannot be enabled, e.g. the kernel was not compiled with feather-trace support, the experiment will not run. The following example gives an experiment that will not run unless all four tracers are enabled:
 ```python
 {'tracers':['kernelshark', 'log', 'sched', 'overhead']}
 ```
@@ -227,15 +227,15 @@ The third property is required tracers. The `tracers` property lets the user spe
 ## gen_exps.py
 *Usage*: `gen_exps.py [options] [files...] [generators...] [param=val[,val]...]`
 
-*Output*: `OUT_DIR/EXP_DIRS` which each contain `sched.py` and `params.py`
+*Output*: `OUT_DIR/EXP_DIRS` that each contain `sched.py` and `params.py`
 
 *Defaults*: `generators = G-EDF P-EDF C-EDF`, `OUT_DIR = exps/`
 
-This script uses *generators*, one for each LITMUS scheduler supported, which each have different properties which can be varied to generate different types of schedules. Each of these properties has a default value which can be modified on the command line for quick and easy experiment generation.
+This script uses *generators*, one for each LITMUS scheduler supported, which each have different properties that can be varied to generate different types of schedules. Each of these properties has a default value that can be modified on the command line for quick and easy experiment generation.
 
-This script as written should be used to create debugging task sets, but not for creating task sets for experiments shown in papers. That is because the safety features of `run_exps.py` described above (`uname`, `config-options`) are not used here. If you are creating experiments for a paper, you should create your own generator which outputs values for the `config-options` required for your plugin so that you cannot ruin your experiments at run time. Trust me, you will.
+This script as written should be used to create debugging task sets, but not for creating task sets for experiments shown in papers. That is because the safety features of `run_exps.py` described above (`uname`, `config-options`) are not used here. If you are creating experiments for a paper, you should create your own generator that outputs values for the `config-options` required for your plugin so that you cannot ruin your experiments at run time. Trust me, you will.
 
-The `-l` option lists the supported generators which can be specified:
+The `-l` option lists the supported generators that can be specified:
 
 ```bash
 $ gen_exps.py -l
@@ -287,7 +287,7 @@ sched=PSN-EDF_num-tasks=24/ sched=PSN-EDF_num-tasks=26/
 sched=PSN-EDF_num-tasks=28/  sched=PSN-EDF_num-tasks=30/
 ```
 
-The generator will create a different directory for each possible configuration of the parameters. Each parameter which is varied is included in the name of the schedule directory. For example, to vary the number of CPUs but not the number of tasks:
+The generator will create a different directory for each possible configuration of the parameters. Each parameter that is varied is included in the name of the schedule directory. For example, to vary the number of CPUs but not the number of tasks:
 
 ```bash
 $ gen_exps.py -f tasks=24 cpus=3,6 P-EDF
@@ -319,9 +319,9 @@ where the `data_dirx` contain feather-trace and sched-trace data, e.g. `ft.bin`,
 
 *Output*: print out all parsed data or `OUT_FILE` where `OUT_FILE` is a python map of the data or `OUT_DIR/[FIELD]*/[PARAM]/[TYPE]/[TYPE]/[LINE].csv`, depending on input.
 
-The goal is to create csv files which record how varying `PARAM` changes the value of `FIELD`. Only `PARAM`s which vary are considered.
+The goal is to create csv files that record how varying `PARAM` changes the value of `FIELD`. Only `PARAM`s that vary are considered.
 
-`FIELD` is a parsed value, e.g. 'RELEASE' overhead or 'miss-ratio'. `PARAM` is a parameter which we are going to vary, e.g. 'tasks'. A single `LINE` is created for every configuration of parameters other than `PARAM`.
+`FIELD` is a parsed value, e.g. 'RELEASE' overhead or 'miss-ratio'. `PARAM` is a parameter that we are going to vary, e.g. 'tasks'. A single `LINE` is created for every configuration of parameters other than `PARAM`.
 
 `TYPE` is the statistic of the measurement, i.e. Max, Min, Avg, or Var[iance]. The two types are used to differentiate between measurements across tasks in a single taskset, and measurements across all tasksets. E.g. `miss-ratio/*/Max/Avg` is the maximum of all the average miss ratios for each task set, while `miss-ratio/*/Avg/Max` is the average of the maximum miss ratios for each task set.
 
@@ -389,7 +389,7 @@ line.csv
 
 The second command will also have run faster than the first. This is because `parse_exps.py` will save the data it parses in `tmp/` directories before it attempts to sort it into csvs. Parsing takes far longer than sorting, so this saves a lot of time. The `-f` flag can be used to re-parse files and overwrite this saved data.
 
-All output from the *feather-trace-tools* programs used to parse data is stored in the `tmp/` directories created in the input directories. If the *sched_trace* repo is found in the users `PATH`, `st_show` will be used to create a human-readable version of the sched-trace data which will also be stored there.
+All output from the *feather-trace-tools* programs used to parse data is stored in the `tmp/` directories created in the input directories. If the *sched_trace* repo is found in the users `PATH`, `st_show` will be used to create a human-readable version of the sched-trace data that will also be stored there.
 
 ## plot_exps.py
 *Usage*: `plot_exps.py [OPTIONS] [CSV_DIR]...`
@@ -472,4 +472,4 @@ However, when a single directory of directories is given, the script assumes the
 [rt-kernelshark]: https://github.com/LITMUS-RT/rt-kernelshark
 [feather-trace-tools]: https://github.com/LITMUS-RT/feather-trace-tools
 [rtunc]: http://www.cs.unc.edu/~anderson/real-time/
-[matplotlib]: http://matplotlib.org/
\ No newline at end of file
+[matplotlib]: http://matplotlib.org/
diff --git a/common.py b/common.py
index a2c6224..7abf0f2 100644
--- a/common.py
+++ b/common.py
@@ -193,3 +193,20 @@ def is_device(dev):
         return False
     mode = os.stat(dev)[stat.ST_MODE]
     return not (not mode & stat.S_IFCHR)
+
+__logged = []
+
+def set_logged_list(logged):
+    global __logged
+    __logged = logged
+
+def log_once(id, msg = None, indent = True):
+    global __logged
+
+    msg = msg if msg else id
+
+    if id not in __logged:
+        __logged += [id]
+        if indent:
+            msg = '\t' + msg.strip('\t').replace('\n', '\n\t')
+        sys.stderr.write('\n' + msg.strip('\n') + '\n')
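The `log_once` helper added to `common.py` above suppresses duplicate warnings by id, so a message repeated across many generated experiments is only printed once. A standalone sketch of the same idea (simplified names; not the repo's exact module):

```python
import sys

# Ids of messages already emitted; module-level so all callers share it.
_logged = []

def log_once(msg_id, msg=None, indent=True):
    """Write msg to stderr only the first time msg_id is seen."""
    msg = msg if msg else msg_id
    if msg_id not in _logged:
        _logged.append(msg_id)
        if indent:
            # Indent continuation lines so the warning stands out.
            msg = '\t' + msg.strip('\t').replace('\n', '\n\t')
        sys.stderr.write('\n' + msg.strip('\n') + '\n')

log_once("light", "Switching to light util.")
log_once("light", "Switching to light util.")  # suppressed: same id
```

The id and the message are separate so callers can reuse one id for messages whose formatted text differs between calls.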
diff --git a/config/config.py b/config/config.py
index cbac6b2..5e6f9e3 100644
--- a/config/config.py
+++ b/config/config.py
@@ -9,7 +9,7 @@ BINS = {'rtspin' : get_executable_hint('rtspin', 'liblitmus'),
         'ftsplit' : get_executable_hint('ft2csv', 'feather-trace-tools'),
         'ftsort' : get_executable_hint('ftsort', 'feather-trace-tools'),
         'st_trace' : get_executable_hint('st_trace', 'feather-trace-tools'),
-        # Option, as not everyone uses kernelshark yet
+        # Optional, as not everyone uses kernelshark yet
         'trace-cmd' : get_executable_hint('trace-cmd', 'rt-kernelshark', True),
         # Optional, as sched_trace is not a publically supported repository
         'st_show' : get_executable_hint('st_show', 'sched_trace', True)}
@@ -39,15 +39,24 @@ DEFAULTS = {'params_file' : 'params.py',
             'sched_file' : 'sched.py',
             'duration'   : 10,
             'prog'       : 'rtspin',
+            'out-gen'    : 'exps',
+            'out-run'    : 'run-data',
+            'out-parse'  : 'parse-data',
+            'out-plot'   : 'plot-data',
             'cycles'     : ft_freq() or 2000}
 
+
 '''Default sched_trace events (this is all of them).'''
 SCHED_EVENTS = range(501, 513)
 
 '''Overhead events.'''
-OVH_BASE_EVENTS = ['SCHED', 'RELEASE', 'SCHED2', 'TICK', 'CXS']
+OVH_BASE_EVENTS = ['SCHED', 'RELEASE', 'SCHED2', 'TICK', 'CXS', 'LOCK', 'UNLOCK']
 OVH_ALL_EVENTS = ["%s_%s" % (e, t) for (e,t) in
                  itertools.product(OVH_BASE_EVENTS, ["START","END"])]
 OVH_ALL_EVENTS += ['RELEASE_LATENCY']
 # This event doesn't have a START and END
 OVH_BASE_EVENTS += ['RELEASE_LATENCY']
+
+# If a task is missing more than this many records, its measurements
+# are not included in sched_trace summaries
+MAX_RECORD_LOSS = .2
diff --git a/gen/edf_generators.py b/gen/edf_generators.py
index 3f05b76..a722c21 100644
--- a/gen/edf_generators.py
+++ b/gen/edf_generators.py
@@ -16,10 +16,10 @@ class EdfGenerator(gen.Generator):
 
     def __make_options(self):
         '''Return generic EDF options.'''
-        return [gen.Generator._dist_option('utils', ['uni-medium'],
+        return [gen.Generator._dist_option('utils', 'uni-medium',
                                            gen.NAMED_UTILIZATIONS,
                                            'Task utilization distributions.'),
-                gen.Generator._dist_option('periods', ['harmonic'],
+                gen.Generator._dist_option('periods', 'harmonic',
                                            gen.NAMED_PERIODS,
                                            'Task period distributions.')]
 
@@ -50,10 +50,22 @@ class PartitionedGenerator(EdfGenerator):
                                  templates + [TP_PART_TASK], options, params)
 
     def _customize(self, taskset, exp_params):
-        start = 1 if exp_params['release_master'] else 0
-        # Random partition for now: could do a smart partitioning
+        cpus  = exp_params['cpus']
+        start = 0
+        if exp_params['release_master']:
+            cpus  -= 1
+            start = 1
+
+        # Partition using worst-fit for most even distribution
+        utils = [0]*cpus
+        tasks = [0]*cpus
         for t in taskset:
-            t.cpu = random.randint(start, exp_params['cpus'] - 1)
+            t.cpu = utils.index(min(utils))
+            utils[t.cpu] += t.utilization()
+            tasks[t.cpu] += 1
+
+            # Increment by one so release master has no tasks
+            t.cpu += start
 
 class PedfGenerator(PartitionedGenerator):
     def __init__(self, params={}):
@@ -61,7 +73,7 @@ class PedfGenerator(PartitionedGenerator):
 
 class CedfGenerator(PartitionedGenerator):
     TP_CLUSTER = "plugins/C-EDF/cluster{$level}"
-    CLUSTER_OPTION = gen.GenOption('level', ['L2', 'L3', 'All'], ['L2'],
+    CLUSTER_OPTION = gen.GenOption('level', ['L2', 'L3', 'All'], 'L2',
                                    'Cache clustering level.',)
 
     def __init__(self, params={}):
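The patch above replaces random partitioning with worst-fit: each task goes to the currently least-utilized CPU, which evens out per-CPU load. A standalone sketch of that heuristic (the `Task` tuple here is a hypothetical stand-in for the generator's task objects):

```python
from collections import namedtuple

# Hypothetical stand-in for the generator's task class.
Task = namedtuple('Task', ['cost', 'period'])

def worst_fit(tasks, cpus):
    """Assign each task to the currently least-utilized CPU."""
    utils = [0.0] * cpus
    assignment = []
    for t in tasks:
        cpu = utils.index(min(utils))   # least-loaded CPU wins
        utils[cpu] += float(t.cost) / t.period
        assignment.append(cpu)
    return assignment, utils

tasks = [Task(1, 4), Task(1, 2), Task(1, 4), Task(3, 8)]
assignment, utils = worst_fit(tasks, 2)
```

Worst-fit gives no optimality guarantee, but for generating balanced experiment task sets it avoids the badly skewed partitions the old `random.randint` placement could produce.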
diff --git a/gen/generator.py b/gen/generator.py
index 693e52f..bc86cfe 100644
--- a/gen/generator.py
+++ b/gen/generator.py
@@ -5,7 +5,7 @@ import schedcat.generator.tasks as tasks
 import shutil as sh
 
 from Cheetah.Template import Template
-from common import get_config_option,num_cpus,recordtype
+from common import get_config_option,num_cpus,recordtype,log_once
 from config.config import DEFAULTS,PARAMS
 from gen.dp import DesignPointGenerator
 from parse.col_map import ColMapBuilder
@@ -69,11 +69,10 @@ class Generator(object):
         else:
             self.cpus = num_cpus()
         try:
-            config = get_config_option("RELEASE_MASTER") and True
+            rm_config = get_config_option("RELEASE_MASTER") and True
         except:
-            config = False
-        self.release_master = list(set([False, config]))
-
+            rm_config = False
+        self.release_master = list(set([False, bool(rm_config)]))
 
     def __make_options(self, params):
         '''Return generic Litmus options.'''
@@ -116,11 +115,11 @@ class Generator(object):
             ts = tg.make_task_set(max_tasks = params['tasks'], max_util=max_util)
             tries += 1
         if len(ts) != params['tasks']:
-            print(("Only created task set of size %d < %d for params %s. " +
-                   "Switching to light utilization.") %
-                  (len(ts), params['tasks'], params))
-            print("Switching to light util. This usually means the " +
-                  "utilization distribution is too agressive.")
+            log_once("only", ("Only created task set of size %d < %d for " +
+                              "params %s. Switching to light utilization.") %
+                     (len(ts), params['tasks'], params))
+            log_once("light", "Switching to light util. This usually means " +
+                     "the utilization distribution is too agressive.")
             return self._create_taskset(params, periods, NAMED_UTILIZATIONS['uni-light'],
                                         max_util)
         return ts
@@ -156,7 +155,10 @@ class Generator(object):
         '''Set default parameter values and check that values are valid.'''
         for option in self.options:
             if option.name not in params:
-                params[option.name] = option.default
+                val = option.default
+                val = val if type(val) == type([]) else [val]
+
+                params[option.name] = val
             else:
                 option.hidden = True
                 params[option.name] = self._check_value(option.name,
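The hunk above lets generator options declare scalar defaults (e.g. `'L2'` instead of `['L2']`) by wrapping non-list defaults in a single-element list, so downstream code can always iterate over option values. The idiom in isolation (using `isinstance`, the more idiomatic spelling of the patch's `type(val) == type([])` check):

```python
def normalize_default(val):
    """Wrap a scalar default in a list so every option value is iterable."""
    return val if isinstance(val, list) else [val]

normalize_default('L2')          # scalar becomes a one-element list
normalize_default(['L2', 'L3'])  # lists pass through unchanged
```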
diff --git a/gen/mc_generators.py b/gen/mc_generators.py
index 704bcc3..8f5bd84 100644
--- a/gen/mc_generators.py
+++ b/gen/mc_generators.py
@@ -1,8 +1,10 @@
 import gen.rv as rv
 
 from color import get_cache_info,CacheInfo,BlockColorScheme,RandomColorScheme,EvilColorScheme
-from common import try_get_config_option
+from common import try_get_config_option, log_once
 from gen.generator import GenOption,Generator,NAMED_UTILIZATIONS,NAMED_PERIODS
+from parse.col_map import ColMap
+from parse.tuple_table import TupleTable
 
 
 NAMED_SHARES = {
@@ -188,24 +190,24 @@ class McGenerator(Generator):
 
         return self._create_taskset(params, periods, utils, max_util)
 
-    def _customize(self, task_system, params):
-        pass
+    def _get_tasks(self, params):
+        return {'lvla': self.__create_lvla_sched(params),
+                'lvlb': self.__create_lvlb_sched(params),
+                'lvlc': self.__create_lvlc_sched(params)}
 
     def _create_exp(self, params):
         # Ugly way of doing it
         self.shares = self._create_dist('shares', params['shares'],
                                         NAMED_SHARES)
 
-        tasks = {'lvla': self.__create_lvla_sched(params),
-                 'lvlb': self.__create_lvlb_sched(params),
-                 'lvlc': self.__create_lvlc_sched(params)}
+        tasks = self._get_tasks(params)
 
         conf_options = {MC_OPT : 'y'}
         if params['timer_merging']:
             conf_options[TM_OPT] = 'y'
         if params['redirect']:
             if not params['release_master']:
-                print("Forcing release master option to enable redirection.")
+                log_once("Forcing release master option to enable redirection.")
                 params['release_master'] = 'y'
             conf_options[RD_OPT] = 'y'
         if params['slack_stealing']:
@@ -241,8 +243,11 @@ TP_TYPE = """#if $type != 'unmanaged'
 /proc/sys/litmus/color/preempt_cache{0}
 #end if"""
 
+# Always add some pages
+TP_ADD = """/proc/sys/litmus/color/add_pages{1}"""
+
 # Use special spin for color tasks
-TP_COLOR_BASE = """colorspin -y $t.id -x $t.colorcsv """
+TP_COLOR_BASE = """colorspin -y $t.id -x $t.colorcsv -q $t.wss -l $t.loops """
 
 TP_COLOR_B = TP_BASE.format("b", TP_COLOR_BASE + "-p $t.cpu ")
 TP_COLOR_C = TP_BASE.format("c", TP_COLOR_BASE)
@@ -255,12 +260,16 @@ TP_CHUNK = """#if $chunk_size > 0
 COLOR_TYPES = ['scheduling', 'locking', 'unmanaged']
 
 class ColorMcGenerator(McGenerator):
+    __SINGLE_PAGE_LOOP_MS = {'ringo': .023}
+
     def __init__(self, params = {}):
         super(ColorMcGenerator, self).__init__("MC",
-                templates=[TP_TYPE, TP_CHUNK, TP_COLOR_B, TP_COLOR_C],
+                templates=[TP_ADD, TP_TYPE, TP_CHUNK, TP_COLOR_B, TP_COLOR_C],
                 options=self.__make_options(),
                 params=self.__extend_params(params))
 
+        self.tasksets = None
+
     def __extend_params(self, params):
         '''Add in fixed mixed-criticality parameters.'''
         params['levels'] = 2
@@ -278,6 +287,10 @@ class ColorMcGenerator(McGenerator):
 
         return params
 
+    def __get_system_name(self):
+        import socket
+        return socket.gethostname().split(".")[0]
+
     def __make_system_info(self):
         info = get_cache_info()
 
@@ -298,20 +311,32 @@ class ColorMcGenerator(McGenerator):
         info = CacheInfo(cache, line=line, page=page,
                          ways=ways, sets=sets, colors=colors)
 
-        self.system = info
+        self.cache = info
+
+        hostname = self.__get_system_name()
+        if hostname not in self.__SINGLE_PAGE_LOOP_MS:
+            first_host = self.__SINGLE_PAGE_LOOP_MS.keys()[0]
+            log_once("hostname", "No timing info for host %s" % hostname +
+                     ", needed to calculate work done per task. Please get the "
+                     "timing info and add to __SINGLE_PAGE_LOOP_MS in " +
+                     "mc_generators.py. Assuming host %s." % first_host)
+            hostname = first_host
+        self.host = hostname
 
     def __make_options(self):
         self.__make_system_info()
 
         return [GenOption('type', COLOR_TYPES, COLOR_TYPES,
                           'Cache management type.'),
-                GenOption('chunk_size', float, [0], 'Chunk size.'),
-                GenOption('ways', int, [self.system.ways], 'Ways (associativity).'),
-                GenOption('colors', int, [self.system.colors],
+                GenOption('host', self.__SINGLE_PAGE_LOOP_MS.keys(), self.host,
+                          'System experiment will run on (for calculating work).'),
+                GenOption('chunk_size_ns', float, 0, 'Chunk size. 0 = no chunking.'),
+                GenOption('ways', int, self.cache.ways, 'Ways (associativity).'),
+                GenOption('colors', int, self.cache.colors,
                           'System colors (cache size / ways).'),
-                GenOption('page_size', int, [self.system.page],
+                GenOption('page_size', int, self.cache.page,
                           'System page size.'),
-                GenOption('wss', [float, int], [.5],
+                GenOption('wss', [float, int], .5,
                           'Task working set sizes. Can be expressed as a fraction ' +
                           'of the cache.')]
 
@@ -346,14 +371,37 @@ class ColorMcGenerator(McGenerator):
         for color, replicas in task.colors.iteritems():
             f.write("%d, %d\n" % (color, replicas))
 
+    def __get_loops(self, task, pages, system):
+        all_pages_loop = self.__SINGLE_PAGE_LOOP_MS[system] * pages
+        return int(task.cost / all_pages_loop) + 1
+
+    def _get_tasks(self, params):
+        # Share tasksets amongst experiments with different types but
+        # identical other parameters for proper comparisons
+        if self.tasksets == None:
+            fields = params.keys()
+            fields.remove("type")
+            self.tasksets = TupleTable( ColMap(fields), lambda:None )
+
+        if params not in self.tasksets:
+            ts = super(ColorMcGenerator, self)._get_tasks(params)
+            self.tasksets[params] = ts
+
+        return self.tasksets[params]
+
     def _customize(self, task_system, params):
         '''Add coloring properties to the mixed-criticality task system.'''
+        pages_needed = self.__get_wss_pages(params)
+        real_wss = params['page_size'] * pages_needed
+
         # Every task needs a unique id for coloring and wss walk order
         all_tasks = []
         for level, tasks in task_system.iteritems():
             all_tasks += tasks
         for i, task in enumerate(all_tasks):
             task.id = i
+            task.wss = real_wss
+            task.loops = self.__get_loops(task, pages_needed, params['host'])
 
         c = params['colors']
         w = params['ways']
@@ -365,8 +413,6 @@ class ColorMcGenerator(McGenerator):
         srt_colorer = RandomColorScheme(c, w)
         hrt_colorer = BlockColorScheme(c, w, way_first=True)
 
-        pages_needed = self.__get_wss_pages(params)
-
         hrt_colorer.color(task_system['lvlb'], pages_needed)
         srt_colorer.color(task_system['lvlc'], pages_needed)
 
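The `__get_loops` helper added above turns a task's execution cost into a loop count: divide the cost by the measured time for one pass over the working set, rounding up so the task always does at least one pass. A standalone sketch of that calculation (0.023 ms/page is the diff's calibration value for the host 'ringo'; treat it as a machine-specific constant that must be re-measured per system):

```python
# Measured time (ms) for one loop over a single page; machine-specific,
# taken from the diff's __SINGLE_PAGE_LOOP_MS entry for host 'ringo'.
SINGLE_PAGE_LOOP_MS = 0.023

def loops_for_cost(cost_ms, pages):
    """Number of working-set passes that fill roughly cost_ms of execution."""
    one_pass_ms = SINGLE_PAGE_LOOP_MS * pages
    return int(cost_ms / one_pass_ms) + 1

# e.g. a 4 ms task touching 64 pages: one pass is ~1.472 ms, so 3 passes
loops_for_cost(4.0, 64)
```

The `+ 1` biases the estimate upward; for experiments it is safer for a task to slightly overrun its nominal cost than to finish early and idle.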
diff --git a/gen_exps.py b/gen_exps.py
index 6488cdc..b847661 100755
--- a/gen_exps.py
+++ b/gen_exps.py
@@ -7,6 +7,7 @@ import re
 import shutil as sh
 import sys
 
+from config.config import DEFAULTS
 from optparse import OptionParser
 
 def parse_args():
@@ -15,7 +16,7 @@ def parse_args():
 
     parser.add_option('-o', '--out-dir', dest='out_dir',
                       help='directory for data output',
-                      default=("%s/exps"%os.getcwd()))
+                      default=("%s/%s"% (os.getcwd(), DEFAULTS['out-gen'])))
     parser.add_option('-f', '--force', action='store_true', default=False,
                       dest='force', help='overwrite existing data')
     parser.add_option('-n', '--num-trials', default=1, type='int', dest='trials',
@@ -51,9 +52,9 @@ def main():
     if opts.described != None:
         for generator in opts.described.split(','):
             if generator not in gen.get_generators():
-                print("No generator '%s'" % generator)
+                sys.stderr.write("No generator '%s'\n" % generator)
             else:
-                sys.stdout.write("Generator '%s', " % generator)
+                print("Generator '%s', " % generator)
                 gen.get_generators()[generator]().print_help()
     if opts.list_gens or opts.described:
         return 0
@@ -85,7 +86,7 @@ def main():
85 if gen_name not in gen.get_generators(): 86 if gen_name not in gen.get_generators():
86 raise ValueError("Invalid generator '%s'" % gen_name) 87 raise ValueError("Invalid generator '%s'" % gen_name)
87 88
88 print("Creating experiments using %s generator..." % gen_name) 89 sys.stderr.write("Creating experiments with %s generator...\n" % gen_name)
89 90
90 params = dict(gen_params.items() + global_params.items()) 91 params = dict(gen_params.items() + global_params.items())
91 clazz = gen.get_generators()[gen_name] 92 clazz = gen.get_generators()[gen_name]
@@ -94,5 +95,7 @@ def main():
94 95
95 generator.create_exps(opts.out_dir, opts.force, opts.trials) 96 generator.create_exps(opts.out_dir, opts.force, opts.trials)
96 97
98 sys.stderr.write("Experiments saved in %s.\n" % opts.out_dir)
99
97if __name__ == '__main__': 100if __name__ == '__main__':
98 main() 101 main()
diff --git a/parse/point.py b/parse/point.py
index ac47c70..b1d9d53 100644
--- a/parse/point.py
+++ b/parse/point.py
@@ -133,6 +133,10 @@ class ExpPoint(object):
133 def get_stats(self): 133 def get_stats(self):
134 return self.stats.keys() 134 return self.stats.keys()
135 135
136 def __bool__(self):
137 return bool(self.stats)
138 __nonzero__ = __bool__
139
136 140
137class SummaryPoint(ExpPoint): 141class SummaryPoint(ExpPoint):
138 def __init__(self, id="", points=[], typemap = default_typemap): 142 def __init__(self, id="", points=[], typemap = default_typemap):
diff --git a/parse/sched.py b/parse/sched.py
index 1f07751..1033989 100644
--- a/parse/sched.py
+++ b/parse/sched.py
@@ -3,67 +3,119 @@ import os
3import re 3import re
4import struct 4import struct
5import subprocess 5import subprocess
6import sys
7 6
8from collections import defaultdict,namedtuple 7from collections import defaultdict,namedtuple
9from common import recordtype 8from common import recordtype,log_once
10from point import Measurement 9from point import Measurement
10from ctypes import *
11
12LOSS_MSG = """Found task missing more than %d%% of its scheduling records.
13These won't be included in scheduling statistics!"""%(100*conf.MAX_RECORD_LOSS)
14SKIP_MSG = """Measurement '%s' has no non-zero values.
15Measurements like these are not included in scheduling statistics.
16If a measurement is missing, this is why."""
17SCALE_MSG = """Task in {} with config {} has < 1.0 scale!
18These scales are skipped in measurements."""
19
20# Data stored for each task
21TaskParams = namedtuple('TaskParams', ['wcet', 'period', 'cpu', 'level'])
22TaskData = recordtype('TaskData', ['params', 'jobs', 'loads',
23 'blocks', 'misses', 'execs'])
24
25ScaleData = namedtuple('ScaleData', ['reg_tasks', 'base_tasks'])
11 26
12class TimeTracker: 27class TimeTracker:
13 '''Store stats for durations of time demarcated by sched_trace records.''' 28 '''Store stats for durations of time demarcated by sched_trace records.'''
14 def __init__(self): 29 def __init__(self):
15 self.begin = self.avg = self.max = self.num = self.job = 0 30 self.begin = self.avg = self.max = self.num = self.next_job = 0
31
32 # Count of times the job in start_time matched that in store_time
33 self.matches = 0
34 # And the times it didn't
35 self.disjoints = 0
16 36
 17 def store_time(self, record): 37 # Measurements are recorded in store_time using the previous matching
38 # record which was passed to store_time. This way, the last record for
39 # any task is always skipped
40 self.last_record = None
41
42 def store_time(self, next_record):
18 '''End duration of time.''' 43 '''End duration of time.'''
19 dur = record.when - self.begin 44 dur = (self.last_record.when - self.begin) if self.last_record else -1
45
46 if self.next_job == next_record.job:
47 self.last_record = next_record
20 48
21 if self.job == record.job and dur > 0: 49 if self.last_record:
22 self.max = max(self.max, dur) 50 self.matches += 1
23 self.avg *= float(self.num / (self.num + 1))
24 self.num += 1
25 self.avg += dur / float(self.num)
26 51
27 self.begin = 0 52 if dur > 0:
28 self.job = 0 53 self.max = max(self.max, dur)
54 self.avg *= float(self.num / (self.num + 1))
55 self.num += 1
56 self.avg += dur / float(self.num)
29 57
30 def start_time(self, record): 58 self.begin = 0
59 self.next_job = 0
60 else:
61 self.disjoints += 1
62
63 def start_time(self, record, time = None):
31 '''Start duration of time.''' 64 '''Start duration of time.'''
32 self.begin = record.when 65 if self.last_record:
33 self.job = record.job 66 if not time:
67 self.begin = self.last_record.when
68 else:
69 self.begin = time
34 70
35# Data stored for each task 71 self.next_job = record.job
36TaskParams = namedtuple('TaskParams', ['wcet', 'period', 'cpu', 'level'])
37TaskData = recordtype('TaskData', ['params', 'jobs', 'loads',
38 'blocks', 'misses', 'execs'])
39 72
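The reworked `TimeTracker` defers each measurement until the next matching record arrives (so the last, possibly truncated, record of every task is dropped) and counts `matches` versus `disjoints` so callers can estimate record loss. A minimal sketch of the same duration-tracking pattern, simplified to omit the deferred last-record handling and using a hypothetical `Record` type:

```python
from collections import namedtuple

Record = namedtuple('Record', ['job', 'when'])

class DurationTracker(object):
    """Track max/avg of durations delimited by start/end records."""
    def __init__(self):
        self.begin = self.avg = self.max = self.num = self.next_job = 0
        self.matches = 0    # end record's job matched the started job
        self.disjoints = 0  # it didn't: a record was lost in between

    def start(self, record):
        self.begin = record.when
        self.next_job = record.job

    def end(self, record):
        if self.next_job == record.job:
            self.matches += 1
            dur = record.when - self.begin
            if dur > 0:
                self.max = max(self.max, dur)
                # incremental running average over num samples
                self.avg += (dur - self.avg) / float(self.num + 1)
                self.num += 1
        else:
            self.disjoints += 1
```

A mismatched job number (a lost record) increments `disjoints` instead of corrupting the statistics, which is exactly what `extract_sched_data` later turns into the `record-loss` measurement.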
40# Map of event ids to corresponding class, binary format, and processing methods
41RecordInfo = namedtuple('RecordInfo', ['clazz', 'fmt', 'method'])
42record_map = [0]*10
43 73
44# Common to all records 74class LeveledArray(object):
45HEADER_FORMAT = '<bbhi' 75 """Groups statistics by the level of the task to which they apply"""
 46HEADER_FIELDS = ['type', 'cpu', 'pid', 'job'] 76 def __init__(self, name=None):
47RECORD_SIZE = 24 77 self.name = name
78 self.vals = defaultdict(lambda: defaultdict(lambda:[]))
79
80 def add(self, name, level, value):
81 if type(value) != type([]):
82 value = [value]
83 self.vals[name][level] += value
84
85 def write_measurements(self, result):
86 for stat_name, stat_data in self.vals.iteritems():
87 for level, values in stat_data.iteritems():
88 if not values or not sum(values):
89 log_once(SKIP_MSG, SKIP_MSG % stat_name)
90 continue
91
92 name = "%s%s" % ("%s-" % level if level else "", stat_name)
 93 result[name] = Measurement(name).from_array(values)
48 94
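`LeveledArray` buckets each named statistic by the criticality level of the task it came from using nested `defaultdict`s, then flattens the result to `level-name` keys, skipping measurements whose values are all zero. A self-contained sketch of that grouping, with a plain average standing in for the repo's `Measurement` class:

```python
from collections import defaultdict

class LeveledArray(object):
    """Group statistic values by the level of the task they apply to."""
    def __init__(self):
        self.vals = defaultdict(lambda: defaultdict(list))

    def add(self, name, level, value):
        if not isinstance(value, list):
            value = [value]
        self.vals[name][level] += value

    def write_measurements(self, result):
        for stat_name, stat_data in self.vals.items():
            for level, values in stat_data.items():
                if not values or not sum(values):
                    continue  # all-zero measurements are skipped
                name = "%s%s" % ("%s-" % level if level else "", stat_name)
                result[name] = sum(values) / float(len(values))
```

The all-zero skip is why a measurement can be silently absent from the output csvs, which is what `SKIP_MSG` warns about.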
95# Map of event ids to corresponding class and format
96record_map = {}
97
98RECORD_SIZE = 24
49NSEC_PER_MSEC = 1000000 99NSEC_PER_MSEC = 1000000
50 100
51def register_record(name, id, method, fmt, fields): 101def register_record(id, clazz):
52 '''Create record description from @fmt and @fields and map to @id, using 102 fields = clazz.FIELDS
53 @method to process parsed record.'''
54 # Format of binary data (see python struct documentation)
55 rec_fmt = HEADER_FORMAT + fmt
56 103
57 # Corresponding field data 104 fsize = lambda fields : sum([sizeof(list(f)[1]) for f in fields])
58 rec_fields = HEADER_FIELDS + fields 105 diff = RECORD_SIZE - fsize(SchedRecord.FIELDS) - fsize(fields)
59 if "when" not in rec_fields: # Force a "when" field for everything
60 rec_fields += ["when"]
61 106
62 # Create mutable class with the given fields 107 # Create extra padding fields to make record the proper size
63 field_class = recordtype(name, list(rec_fields)) 108 # Creating one big field of c_uint64 and giving it a size of 8*diff
 64 clazz = type(name, (field_class, object), {}) 109 # _should_ work, but doesn't. This is an uglier way of accomplishing
110 # the same goal
111 for d in range(diff):
112 fields += [("extra%d" % d, c_char)]
65 113
66 record_map[id] = RecordInfo(clazz, rec_fmt, method) 114 # Create structure with fields and methods of clazz
115 clazz2 = type("Dummy%d" % id, (LittleEndianStructure,clazz),
116 {'_fields_': SchedRecord.FIELDS + fields,
117 '_pack_' : 1})
118 record_map[id] = clazz2
67 119
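The `register_record` rewrite replaces manual `struct.unpack` format strings with `ctypes` structures: each record class lists its `FIELDS`, one-byte padding fields are appended until the structure fills the fixed 24-byte slot, and `memmove` copies the raw bytes straight into the object. A stripped-down sketch of the technique, using a hypothetical 16-byte record layout rather than the real sched_trace one:

```python
from ctypes import (LittleEndianStructure, c_uint8, c_uint16, c_uint64,
                    c_char, sizeof, memmove, addressof)

RECORD_SIZE = 16  # fixed on-disk size of every record (hypothetical)

def make_record(name, fields):
    """Build a packed structure, padded out to RECORD_SIZE bytes."""
    base = [('type', c_uint8), ('cpu', c_uint8), ('pid', c_uint16)]
    pad = RECORD_SIZE - sum(sizeof(t) for _, t in base + fields)
    # one c_char field per missing byte, mirroring the diff's workaround
    fields = fields + [('pad%d' % i, c_char) for i in range(pad)]
    return type(name, (LittleEndianStructure,),
                {'_fields_': base + fields, '_pack_': 1})

CompletionRecord = make_record('CompletionRecord', [('when', c_uint64)])

def fill(obj, data):
    """Copy raw bytes directly into the structure's memory."""
    memmove(addressof(obj), data, RECORD_SIZE)
```

`_pack_ = 1` disables alignment padding so field offsets match the on-disk layout byte for byte; without it, `when` would be shifted by compiler-style alignment.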
68def make_iterator(fname): 120def make_iterator(fname):
69 '''Iterate over (parsed record, processing method) in a 121 '''Iterate over (parsed record, processing method) in a
@@ -73,7 +125,6 @@ def make_iterator(fname):
73 return 125 return
74 126
75 f = open(fname, 'rb') 127 f = open(fname, 'rb')
76 max_type = len(record_map)
77 128
78 while True: 129 while True:
79 data = f.read(RECORD_SIZE) 130 data = f.read(RECORD_SIZE)
@@ -83,156 +134,169 @@ def make_iterator(fname):
83 except struct.error: 134 except struct.error:
84 break 135 break
85 136
86 rdata = record_map[type_num] if type_num <= max_type else 0 137 if type_num not in record_map:
87 if not rdata:
88 continue 138 continue
89 139
90 try: 140 clazz = record_map[type_num]
91 values = struct.unpack_from(rdata.fmt, data) 141 obj = clazz()
92 except struct.error: 142 obj.fill(data)
93 continue
94 143
95 obj = rdata.clazz(*values) 144 if obj.job != 1:
96 yield (obj, rdata.method) 145 yield obj
146 else:
147 # Results from the first job are nonsense
148 pass
97 149
98def read_data(task_dict, fnames): 150def read_data(task_dict, fnames):
99 '''Read records from @fnames and store per-pid stats in @task_dict.''' 151 '''Read records from @fnames and store per-pid stats in @task_dict.'''
100 buff = [] 152 buff = []
101 153
154 def get_time(record):
155 return record.when if hasattr(record, 'when') else 0
156
102 def add_record(itera): 157 def add_record(itera):
103 # Ordered insertion into buff 158 # Ordered insertion into buff
104 try: 159 try:
105 next_ret = itera.next() 160 arecord = itera.next()
106 except StopIteration: 161 except StopIteration:
107 return 162 return
108 163
109 arecord, method = next_ret
110 i = 0 164 i = 0
111 for (i, (brecord, m, t)) in enumerate(buff): 165 for (i, (brecord, _)) in enumerate(buff):
112 if brecord.when > arecord.when: 166 if get_time(brecord) > get_time(arecord):
113 break 167 break
114 buff.insert(i, (arecord, method, itera)) 168 buff.insert(i, (arecord, itera))
115 169
116 for fname in fnames: 170 for fname in fnames:
117 itera = make_iterator(fname) 171 itera = make_iterator(fname)
118 add_record(itera) 172 add_record(itera)
119 173
120 while buff: 174 while buff:
121 (record, method, itera) = buff.pop(0) 175 record, itera = buff.pop(0)
122 176
123 add_record(itera) 177 add_record(itera)
124 method(task_dict, record) 178 record.process(task_dict)
125 179
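`read_data` merges several per-CPU trace files, each already sorted by timestamp, by keeping one pending record per iterator in a buffer ordered by `when` and always popping the earliest. The same k-way merge can be sketched as follows (a heap would also work; this mirrors the ordered-insertion approach above):

```python
def merge_sorted(iterators, key=lambda x: x):
    """Yield items from several individually sorted iterators in global order."""
    buff = []  # kept sorted by key; one pending item per live iterator

    def add_record(it):
        try:
            item = next(it)
        except StopIteration:
            return  # iterator exhausted; drop it from the merge
        pos = len(buff)
        for i, (other, _) in enumerate(buff):
            if key(other) > key(item):
                pos = i
                break
        buff.insert(pos, (item, it))

    for it in iterators:
        add_record(it)
    while buff:
        item, it = buff.pop(0)
        add_record(it)  # refill from the iterator we just consumed
        yield item
```

Because each input is sorted, holding just one record per file is enough to always know the globally earliest record, so memory use stays constant regardless of trace size.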
126def process_completion(task_dict, record): 180class SchedRecord(object):
127 task_dict[record.pid].misses.store_time(record) 181 # Subclasses will have their FIELDs merged into this one
128 task_dict[record.pid].loads += [record.load] 182 FIELDS = [('type', c_uint8), ('cpu', c_uint8),
129 183 ('pid', c_uint16), ('job', c_uint32)]
130def process_release(task_dict, record): 184
131 data = task_dict[record.pid] 185 def fill(self, data):
132 data.jobs += 1 186 memmove(addressof(self), data, RECORD_SIZE)
133 data.misses.start_time(record) 187
134 188 def process(self, task_dict):
135def process_param(task_dict, record): 189 raise NotImplementedError()
136 level = chr(97 + record.level) 190
137 params = TaskParams(record.wcet, record.period, 191class ParamRecord(SchedRecord):
138 record.partition, level) 192 FIELDS = [('wcet', c_uint32), ('period', c_uint32),
139 task_dict[record.pid].params = params 193 ('phase', c_uint32), ('partition', c_uint8),
140 194 ('class', c_uint8), ('level', c_uint8)]
141def process_block(task_dict, record): 195
142 task_dict[record.pid].blocks.start_time(record) 196 def process(self, task_dict):
143 197 params = TaskParams(self.wcet, self.period,
144def process_resume(task_dict, record): 198 self.partition, self.level)
145 task_dict[record.pid].blocks.store_time(record) 199 task_dict[self.pid].params = params
146 200
147def process_switch_to(task_dict, record): 201class ReleaseRecord(SchedRecord):
148 task_dict[record.pid].execs.start_time(record) 202 FIELDS = [('when', c_uint64), ('release', c_uint64)]
149 203
150def process_switch_away(task_dict, record): 204 def process(self, task_dict):
151 task_dict[record.pid].execs.store_time(record) 205 data = task_dict[self.pid]
152 206 data.jobs += 1
153register_record('ResumeRecord', 9, process_resume, 'Q8x', ['when']) 207 if data.params:
154register_record('BlockRecord', 8, process_block, 'Q8x', ['when']) 208 data.misses.start_time(self, self.when + data.params.period)
155register_record('CompletionRecord', 7, process_completion, 'QQ', ['when', 'load']) 209
156register_record('ReleaseRecord', 3, process_release, 'QQ', ['release', 'when']) 210class CompletionRecord(SchedRecord):
157register_record('SwitchToRecord', 5, process_switch_to, 'Q8x', ['when']) 211 FIELDS = [('when', c_uint64)]
158register_record('SwitchAwayRecord', 6, process_switch_away, 'Q8x', ['when']) 212
159register_record('ParamRecord', 2, process_param, 'IIIcccx', 213 def process(self, task_dict):
160 ['wcet','period','phase','partition', 'task_class', 'level']) 214 task_dict[self.pid].misses.store_time(self)
161 215
162saved_stats = [] 216class BlockRecord(SchedRecord):
163def get_task_data(data_dir, work_dir = None): 217 FIELDS = [('when', c_uint64)]
218
219 def process(self, task_dict):
220 task_dict[self.pid].blocks.start_time(self)
221
222class ResumeRecord(SchedRecord):
223 FIELDS = [('when', c_uint64)]
224
225 def process(self, task_dict):
226 task_dict[self.pid].blocks.store_time(self)
227
 228# Map records to sched_trace ids (see include/litmus/sched_trace.h)
229register_record(2, ParamRecord)
230register_record(3, ReleaseRecord)
231register_record(7, CompletionRecord)
232register_record(8, BlockRecord)
233register_record(9, ResumeRecord)
234
235__all_dicts = {}
236
237def create_task_dict(data_dir, work_dir = None):
164 '''Parse sched trace files''' 238 '''Parse sched trace files'''
165 if data_dir in saved_stats: 239 if data_dir in __all_dicts:
166 return data_dir 240 return __all_dicts[data_dir]
167 241
168 bin_files = conf.FILES['sched_data'].format(".*") 242 bin_files = conf.FILES['sched_data'].format(".*")
169 output_file = "%s/out-st" % work_dir 243 output_file = "%s/out-st" % work_dir
170 244
171 bins = ["%s/%s" % (data_dir,f) for f in os.listdir(data_dir) if re.match(bin_files, f)] 245 task_dict = defaultdict(lambda :
 172 if not len(bins): 246 TaskData(None, 1, [], TimeTracker(), TimeTracker(), TimeTracker()))
173 return 247
248 bin_names = [f for f in os.listdir(data_dir) if re.match(bin_files, f)]
249 if not len(bin_names):
250 return task_dict
174 251
175 # Save an in-english version of the data for debugging 252 # Save an in-english version of the data for debugging
176 # This is optional and will only be done if 'st_show' is in PATH 253 # This is optional and will only be done if 'st_show' is in PATH
177 if work_dir and conf.BINS['st_show']: 254 if work_dir and conf.BINS['st_show']:
178 cmd_arr = [conf.BINS['st_show']] 255 cmd_arr = [conf.BINS['st_show']]
179 cmd_arr.extend(bins) 256 cmd_arr.extend(bin_names)
180 with open(output_file, "w") as f: 257 with open(output_file, "w") as f:
181 print("calling %s" % cmd_arr)
182 subprocess.call(cmd_arr, cwd=data_dir, stdout=f) 258 subprocess.call(cmd_arr, cwd=data_dir, stdout=f)
183 259
184 task_dict = defaultdict(lambda :TaskData(0, 0, 0, [], TimeTracker(),
185 TimeTracker(), TimeTracker()))
186
187 # Gather per-task values 260 # Gather per-task values
188 read_data(task_dict, bins) 261 bin_paths = ["%s/%s" % (data_dir,f) for f in bin_names]
262 read_data(task_dict, bin_paths)
189 263
190 saved_stats[data_dir] = task_dict 264 __all_dicts[data_dir] = task_dict
191 return task_dict
192 265
193class LeveledArray(object): 266 return task_dict
194 """Groups statistics by the level of the task to which they apply"""
195 def __init__(self):
196 self.name = name
197 self.vals = defaultdict(lambda: defaultdict(lambda:[]))
198
199 def add(self, name, level, value):
200 if type(value) != type([]):
201 value = [value]
202 self.vals[name][task.config.level] += value
203
204 def write_measurements(self, result):
205 for stat_name, stat_data in self.vals.iteritems():
206 for level, values in stat_data.iteritems():
207 if not values:
208 continue
209
210 name = "%s%s" % ("%s-" % level if level else "", stat_name)
211 result[name] = Measurement(name).from_array(arr)
212 267
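`create_task_dict` now memoizes parsed results per data directory in the module-level `__all_dicts` map; the old `saved_stats` was broken, since it was a list indexed like a dict and `get_task_data` returned the directory name instead of the data on a cache hit. The corrected caching pattern, sketched generically:

```python
_cache = {}  # data_dir -> parsed result

def parse_dir(data_dir, parser):
    """Parse data_dir once; later calls return the cached result."""
    if data_dir in _cache:
        return _cache[data_dir]
    result = parser(data_dir)
    _cache[data_dir] = result
    return result
```

This matters here because `extract_mc_data` re-reads the base experiment's directory for every scaled experiment compared against it; the cache makes each directory's trace files get parsed exactly once.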
213def extract_sched_data(result, data_dir, work_dir): 268def extract_sched_data(result, data_dir, work_dir):
214 task_dict = get_task_data(data_dir, work_dir) 269 task_dict = create_task_dict(data_dir, work_dir)
215
216 stat_data = LeveledArray() 270 stat_data = LeveledArray()
271
217 for tdata in task_dict.itervalues(): 272 for tdata in task_dict.itervalues():
218 if not tdata.params: 273 if not tdata.params:
219 # Currently unknown where these invalid tasks come from... 274 # Currently unknown where these invalid tasks come from...
220 continue 275 continue
221 276
 222 miss_ratio = float(tdata.misses.num) / tdata.jobs 277 level = tdata.params.level
223 # Scale average down to account for jobs with 0 tardiness 278 miss = tdata.misses
224 avg_tard = tdata.misses.avg * miss_ratio 279
280 record_loss = float(miss.disjoints)/(miss.matches + miss.disjoints)
 281 stat_data.add("record-loss", level, record_loss)
282
283 if record_loss > conf.MAX_RECORD_LOSS:
284 log_once(LOSS_MSG)
285 continue
286
287 miss_ratio = float(miss.num) / miss.matches
288 avg_tard = miss.avg * miss_ratio
225 289
 226 level = tdata.params.level 290 stat_data.add("miss-ratio", level, miss_ratio)
 227 stat_data.add("miss-ratio", level, miss_ratio) 291
 228 stat_data.add("avg-tard", level, avg_tard / tdata.params.wcet) 292 stat_data.add("max-tard", level, miss.max / tdata.params.period)
 229 stat_data.add("max-tard", level, tdata.misses.max / tdata.params.wcet) 293 stat_data.add("avg-tard", level, avg_tard / tdata.params.period)
 230 stat_data.add("avg-block", level, tdata.blocks.avg / NSEC_PER_MSEC) 294
 231 stat_data.add("max-block", level, tdata.blocks.max / NSEC_PER_MSEC) 295 stat_data.add("avg-block", level, tdata.blocks.avg / NSEC_PER_MSEC)
 296 stat_data.add("max-block", level, tdata.blocks.max / NSEC_PER_MSEC)
232 297
233 stat_data.write_measurements(result) 298 stat_data.write_measurements(result)
234 299
235ScaleData = namedtuple('ScaleData', ['reg_tasks', 'base_tasks'])
236def extract_mc_data(result, data_dir, base_dir): 300def extract_mc_data(result, data_dir, base_dir):
 237 task_dict = get_task_data(data_dir) 301 task_dict = create_task_dict(data_dir)
 238 base_dict = get_task_data(base_dir) 302 base_dict = create_task_dict(base_dir)
@@ -245,12 +309,14 @@ def extract_mc_data(result, data_dir, base_dir):
245 309
246 tasks_by_config = defaultdict(lambda: ScaleData([], [])) 310 tasks_by_config = defaultdict(lambda: ScaleData([], []))
247 311
248 # Add tasks in order of pid to tasks_by_config 312 # Add task execution times in order of pid to tasks_by_config
249 # Tasks must be ordered by pid or we can't make 1 to 1 comparisons
250 # when multiple tasks have the same config in each task set
251 for tasks, field in ((task_dict, 'reg_tasks'), (base_dict, 'base_tasks')): 313 for tasks, field in ((task_dict, 'reg_tasks'), (base_dict, 'base_tasks')):
314 # Sorted for tie breaking: if 3 regular tasks have the same config
315 # (so 3 base tasks also have the same config), match first pid regular
316 # with first pid base, etc. This matches tie breaking in kernel
252 for pid in sorted(tasks.keys()): 317 for pid in sorted(tasks.keys()):
253 tdata = tasks[pid] 318 tdata = tasks[pid]
319
254 tlist = getattr(tasks_by_config[tdata.params], field) 320 tlist = getattr(tasks_by_config[tdata.params], field)
255 tlist += [tdata.execs] 321 tlist += [tdata.execs]
256 322
@@ -260,7 +326,10 @@ def extract_mc_data(result, data_dir, base_dir):
260 # Can't make comparison if different numbers of tasks! 326 # Can't make comparison if different numbers of tasks!
261 continue 327 continue
262 328
329 # Tuples of (regular task execution times, base task execution times)
330 # where each has the same configuration
263 all_pairs = zip(scale_data.reg_tasks, scale_data.base_tasks) 331 all_pairs = zip(scale_data.reg_tasks, scale_data.base_tasks)
332
264 for reg_execs, base_execs in all_pairs: 333 for reg_execs, base_execs in all_pairs:
265 if not reg_execs.max or not reg_execs.avg or\ 334 if not reg_execs.max or not reg_execs.avg or\
266 not base_execs.max or not base_execs.avg: 335 not base_execs.max or not base_execs.avg:
@@ -271,8 +340,7 @@ def extract_mc_data(result, data_dir, base_dir):
271 avg_scale = float(base_execs.avg) / reg_execs.avg 340 avg_scale = float(base_execs.avg) / reg_execs.avg
272 341
273 if (avg_scale < 1 or max_scale < 1) and config.level == "b": 342 if (avg_scale < 1 or max_scale < 1) and config.level == "b":
274 sys.stderr.write("Task in {} with config {} has <1.0 scale!" 343 log_once(SCALE_MSG, SCALE_MSG.format(data_dir, config))
275 .format(data_dir, config)
276 continue 344 continue
277 345
278 stat_data.add('max-scale', config.level, max_scale) 346 stat_data.add('max-scale', config.level, max_scale)
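`extract_mc_data` pairs each task's execution-time statistics with the matching task in the base (unscaled) run: tasks are grouped by identical `TaskParams`, ordered by pid within each group for deterministic tie-breaking (matching the kernel's tie-breaking, per the comment above), zipped one-to-one, and the ratio base/regular gives the scaling factor. A sketch of that pairing under those assumptions, with a simplified `Task` tuple:

```python
from collections import defaultdict, namedtuple

Task = namedtuple('Task', ['pid', 'config', 'avg_exec'])

def scaling_factors(reg_tasks, base_tasks):
    """Pair tasks with identical configs by pid order; return avg-exec scales."""
    groups = defaultdict(lambda: ([], []))
    for tasks, idx in ((reg_tasks, 0), (base_tasks, 1)):
        # pid order breaks ties between tasks sharing a config
        for t in sorted(tasks, key=lambda t: t.pid):
            groups[t.config][idx].append(t)

    scales = []
    for config, (regs, bases) in groups.items():
        if len(regs) != len(bases):
            continue  # can't make a 1-to-1 comparison
        for reg, base in zip(regs, bases):
            if reg.avg_exec and base.avg_exec:
                scales.append(float(base.avg_exec) / reg.avg_exec)
    return scales
```

A scale below 1.0 would mean a task ran *slower* in the base (supposedly unperturbed) system, which is why the code above logs `SCALE_MSG` and skips such level-b tasks.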
diff --git a/parse/tuple_table.py b/parse/tuple_table.py
index 47fb6b6..6edc037 100644
--- a/parse/tuple_table.py
+++ b/parse/tuple_table.py
@@ -13,13 +13,17 @@ class TupleTable(object):
13 def get_col_map(self): 13 def get_col_map(self):
14 return self.col_map 14 return self.col_map
15 15
16 def __bool__(self):
17 return bool(self.table)
18 __nonzero__ = __bool__
19
16 def __getitem__(self, kv): 20 def __getitem__(self, kv):
17 key = self.col_map.get_key(kv) 21 key = self.col_map.get_key(kv)
18 return self.table[key] 22 return self.table[key]
19 23
20 def __setitem__(self, kv, value): 24 def __setitem__(self, kv, value):
21 key = self.col_map.get_key(kv) 25 key = self.col_map.get_key(kv)
 22 self.table[key] 26 self.table[key] = value
23 27
24 def __contains__(self, kv): 28 def __contains__(self, kv):
25 key = self.col_map.get_key(kv) 29 key = self.col_map.get_key(kv)
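The `__bool__`/`__nonzero__` pair added to `TupleTable` (and to `ExpPoint` earlier in this commit) makes an empty container falsy under both Python versions: Python 3 calls `__bool__`, Python 2 calls `__nonzero__`, and aliasing one to the other covers both with a single definition. Reduced to its essentials:

```python
class TupleTable(object):
    """Minimal sketch: only the truthiness hooks from the diff."""
    def __init__(self):
        self.table = {}

    def __bool__(self):           # Python 3 truthiness hook
        return bool(self.table)
    __nonzero__ = __bool__        # Python 2 name for the same hook
```

This is what lets `main()` in parse_exps.py write simply `if not table:` to detect that no data was parsed.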
diff --git a/parse_exps.py b/parse_exps.py
index 9475cfc..7a99d8a 100755
--- a/parse_exps.py
+++ b/parse_exps.py
@@ -1,8 +1,9 @@
1#!/usr/bin/env python 1#!/usr/bin/env python
2from __future__ import print_function 2from __future__ import print_function
3 3
4import config.config as conf
5import copy 4import copy
5import common as com
6import multiprocessing
6import os 7import os
7import parse.ft as ft 8import parse.ft as ft
8import parse.sched as st 9import parse.sched as st
@@ -13,18 +14,19 @@ import sys
13import traceback 14import traceback
14 15
15from collections import namedtuple 16from collections import namedtuple
16from common import load_params 17from config.config import DEFAULTS,PARAMS
17from optparse import OptionParser 18from optparse import OptionParser
18from parse.point import ExpPoint 19from parse.point import ExpPoint
19from parse.tuple_table import TupleTable 20from parse.tuple_table import TupleTable
20from parse.col_map import ColMapBuilder 21from parse.col_map import ColMapBuilder
21from multiprocessing import Pool, cpu_count 22
22 23
23def parse_args(): 24def parse_args():
24 parser = OptionParser("usage: %prog [options] [data_dir]...") 25 parser = OptionParser("usage: %prog [options] [data_dir]...")
25 26
26 parser.add_option('-o', '--out', dest='out', 27 parser.add_option('-o', '--out', dest='out',
27 help='file or directory for data output', default='parse-data') 28 help='file or directory for data output',
29 default=DEFAULTS['out-parse'])
28 parser.add_option('-i', '--ignore', metavar='[PARAM...]', default="", 30 parser.add_option('-i', '--ignore', metavar='[PARAM...]', default="",
29 help='ignore changing parameter values') 31 help='ignore changing parameter values')
30 parser.add_option('-f', '--force', action='store_true', default=False, 32 parser.add_option('-f', '--force', action='store_true', default=False,
@@ -34,7 +36,8 @@ def parse_args():
34 parser.add_option('-m', '--write-map', action='store_true', default=False, 36 parser.add_option('-m', '--write-map', action='store_true', default=False,
35 dest='write_map', 37 dest='write_map',
36 help='Output map of values instead of csv tree') 38 help='Output map of values instead of csv tree')
37 parser.add_option('-p', '--processors', default=max(cpu_count() - 1, 1), 39 parser.add_option('-p', '--processors',
40 default=max(multiprocessing.cpu_count() - 1, 1),
38 type='int', dest='processors', 41 type='int', dest='processors',
39 help='number of threads for processing') 42 help='number of threads for processing')
40 parser.add_option('-s', '--scale-against', dest='scale_against', 43 parser.add_option('-s', '--scale-against', dest='scale_against',
@@ -43,12 +46,59 @@ def parse_args():
43 46
44 return parser.parse_args() 47 return parser.parse_args()
45 48
49
46ExpData = namedtuple('ExpData', ['path', 'params', 'work_dir']) 50ExpData = namedtuple('ExpData', ['path', 'params', 'work_dir'])
47 51
52
53def parse_exp(exp_force_base):
54 # Tupled for multiprocessing
55 exp, force, base_table = exp_force_base
56
57 result_file = exp.work_dir + "/exp_point.pkl"
58 should_load = not force and os.path.exists(result_file)
59
60 result = None
61 if should_load:
62 with open(result_file, 'rb') as f:
63 try:
64 # No need to go through this work twice
65 result = pickle.load(f)
66 except:
67 pass
68
69 if not result:
70 try:
71 # Create a readable name
72 name = os.path.relpath(exp.path)
73 name = name if name != "." else os.path.split(os.getcwd())[1]
74
75 result = ExpPoint(name)
76
77 # Write overheads into result
78 cycles = exp.params[PARAMS['cycles']]
79 ft.extract_ft_data(result, exp.path, exp.work_dir, cycles)
80
81 # Write scheduling statistics into result
82 st.extract_sched_data(result, exp.path, exp.work_dir)
83
84 # Write scaling factors into result
85 if base_table and exp.params in base_table:
86 base_exp = base_table[exp.params]
87 if base_exp != exp:
88 st.extract_mc_data(result, exp.path, base_exp.path)
89
90 with open(result_file, 'wb') as f:
91 pickle.dump(result, f)
92 except:
93 traceback.print_exc()
94
95 return (exp, result)
96
97
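`parse_exp` caches each experiment's parsed `ExpPoint` as `exp_point.pkl` in its work directory, so re-running `parse_exps.py` without `-f` skips the expensive trace parsing. The load-or-compute shape it follows can be sketched with a hypothetical `compute` callable:

```python
import os
import pickle

def load_or_compute(result_file, compute, force=False):
    """Return a pickled result if present; otherwise compute and cache it."""
    if not force and os.path.exists(result_file):
        with open(result_file, 'rb') as f:
            try:
                return pickle.load(f)  # no need to do this work twice
            except Exception:
                pass  # corrupt/stale cache: fall through and recompute
    result = compute()
    with open(result_file, 'wb') as f:
        pickle.dump(result, f)
    return result
```

Swallowing the unpickling error, as `parse_exp` does, trades silent recomputation for robustness against caches written by an older version of the scripts.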
48def get_exp_params(data_dir, cm_builder): 98def get_exp_params(data_dir, cm_builder):
49 param_file = "%s/%s" % (data_dir, conf.DEFAULTS['params_file']) 99 param_file = "%s/%s" % (data_dir, DEFAULTS['params_file'])
50 if os.path.isfile(param_file): 100 if os.path.isfile(param_file):
51 params = load_params(param_file) 101 params = com.load_params(param_file)
52 102
53 # Store parameters in cm_builder, which will track which parameters change 103 # Store parameters in cm_builder, which will track which parameters change
54 # across experiments 104 # across experiments
@@ -58,8 +108,8 @@ def get_exp_params(data_dir, cm_builder):
58 params = {} 108 params = {}
59 109
60 # Cycles must be present for feather-trace measurement parsing 110 # Cycles must be present for feather-trace measurement parsing
61 if conf.PARAMS['cycles'] not in params: 111 if PARAMS['cycles'] not in params:
62 params[conf.PARAMS['cycles']] = conf.DEFAULTS['cycles'] 112 params[PARAMS['cycles']] = DEFAULTS['cycles']
63 113
64 return params 114 return params
65 115
@@ -87,44 +137,6 @@ def load_exps(exp_dirs, cm_builder, force):
87 137
88 return exps 138 return exps
89 139
90def parse_exp(exp_force_base):
91 # Tupled for multiprocessing
92 exp, force, base_table = exp_force_base
93
94 result_file = exp.work_dir + "/exp_point.pkl"
95 should_load = not force and os.path.exists(result_file)
96
97 result = None
98 if should_load:
99 with open(result_file, 'rb') as f:
100 try:
101 # No need to go through this work twice
102 result = pickle.load(f)
103 except:
104 pass
105
106 if not result:
107 try:
108 result = ExpPoint(exp.path)
109 cycles = exp.params[conf.PARAMS['cycles']]
110
111 # Write overheads into result
112 ft.extract_ft_data(result, exp.path, exp.work_dir, cycles)
113
114 # Write scheduling statistics into result
115 st.extract_sched_data(result, exp.path, exp.work_dir)
116
117 if base_table and exp.params in base_table:
118 base_exp = base_table[exp.params]
119 if base_exp != exp:
120 st.extract_scaling_data(result, exp.path, base_exp.path)
121
122 with open(result_file, 'wb') as f:
123 pickle.dump(result, f)
124 except:
125 traceback.print_exc()
126
127 return (exp, result)
128 140
129def make_base_table(cmd_scale, col_map, exps): 141def make_base_table(cmd_scale, col_map, exps):
130 if not cmd_scale: 142 if not cmd_scale:
@@ -137,8 +149,7 @@ def make_base_table(cmd_scale, col_map, exps):
137 if param not in col_map: 149 if param not in col_map:
138 raise IOError("Base column '%s' not present in any parameters!" % param) 150 raise IOError("Base column '%s' not present in any parameters!" % param)
139 151
140 base_map = copy.deepcopy(col_map) 152 base_table = TupleTable(copy.deepcopy(col_map))
141 base_table = TupleTable(base_map)
142 153
143 # Fill table with exps who we will scale against 154 # Fill table with exps who we will scale against
144 for exp in exps: 155 for exp in exps:
@@ -147,41 +158,49 @@ def make_base_table(cmd_scale, col_map, exps):
147 158
148 return base_table 159 return base_table
149 160
150def main():
151 opts, args = parse_args()
152 161
153 args = args or [os.getcwd()] 162def get_dirs(args):
154 163 if args:
155 # Load exp parameters into a ColMap 164 return args
156 builder = ColMapBuilder() 165 elif os.path.exists(DEFAULTS['out-run']):
157 exps = load_exps(args, builder, opts.force) 166 sys.stderr.write("Reading data from %s/*\n" % DEFAULTS['out-run'])
167 sched_dirs = os.listdir(DEFAULTS['out-run'])
168 return ['%s/%s' % (DEFAULTS['out-run'], d) for d in sched_dirs]
169 else:
170 sys.stderr.write("Reading data from current directory.\n")
171 return [os.getcwd()]
158 172
159 # Don't track changes in ignored parameters
160 if opts.ignore:
161 for param in opts.ignore.split(","):
162 builder.try_remove(param)
163 # Always average multiple trials
164 builder.try_remove(conf.PARAMS['trial'])
165 173
166 col_map = builder.build() 174def fill_table(table, exps, opts):
167 result_table = TupleTable(col_map) 175 sys.stderr.write("Parsing data...\n")
168 176
169 base_table = make_base_table(opts.scale_against, col_map, exps) 177 procs = min(len(exps), opts.processors)
178 logged = multiprocessing.Manager().list()
170 179
171 sys.stderr.write("Parsing data...\n") 180 sys.stderr.write("Parsing data...\n")
172 181
173 procs = min(len(exps), opts.processors) 182 base_table = make_base_table(opts.scale_against,
174 pool = Pool(processes=procs) 183 table.get_col_map(), exps)
184
185 pool = multiprocessing.Pool(processes=procs,
186 # Share a list of previously logged messages amongst processes
187 # This is for the com.log_once method to use
188 initializer=com.set_logged_list, initargs=(logged,))
189
175 pool_args = zip(exps, [opts.force]*len(exps), [base_table]*len(exps)) 190 pool_args = zip(exps, [opts.force]*len(exps), [base_table]*len(exps))
176 enum = pool.imap_unordered(parse_exp, pool_args, 1) 191 enum = pool.imap_unordered(parse_exp, pool_args, 1)
177 192
178 try: 193 try:
179 for i, (exp, result) in enumerate(enum): 194 for i, (exp, result) in enumerate(enum):
195 if not result:
196 continue
197
180 if opts.verbose: 198 if opts.verbose:
181 print(result) 199 print(result)
182 else: 200 else:
183 sys.stderr.write('\r {0:.2%}'.format(float(i)/len(exps))) 201 sys.stderr.write('\r {0:.2%}'.format(float(i)/len(exps)))
184 result_table[exp.params] += [result] 202 table[exp.params] += [result]
203
185 pool.close() 204 pool.close()
186 except: 205 except:
187 pool.terminate() 206 pool.terminate()
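`fill_table` shares a `Manager().list()` of already-printed messages with every worker through the `Pool`'s `initializer`, which is how `common.log_once` avoids repeating the same warning across processes. A minimal sketch of that mechanism, with a stand-in `log_once` (the real one lives in `common.py`):

```python
import multiprocessing

_logged = None  # set once per worker process by the Pool initializer

def set_logged_list(logged):
    """Pool initializer: store the shared Manager().list() proxy."""
    global _logged
    _logged = logged

def log_once(msg):
    """Return True (and record msg) only the first time msg is seen."""
    if msg not in _logged:
        _logged.append(msg)
        return True
    return False
```

The pool is then wired up as in `fill_table` above: `logged = multiprocessing.Manager().list()` followed by `multiprocessing.Pool(procs, initializer=set_logged_list, initargs=(logged,))`. Note the check-then-append is racy between workers, so a warning can occasionally print twice; for deduplicating log noise that is an acceptable trade-off.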
@@ -192,29 +211,61 @@ def main():
192 211
193 sys.stderr.write('\n') 212 sys.stderr.write('\n')
194 213
195 if opts.force and os.path.exists(opts.out):
196 sh.rmtree(opts.out)
197 214
198 reduced_table = result_table.reduce() 215def write_output(table, opts):
216 reduced_table = table.reduce()
199 217
200 if opts.write_map: 218 if opts.write_map:
201 sys.stderr.write("Writing python map into %s...\n" % opts.out) 219 sys.stderr.write("Writing python map into %s...\n" % opts.out)
202 # Write summarized results into map
203 reduced_table.write_map(opts.out) 220 reduced_table.write_map(opts.out)
204 else: 221 else:
222 if opts.force and os.path.exists(opts.out):
223 sh.rmtree(opts.out)
224
205 # Write out csv directories for all variable params 225 # Write out csv directories for all variable params
206 dir_map = reduced_table.to_dir_map() 226 dir_map = reduced_table.to_dir_map()
207 227
208 # No csvs to write, assume user meant to print out data 228 # No csvs to write, assume user meant to print out data
209 if dir_map.is_empty(): 229 if dir_map.is_empty():
210 if not opts.verbose: 230 if not opts.verbose:
211 sys.stderr.write("Too little data to make csv files.\n") 231 sys.stderr.write("Too little data to make csv files, " +
212 for key, exp in result_table: 232 "printing results.\n")
233 for key, exp in table:
213 for e in exp: 234 for e in exp:
214 print(e) 235 print(e)
215 else: 236 else:
216 sys.stderr.write("Writing csvs into %s...\n" % opts.out) 237 sys.stderr.write("Writing csvs into %s...\n" % opts.out)
217 dir_map.write(opts.out) 238 dir_map.write(opts.out)
218 239
240
241def main():
242 opts, args = parse_args()
243 exp_dirs = get_dirs(args)
244
245 # Load experiment parameters into a ColMap
246 builder = ColMapBuilder()
247 exps = load_exps(exp_dirs, builder, opts.force)
248
249 # Don't track changes in ignored parameters
250 if opts.ignore:
251 for param in opts.ignore.split(","):
252 builder.try_remove(param)
253
254 # Always average multiple trials
255 builder.try_remove(PARAMS['trial'])
256 # Only need this for feather-trace parsing
257 builder.try_remove(PARAMS['cycles'])
258
259 col_map = builder.build()
260 table = TupleTable(col_map)
261
262 fill_table(table, exps, opts)
263
264 if not table:
265        sys.stderr.write("Found no data to parse!\n")
266 sys.exit(1)
267
268 write_output(table, opts)
269
219if __name__ == '__main__': 270if __name__ == '__main__':
220 main() 271 main()
diff --git a/plot_exps.py b/plot_exps.py
index 76e7396..15c54d0 100755
--- a/plot_exps.py
+++ b/plot_exps.py
@@ -6,7 +6,9 @@ import os
6import shutil as sh 6import shutil as sh
7import sys 7import sys
8import traceback 8import traceback
9
9from collections import namedtuple 10from collections import namedtuple
11from config.config import DEFAULTS
10from multiprocessing import Pool, cpu_count 12from multiprocessing import Pool, cpu_count
11from optparse import OptionParser 13from optparse import OptionParser
12from parse.col_map import ColMap,ColMapBuilder 14from parse.col_map import ColMap,ColMapBuilder
@@ -17,7 +19,8 @@ def parse_args():
17 parser = OptionParser("usage: %prog [options] [csv_dir]...") 19 parser = OptionParser("usage: %prog [options] [csv_dir]...")
18 20
19 parser.add_option('-o', '--out-dir', dest='out_dir', 21 parser.add_option('-o', '--out-dir', dest='out_dir',
20 help='directory for plot output', default='plot-data') 22 help='directory for plot output',
23 default=DEFAULTS['out-plot'])
21 parser.add_option('-f', '--force', action='store_true', default=False, 24 parser.add_option('-f', '--force', action='store_true', default=False,
22 dest='force', help='overwrite existing data') 25 dest='force', help='overwrite existing data')
23 parser.add_option('-p', '--processors', default=max(cpu_count() - 1, 1), 26 parser.add_option('-p', '--processors', default=max(cpu_count() - 1, 1),
@@ -139,21 +142,31 @@ def plot_dir(data_dir, out_dir, max_procs, force):
139 142
140 sys.stderr.write('\n') 143 sys.stderr.write('\n')
141 144
145def get_dirs(args):
146 if args:
147 return args
148 elif os.path.exists(DEFAULTS['out-parse']):
149 return [DEFAULTS['out-parse']]
150 else:
151        return [os.getcwd()]
152
142def main(): 153def main():
143 opts, args = parse_args() 154 opts, args = parse_args()
144 args = args or [os.getcwd()] 155 dirs = get_dirs(args)
145 156
146 if opts.force and os.path.exists(opts.out_dir): 157 if opts.force and os.path.exists(opts.out_dir):
147 sh.rmtree(opts.out_dir) 158 sh.rmtree(opts.out_dir)
148 if not os.path.exists(opts.out_dir): 159 if not os.path.exists(opts.out_dir):
149 os.mkdir(opts.out_dir) 160 os.mkdir(opts.out_dir)
150 161
151 for dir in args: 162 for dir in dirs:
152 if len(args) > 1: 163 if len(dirs) > 1:
153 out_dir = "%s/%s" % (opts.out_dir, os.path.split(dir)[1]) 164 out_dir = "%s/%s" % (opts.out_dir, os.path.split(dir)[1])
154 else: 165 else:
155 out_dir = opts.out_dir 166 out_dir = opts.out_dir
156 plot_dir(dir, out_dir, opts.processors, opts.force) 167 plot_dir(dir, out_dir, opts.processors, opts.force)
157 168
169 sys.stderr.write("Plots saved in %s.\n" % opts.out_dir)
170
158if __name__ == '__main__': 171if __name__ == '__main__':
159 main() 172 main()
diff --git a/run/executable/executable.py b/run/executable/executable.py
index 263e305..a2426f1 100644
--- a/run/executable/executable.py
+++ b/run/executable/executable.py
@@ -6,9 +6,10 @@ from common import get_executable
6class Executable(object): 6class Executable(object):
7 '''Parent object that represents an executable for use in task-sets.''' 7 '''Parent object that represents an executable for use in task-sets.'''
8 8
9 def __init__(self, exec_file, extra_args=None, stdout_file = None, stderr_file = None): 9 def __init__(self, exec_file, extra_args=None, stdout_file = None,
10 stderr_file = None, cwd = None):
10 self.exec_file = get_executable(exec_file) 11 self.exec_file = get_executable(exec_file)
11 self.cwd = None 12 self.cwd = cwd
12 self.stdout_file = stdout_file 13 self.stdout_file = stdout_file
13 self.stderr_file = stderr_file 14 self.stderr_file = stderr_file
14 self.sp = None 15 self.sp = None
@@ -58,6 +59,9 @@ class Executable(object):
58 def interrupt(self): 59 def interrupt(self):
59 self.sp.send_signal(signal.SIGINT) 60 self.sp.send_signal(signal.SIGINT)
60 61
62 def poll(self):
63 return self.sp.poll()
64
61 def terminate(self): 65 def terminate(self):
62 '''Send the terminate signal to the binary.''' 66 '''Send the terminate signal to the binary.'''
63 self.sp.terminate() 67 self.sp.terminate()
diff --git a/run/experiment.py b/run/experiment.py
index ff0e9f3..b0e46b6 100644
--- a/run/experiment.py
+++ b/run/experiment.py
@@ -1,9 +1,7 @@
1import common as com
2import os 1import os
3import time 2import time
4import run.litmus_util as lu 3import run.litmus_util as lu
5import shutil as sh 4import shutil as sh
6import traceback
7from operator import methodcaller 5from operator import methodcaller
8 6
9class ExperimentException(Exception): 7class ExperimentException(Exception):
@@ -11,16 +9,11 @@ class ExperimentException(Exception):
11 def __init__(self, name): 9 def __init__(self, name):
12 self.name = name 10 self.name = name
13 11
14
15class ExperimentDone(ExperimentException): 12class ExperimentDone(ExperimentException):
16 '''Raised when an experiment looks like it's been run already.''' 13 '''Raised when an experiment looks like it's been run already.'''
17 def __str__(self): 14 def __str__(self):
18 return "Experiment finished already: %d" % self.name 15 return "Experiment finished already: %d" % self.name
19 16
20class ExperimentFailed(ExperimentException):
21 def __str__(self):
22 return "Experiment failed during execution: %d" % self.name
23
24class SystemCorrupted(Exception): 17class SystemCorrupted(Exception):
25 pass 18 pass
26 19
@@ -31,7 +24,6 @@ class Experiment(object):
31 def __init__(self, name, scheduler, working_dir, finished_dir, 24 def __init__(self, name, scheduler, working_dir, finished_dir,
32 proc_entries, executables, tracer_types): 25 proc_entries, executables, tracer_types):
33 '''Run an experiment, optionally wrapped in tracing.''' 26 '''Run an experiment, optionally wrapped in tracing.'''
34
35 self.name = name 27 self.name = name
36 self.scheduler = scheduler 28 self.scheduler = scheduler
37 self.working_dir = working_dir 29 self.working_dir = working_dir
@@ -40,16 +32,10 @@ class Experiment(object):
40 self.executables = executables 32 self.executables = executables
41 self.exec_out = None 33 self.exec_out = None
42 self.exec_err = None 34 self.exec_err = None
35 self.tracer_types = tracer_types
43 36
44 self.task_batch = com.num_cpus() 37 def __setup_tracers(self):
45 38 tracers = [ t(self.working_dir) for t in self.tracer_types ]
46 self.__make_dirs()
47 self.__assign_executable_cwds()
48 self.__setup_tracers(tracer_types)
49
50
51 def __setup_tracers(self, tracer_types):
52 tracers = [ t(self.working_dir) for t in tracer_types ]
53 39
54 self.regular_tracers = [t for t in tracers if not t.is_exact()] 40 self.regular_tracers = [t for t in tracers if not t.is_exact()]
55 self.exact_tracers = [t for t in tracers if t.is_exact()] 41 self.exact_tracers = [t for t in tracers if t.is_exact()]
@@ -84,13 +70,11 @@ class Experiment(object):
84 map(assign_cwd, self.executables) 70 map(assign_cwd, self.executables)
85 71
86 def __kill_all(self): 72 def __kill_all(self):
87 # Give time for whatever failed to finish failing 73 if lu.waiting_tasks():
88 time.sleep(2) 74 released = lu.release_tasks()
89 75 self.log("Re-released %d tasks" % released)
90 released = lu.release_tasks()
91 self.log("Re-released %d tasks" % released)
92 76
93 time.sleep(5) 77 time.sleep(1)
94 78
95 self.log("Killing all tasks") 79 self.log("Killing all tasks")
96 for e in self.executables: 80 for e in self.executables:
@@ -99,7 +83,62 @@ class Experiment(object):
99 except: 83 except:
100 pass 84 pass
101 85
102 time.sleep(2) 86 time.sleep(1)
87
88 def __strip_path(self, path):
89 '''Shorten path to something more readable.'''
90 file_dir = os.path.split(self.working_dir)[0]
 91        if path.startswith(file_dir):
92 path = path[len(file_dir):]
93
94 return path.strip("/")
95
96 def __check_tasks_status(self):
97 '''Raises an exception if any tasks have failed.'''
98 msgs = []
99
100 for e in self.executables:
101 status = e.poll()
102            if status:
103 err_msg = "Task %s failed with status: %s" % (e.wait(), status)
104 msgs += [err_msg]
105
106 if msgs:
107 # Show at most 3 messages so that every task failing doesn't blow
108 # up the terminal
109 if len(msgs) > 3:
110 num_errs = len(msgs) - 3
111                msgs = msgs[0:3] + ["...%d more task errors..." % num_errs]
112
113 out_name = self.__strip_path(self.exec_out.name)
114 err_name = self.__strip_path(self.exec_err.name)
115 help = "Check dmesg, %s, and %s" % (out_name, err_name)
116
117 raise Exception("\n".join(msgs + [help]))
118
119 def __wait_for_ready(self):
120 self.log("Sleeping until tasks are ready for release...")
121
122 wait_start = time.time()
123 num_ready = lu.waiting_tasks()
124
125 while num_ready < len(self.executables):
126 # Quit if too much time passes without a task becoming ready
127 if time.time() - wait_start > 180.0:
128 s = "waiting: %d, submitted: %d" %\
129 (lu.waiting_tasks(), len(self.executables))
130 raise Exception("Too much time spent waiting for tasks! %s" % s)
131
132 time.sleep(1)
133
134 # Quit if any tasks fail
135 self.__check_tasks_status()
136
137 # Reset the waiting time whenever more tasks become ready
138 now_ready = lu.waiting_tasks()
139 if now_ready != num_ready:
140 wait_start = time.time()
141                num_ready = now_ready
103 142
104 def __run_tasks(self): 143 def __run_tasks(self):
105 self.log("Starting %d tasks" % len(self.executables)) 144 self.log("Starting %d tasks" % len(self.executables))
@@ -108,16 +147,9 @@ class Experiment(object):
108 try: 147 try:
109 e.execute() 148 e.execute()
110 except: 149 except:
111 raise Exception("Executable failed: %s" % e) 150 raise Exception("Executable failed to start: %s" % e)
112 151
113 self.log("Sleeping until tasks are ready for release...") 152 self.__wait_for_ready()
114 start = time.time()
115 while lu.waiting_tasks() < len(self.executables):
116 if time.time() - start > 30.0:
117 s = "waiting: %d, submitted: %d" %\
118 (lu.waiting_tasks(), len(self.executables))
119 raise Exception("Too much time spent waiting for tasks! %s" % s)
120 time.sleep(1)
121 153
122 # Exact tracers (like overheads) must be started right after release or 154 # Exact tracers (like overheads) must be started right after release or
123 # measurements will be full of irrelevant records 155 # measurements will be full of irrelevant records
@@ -148,60 +180,40 @@ class Experiment(object):
148 def __save_results(self): 180 def __save_results(self):
149 os.rename(self.working_dir, self.finished_dir) 181 os.rename(self.working_dir, self.finished_dir)
150 182
151 def log(self, msg): 183 def __to_linux(self):
152 print("[Exp %s]: %s" % (self.name, msg)) 184 msgs = []
153
154 def run_exp(self):
155 self.__check_system()
156
157 succ = False
158 try:
159 self.__setup()
160 185
186 sched = lu.scheduler()
187 if sched != "Linux":
161 try: 188 try:
162 self.__run_tasks() 189 lu.switch_scheduler("Linux")
163 self.log("Saving results in %s" % self.finished_dir)
164 succ = True
165 except: 190 except:
166 traceback.print_exc() 191 msgs += ["Scheduler is %s, cannot switch to Linux!" % sched]
167 self.__kill_all()
168 raise ExperimentFailed(self.name)
169 finally:
170 self.__teardown()
171 finally:
172 self.log("Switching to Linux scheduler")
173
174 # Give the tasks 10 seconds to finish before bailing
175 start = time.time()
176 while lu.all_tasks() > 0:
177 if time.time() - start < 10.0:
178 raise SystemCorrupted("%d tasks still running!" %
179 lu.all_tasks())
180
181 lu.switch_scheduler("Linux")
182
183 if succ:
184 self.__save_results()
185 self.log("Experiment done!")
186 192
187 def __check_system(self):
188 running = lu.all_tasks() 193 running = lu.all_tasks()
189 if running: 194 if running:
190 raise SystemCorrupted("%d tasks already running!" % running) 195 msgs += ["%d real-time tasks still running!" % running]
191 196
192 sched = lu.scheduler() 197 if msgs:
193 if sched != "Linux": 198 raise SystemCorrupted("\n".join(msgs))
194 raise SystemCorrupted("Scheduler is %s, should be Linux" % sched)
195 199
196 def __setup(self): 200 def __setup(self):
201 self.__make_dirs()
202 self.__assign_executable_cwds()
203 self.__setup_tracers()
204
197 self.log("Writing %d proc entries" % len(self.proc_entries)) 205 self.log("Writing %d proc entries" % len(self.proc_entries))
198 map(methodcaller('write_proc'), self.proc_entries) 206 map(methodcaller('write_proc'), self.proc_entries)
199 207
208 self.log("Starting %d regular tracers" % len(self.regular_tracers))
209 map(methodcaller('start_tracing'), self.regular_tracers)
210
211 time.sleep(1)
212
200 self.log("Switching to %s" % self.scheduler) 213 self.log("Switching to %s" % self.scheduler)
201 lu.switch_scheduler(self.scheduler) 214 lu.switch_scheduler(self.scheduler)
202 215
203 self.log("Starting %d regular tracers" % len(self.regular_tracers)) 216 time.sleep(1)
204 map(methodcaller('start_tracing'), self.regular_tracers)
205 217
206 self.exec_out = open('%s/exec-out.txt' % self.working_dir, 'w') 218 self.exec_out = open('%s/exec-out.txt' % self.working_dir, 'w')
207 self.exec_err = open('%s/exec-err.txt' % self.working_dir, 'w') 219 self.exec_err = open('%s/exec-err.txt' % self.working_dir, 'w')
@@ -217,3 +229,32 @@ class Experiment(object):
217 self.log("Stopping regular tracers") 229 self.log("Stopping regular tracers")
218 map(methodcaller('stop_tracing'), self.regular_tracers) 230 map(methodcaller('stop_tracing'), self.regular_tracers)
219 231
232 def log(self, msg):
233 print("[Exp %s]: %s" % (self.name, msg))
234
235 def run_exp(self):
236 self.__to_linux()
237
238 succ = False
239 try:
240 self.__setup()
241
242 try:
243 self.__run_tasks()
244 self.log("Saving results in %s" % self.finished_dir)
245 succ = True
246 except Exception as e:
247 # Give time for whatever failed to finish failing
248 time.sleep(2)
249 self.__kill_all()
250
251                raise
252 finally:
253 self.__teardown()
254 finally:
255 self.log("Switching back to Linux scheduler")
256 self.__to_linux()
257
258 if succ:
259 self.__save_results()
260 self.log("Experiment done!")
diff --git a/run/litmus_util.py b/run/litmus_util.py
index 70da262..81f690b 100644
--- a/run/litmus_util.py
+++ b/run/litmus_util.py
@@ -20,7 +20,7 @@ def switch_scheduler(switch_to_in):
20 with open('/proc/litmus/active_plugin', 'w') as active_plugin: 20 with open('/proc/litmus/active_plugin', 'w') as active_plugin:
21 subprocess.Popen(["echo", switch_to], stdout=active_plugin) 21 subprocess.Popen(["echo", switch_to], stdout=active_plugin)
22 22
23 # it takes a bit to do the switch, sleep an arbitrary amount of time 23 # It takes a bit to do the switch, sleep an arbitrary amount of time
24 time.sleep(2) 24 time.sleep(2)
25 25
26 cur_plugin = scheduler() 26 cur_plugin = scheduler()
@@ -29,24 +29,21 @@ def switch_scheduler(switch_to_in):
29 (switch_to, cur_plugin)) 29 (switch_to, cur_plugin))
30 30
31def waiting_tasks(): 31def waiting_tasks():
32 reg = re.compile(r'^ready.*?(?P<READY>\d+)$', re.M) 32 reg = re.compile(r'^ready.*?(?P<WAITING>\d+)$', re.M)
33 with open('/proc/litmus/stats', 'r') as f: 33 with open('/proc/litmus/stats', 'r') as f:
34 data = f.read() 34 data = f.read()
35 35
36 # Ignore if no tasks are waiting for release 36 # Ignore if no tasks are waiting for release
 37    match = re.search(reg, data) 37    match = re.search(reg, data)
 38    ready = match.group("READY")
 39 38
 40    return 0 if not ready else int(ready) 39    return int(match.group("WAITING")) if match else 0
41 40
42def all_tasks(): 41def all_tasks():
43 reg = re.compile(r'^real-time.*?(?P<TASKS>\d+)$', re.M) 42 reg = re.compile(r'^real-time.*?(?P<TASKS>\d+)$', re.M)
44 with open('/proc/litmus/stats', 'r') as f: 43 with open('/proc/litmus/stats', 'r') as f:
45 data = f.read() 44 data = f.read()
46 45
 47    # Ignore if no tasks are waiting for release 46    match = re.search(reg, data)
 48    match = re.search(reg, data)
 49    ready = match.group("TASKS")
 50 47
 51    return 0 if not ready else int(ready) 48    return int(match.group("TASKS")) if match else 0
52 49
diff --git a/run/tracer.py b/run/tracer.py
index 6e1d05c..5e92a74 100644
--- a/run/tracer.py
+++ b/run/tracer.py
@@ -38,8 +38,8 @@ class LinuxTracer(Tracer):
38 stdout = open('%s/trace-cmd-stdout.txt' % self.output_dir, 'w') 38 stdout = open('%s/trace-cmd-stdout.txt' % self.output_dir, 'w')
39 stderr = open('%s/trace-cmd-stderr.txt' % self.output_dir, 'w') 39 stderr = open('%s/trace-cmd-stderr.txt' % self.output_dir, 'w')
40 40
41 execute = Executable(conf.BINS['trace-cmd'], extra_args, stdout, stderr) 41 execute = Executable(conf.BINS['trace-cmd'], extra_args,
42 execute.cwd = output_dir 42 stdout, stderr, output_dir)
43 self.bins.append(execute) 43 self.bins.append(execute)
44 44
45 @staticmethod 45 @staticmethod
diff --git a/run_exps.py b/run_exps.py
index 6531415..a15018d 100755
--- a/run_exps.py
+++ b/run_exps.py
@@ -2,22 +2,23 @@
2from __future__ import print_function 2from __future__ import print_function
3 3
4import common as com 4import common as com
5import config.config as conf
6import os 5import os
7import re 6import re
8import shutil 7import shutil
9import sys 8import sys
10import run.tracer as trace 9import run.tracer as trace
11 10
11from config.config import PARAMS,DEFAULTS
12from collections import namedtuple 12from collections import namedtuple
13from optparse import OptionParser 13from optparse import OptionParser
14from run.executable.executable import Executable 14from run.executable.executable import Executable
15from run.experiment import Experiment,ExperimentDone,ExperimentFailed,SystemCorrupted 15from run.experiment import Experiment,ExperimentDone,SystemCorrupted
16from run.proc_entry import ProcEntry 16from run.proc_entry import ProcEntry
17 17
18'''Customizable experiment parameters''' 18'''Customizable experiment parameters'''
19ExpParams = namedtuple('ExpParams', ['scheduler', 'duration', 'tracers', 19ExpParams = namedtuple('ExpParams', ['scheduler', 'duration', 'tracers',
20 'kernel', 'config_options']) 20 'kernel', 'config_options', 'file_params',
21 'pre_script', 'post_script'])
21'''Comparison of requested versus actual kernel compile parameter value''' 22'''Comparison of requested versus actual kernel compile parameter value'''
22ConfigResult = namedtuple('ConfigResult', ['param', 'wanted', 'actual']) 23ConfigResult = namedtuple('ConfigResult', ['param', 'wanted', 'actual'])
23 24
@@ -56,12 +57,13 @@ def parse_args():
56 parser.add_option('-d', '--duration', dest='duration', type='int', 57 parser.add_option('-d', '--duration', dest='duration', type='int',
57 help='duration (seconds) of tasks') 58 help='duration (seconds) of tasks')
58 parser.add_option('-o', '--out-dir', dest='out_dir', 59 parser.add_option('-o', '--out-dir', dest='out_dir',
59 help='directory for data output', default=("%s/run-data"%os.getcwd())) 60 help='directory for data output',
61 default=DEFAULTS['out-run'])
60 parser.add_option('-p', '--params', dest='param_file', 62 parser.add_option('-p', '--params', dest='param_file',
61 help='file with experiment parameters') 63 help='file with experiment parameters')
62 parser.add_option('-c', '--schedule-file', dest='sched_file', 64 parser.add_option('-c', '--schedule-file', dest='sched_file',
63 help='name of schedule files within directories', 65 help='name of schedule files within directories',
64 default=conf.DEFAULTS['sched_file']) 66 default=DEFAULTS['sched_file'])
65 parser.add_option('-f', '--force', action='store_true', default=False, 67 parser.add_option('-f', '--force', action='store_true', default=False,
66 dest='force', help='overwrite existing data') 68 dest='force', help='overwrite existing data')
67 parser.add_option('-j', '--jabber', metavar='username@domain', 69 parser.add_option('-j', '--jabber', metavar='username@domain',
@@ -96,7 +98,7 @@ def convert_data(data):
96 proc = (loc, match.group("CONTENT")) 98 proc = (loc, match.group("CONTENT"))
97 procs.append(proc) 99 procs.append(proc)
98 else: 100 else:
99 prog = match.group("PROG") or conf.DEFAULTS['prog'] 101 prog = match.group("PROG") or DEFAULTS['prog']
100 spin = (prog, match.group("ARGS")) 102 spin = (prog, match.group("ARGS"))
101 tasks.append(spin) 103 tasks.append(spin)
102 104
@@ -112,8 +114,8 @@ def fix_paths(schedule, exp_dir, sched_file):
112 args = args.replace(arg, abspath) 114 args = args.replace(arg, abspath)
113 break 115 break
114 elif re.match(r'.*\w+\.[a-zA-Z]\w*', arg): 116 elif re.match(r'.*\w+\.[a-zA-Z]\w*', arg):
115 print("WARNING: non-existent file '%s' may be referenced:\n\t%s" 117 sys.stderr.write("WARNING: non-existent file '%s' " % arg +
116 % (arg, sched_file)) 118 "may be referenced:\n\t%s" % sched_file)
117 119
118 schedule['task'][idx] = (task, args) 120 schedule['task'][idx] = (task, args)
119 121
@@ -181,25 +183,23 @@ def verify_environment(exp_params):
181 raise InvalidConfig(results) 183 raise InvalidConfig(results)
182 184
183 185
184def run_parameter(exp_dir, out_dir, params, param_name): 186def run_script(script_params, exp, exp_dir, out_dir):
185 '''Run an executable (arguments optional) specified as a configurable 187 '''Run an executable (arguments optional)'''
186 @param_name in @params.''' 188 if not script_params:
187 if conf.PARAMS[param_name] not in params:
188 return 189 return
189 190
190 script_params = params[conf.PARAMS[param_name]]
191
192 # Split into arguments and program name 191 # Split into arguments and program name
193 if type(script_params) != type([]): 192 if type(script_params) != type([]):
194 script_params = [script_params] 193 script_params = [script_params]
195 script_name = script_params.pop(0)
196 194
195    exp.log("Running %s" % " ".join(script_params))
196
197 script_name = script_params.pop(0)
197 script = com.get_executable(script_name, cwd=exp_dir) 198 script = com.get_executable(script_name, cwd=exp_dir)
198 199
199 out = open('%s/%s-out.txt' % (out_dir, param_name), 'w') 200 out = open('%s/%s-out.txt' % (out_dir, script_name), 'w')
200 prog = Executable(script, script_params, 201 prog = Executable(script, script_params, cwd=out_dir,
201 stderr_file=out, stdout_file=out) 202 stderr_file=out, stdout_file=out)
202 prog.cwd = out_dir
203 203
204 prog.execute() 204 prog.execute()
205 prog.wait() 205 prog.wait()
@@ -207,28 +207,41 @@ def run_parameter(exp_dir, out_dir, params, param_name):
207 out.close() 207 out.close()
208 208
209 209
210def get_exp_params(cmd_scheduler, cmd_duration, file_params): 210def make_exp_params(cmd_scheduler, cmd_duration, sched_dir, param_file):
211 '''Return ExpParam with configured values of all hardcoded params.''' 211 '''Return ExpParam with configured values of all hardcoded params.'''
212 kernel = copts = "" 212 kernel = copts = ""
213 213
214 scheduler = cmd_scheduler or file_params[conf.PARAMS['sched']] 214 # Load parameter file
215 duration = cmd_duration or file_params[conf.PARAMS['dur']] or\ 215 param_file = param_file or "%s/%s" % (sched_dir, DEFAULTS['params_file'])
216 conf.DEFAULTS['duration'] 216 if os.path.isfile(param_file):
217 fparams = com.load_params(param_file)
218 else:
219 fparams = {}
220
221 scheduler = cmd_scheduler or fparams[PARAMS['sched']]
222 duration = cmd_duration or fparams[PARAMS['dur']] or\
223 DEFAULTS['duration']
217 224
218 # Experiments can specify required kernel name 225 # Experiments can specify required kernel name
219 if conf.PARAMS['kernel'] in file_params: 226 if PARAMS['kernel'] in fparams:
220 kernel = file_params[conf.PARAMS['kernel']] 227 kernel = fparams[PARAMS['kernel']]
221 228
222 # Or required config options 229 # Or required config options
223 if conf.PARAMS['copts'] in file_params: 230 if PARAMS['copts'] in fparams:
224 copts = file_params[conf.PARAMS['copts']] 231 copts = fparams[PARAMS['copts']]
225 232
226 # Or required tracers 233 # Or required tracers
227 requested = [] 234 requested = []
228 if conf.PARAMS['trace'] in file_params: 235 if PARAMS['trace'] in fparams:
229 requested = file_params[conf.PARAMS['trace']] 236 requested = fparams[PARAMS['trace']]
230 tracers = trace.get_tracer_types(requested) 237 tracers = trace.get_tracer_types(requested)
231 238
239 # Or scripts to run before and after experiments
240 def get_script(name):
241 return fparams[name] if name in fparams else None
242 pre_script = get_script('pre')
243 post_script = get_script('post')
244
232 # But only these two are mandatory 245 # But only these two are mandatory
233 if not scheduler: 246 if not scheduler:
234 raise IOError("No scheduler found in param file!") 247 raise IOError("No scheduler found in param file!")
@@ -236,61 +249,70 @@ def get_exp_params(cmd_scheduler, cmd_duration, file_params):
236 raise IOError("No duration found in param file!") 249 raise IOError("No duration found in param file!")
237 250
238 return ExpParams(scheduler=scheduler, kernel=kernel, duration=duration, 251 return ExpParams(scheduler=scheduler, kernel=kernel, duration=duration,
239 config_options=copts, tracers=tracers) 252 config_options=copts, tracers=tracers, file_params=fparams,
253 pre_script=pre_script, post_script=post_script)
240 254
241 255def run_experiment(name, sched_file, exp_params, out_dir,
242def load_experiment(sched_file, cmd_scheduler, cmd_duration, 256 start_message, ignore, jabber):
243 param_file, out_dir, ignore, jabber):
244 '''Load and parse data from files and run result.''' 257 '''Load and parse data from files and run result.'''
245 if not os.path.isfile(sched_file): 258 if not os.path.isfile(sched_file):
246 raise IOError("Cannot find schedule file: %s" % sched_file) 259 raise IOError("Cannot find schedule file: %s" % sched_file)
247 260
248 dir_name, fname = os.path.split(sched_file) 261 dir_name, fname = os.path.split(sched_file)
249 exp_name = os.path.split(dir_name)[1] + "/" + fname
250 work_dir = "%s/tmp" % dir_name 262 work_dir = "%s/tmp" % dir_name
251 263
252 # Load parameter file 264 procs, execs = load_schedule(name, sched_file, exp_params.duration)
253 param_file = param_file or \
254 "%s/%s" % (dir_name, conf.DEFAULTS['params_file'])
255 if os.path.isfile(param_file):
256 file_params = com.load_params(param_file)
257 else:
258 file_params = {}
259
260 # Create input needed by Experiment
261 exp_params = get_exp_params(cmd_scheduler, cmd_duration, file_params)
262 procs, execs = load_schedule(exp_name, sched_file, exp_params.duration)
263 265
264 exp = Experiment(exp_name, exp_params.scheduler, work_dir, out_dir, 266 exp = Experiment(name, exp_params.scheduler, work_dir, out_dir,
265 procs, execs, exp_params.tracers) 267 procs, execs, exp_params.tracers)
266 268
269 exp.log(start_message)
270
267 if not ignore: 271 if not ignore:
268 verify_environment(exp_params) 272 verify_environment(exp_params)
269 273
270 run_parameter(dir_name, work_dir, file_params, 'pre') 274 run_script(exp_params.pre_script, exp, dir_name, work_dir)
271 275
272 exp.run_exp() 276 exp.run_exp()
273 277
274 run_parameter(dir_name, out_dir, file_params, 'post') 278 run_script(exp_params.post_script, exp, dir_name, out_dir)
275 279
276 if jabber: 280 if jabber:
277 jabber.send("Completed '%s'" % exp_name) 281 jabber.send("Completed '%s'" % name)
278 282
279 # Save parameters used to run experiment in out_dir 283 # Save parameters used to run experiment in out_dir
280 out_params = dict(file_params.items() + 284 out_params = dict(exp_params.file_params.items() +
281 [(conf.PARAMS['sched'], exp_params.scheduler), 285 [(PARAMS['sched'], exp_params.scheduler),
282 (conf.PARAMS['tasks'], len(execs)), 286 (PARAMS['tasks'], len(execs)),
283 (conf.PARAMS['dur'], exp_params.duration)]) 287 (PARAMS['dur'], exp_params.duration)])
284 288
285 # Feather-trace clock frequency saved for accurate overhead parsing 289 # Feather-trace clock frequency saved for accurate overhead parsing
286 ft_freq = com.ft_freq() 290 ft_freq = com.ft_freq()
287 if ft_freq: 291 if ft_freq:
288 out_params[conf.PARAMS['cycles']] = ft_freq 292 out_params[PARAMS['cycles']] = ft_freq
289 293
290 with open("%s/%s" % (out_dir, conf.DEFAULTS['params_file']), 'w') as f: 294 with open("%s/%s" % (out_dir, DEFAULTS['params_file']), 'w') as f:
291 f.write(str(out_params)) 295 f.write(str(out_params))
292 296
293 297
298def get_exps(opts, args):
299 '''Return list of experiment files or directories'''
300 if args:
301 return args
302
303 # Default to sched_file > generated dirs
304 if os.path.exists(opts.sched_file):
305 sys.stderr.write("Reading schedule from %s.\n" % opts.sched_file)
306 return [opts.sched_file]
307 elif os.path.exists(DEFAULTS['out-gen']):
308 sys.stderr.write("Reading schedules from %s/*.\n" % DEFAULTS['out-gen'])
309 sched_dirs = os.listdir(DEFAULTS['out-gen'])
310 return ['%s/%s' % (DEFAULTS['out-gen'], d) for d in sched_dirs]
311 else:
312        sys.stderr.write("Run with -h to view options.\n")
313 sys.exit(1)
314
315
294def setup_jabber(target): 316def setup_jabber(target):
295 try: 317 try:
         from run.jabber import Jabber
@@ -301,6 +323,7 @@ def setup_jabber(target):
                          "Disabling instant messages.\n")
         return None
 
+
 def setup_email(target):
     try:
         from run.emailer import Emailer
@@ -314,71 +337,82 @@ def setup_email(target):
         sys.stderr.write(message + " Disabling email message.\n")
         return None
 
+
+def make_paths(exp, out_base_dir, opts):
+    '''Translate experiment name to (schedule file, output directory) paths'''
+    path = "%s/%s" % (os.getcwd(), exp)
+    out_dir = "%s/%s" % (out_base_dir, os.path.split(exp.strip('/'))[1])
+
+    if not os.path.exists(path):
+        raise IOError("Invalid experiment: %s" % path)
+
+    if opts.force and os.path.exists(out_dir):
+        shutil.rmtree(out_dir)
+
+    if os.path.isdir(path):
+        sched_file = "%s/%s" % (path, opts.sched_file)
+    else:
+        sched_file = path
+
+    return sched_file, out_dir
+
+
 def main():
     opts, args = parse_args()
 
-    scheduler = opts.scheduler
-    duration = opts.duration
-    param_file = opts.param_file
-    out_base = os.path.abspath(opts.out_dir)
+    exps = get_exps(opts, args)
 
-    args = args or [opts.sched_file]
+    jabber = setup_jabber(opts.jabber) if opts.jabber else None
+    email = setup_email(opts.email) if opts.email else None
 
-    created = False
+    out_base = os.path.abspath(opts.out_dir)
+    created = False
     if not os.path.exists(out_base):
         created = True
         os.mkdir(out_base)
 
-    ran = 0
-    done = 0
-    succ = 0
-    failed = 0
-    invalid = 0
-
-    jabber = setup_jabber(opts.jabber) if opts.jabber else None
-    email = setup_email(opts.email) if opts.email else None
-
-    for exp in args:
-        path = "%s/%s" % (os.getcwd(), exp)
-        out_dir = "%s/%s" % (out_base, os.path.split(exp.strip('/'))[1])
+    ran = done = succ = failed = invalid = 0
 
-        if not os.path.exists(path):
-            raise IOError("Invalid experiment: %s" % path)
+    for i, exp in enumerate(exps):
+        sched_file, out_dir = make_paths(exp, out_base, opts)
+        sched_dir = os.path.split(sched_file)[0]
 
-        if opts.force and os.path.exists(out_dir):
-            shutil.rmtree(out_dir)
+        try:
+            start_message = "Loading experiment %d of %d." % (i+1, len(exps))
+            exp_params = make_exp_params(opts.scheduler, opts.duration,
+                                         sched_dir, opts.param_file)
 
-        if os.path.isdir(exp):
-            path = "%s/%s" % (path, opts.sched_file)
+            run_experiment(exp, sched_file, exp_params, out_dir,
+                           start_message, opts.ignore, jabber)
 
-        try:
-            load_experiment(path, scheduler, duration, param_file,
-                            out_dir, opts.ignore, jabber)
             succ += 1
         except ExperimentDone:
+            sys.stderr.write("Experiment '%s' already completed " % exp +
+                             "at '%s'\n" % out_base)
             done += 1
-            print("Experiment '%s' already completed at '%s'" % (exp, out_base))
         except (InvalidKernel, InvalidConfig) as e:
+            sys.stderr.write("Invalid environment for experiment '%s'\n" % exp)
+            sys.stderr.write("%s\n" % e)
             invalid += 1
-            print("Invalid environment for experiment '%s'" % exp)
-            print(e)
         except KeyboardInterrupt:
-            print("Keyboard interrupt, quitting")
+            sys.stderr.write("Keyboard interrupt, quitting\n")
            break
        except SystemCorrupted as e:
-            print("System is corrupted! Fix state before continuing.")
-            print(e)
+            sys.stderr.write("System is corrupted! Fix state before continuing.\n")
+            sys.stderr.write("%s\n" % e)
            break
-        except ExperimentFailed:
-            print("Failed experiment %s" % exp)
+        except Exception as e:
+            sys.stderr.write("Failed experiment %s\n" % exp)
+            sys.stderr.write("%s\n" % e)
            failed += 1
 
        ran += 1
 
+    # Clean out directory if it failed immediately
    if not os.listdir(out_base) and created and not succ:
        os.rmdir(out_base)
 
-    message = "Experiments ran:\t%d of %d" % (ran, len(args)) +\
+    message = "Experiments ran:\t%d of %d" % (ran, len(exps)) +\
              "\n  Successful:\t\t%d" % succ +\
              "\n  Failed:\t\t%d" % failed +\
              "\n  Already Done:\t\t%d" % done +\
@@ -386,6 +420,10 @@ def main():
 
     print(message)
 
+    if succ:
+        sys.stderr.write("Successful experiment data saved in %s.\n" %
+                         opts.out_dir)
+
     if email:
         email.send(message)
         email.close()
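
The `make_paths()` helper added above can be exercised on its own; the following is a minimal standalone sketch of it, with a stub `Opts` class standing in for the parsed command-line options (only the `sched_file` and `force` attributes the helper touches). The helper names and logic come from the diff; the stub class is an illustration, not part of the script.

```python
import os
import shutil

class Opts:
    # Stub for the parsed options object; defaults mirror run_exps.py's
    # documented defaults (SCHED_FILE = sched.py, --force off).
    sched_file = "sched.py"
    force = False

def make_paths(exp, out_base_dir, opts):
    '''Translate experiment name to (schedule file, output directory) paths'''
    path = "%s/%s" % (os.getcwd(), exp)
    out_dir = "%s/%s" % (out_base_dir, os.path.split(exp.strip('/'))[1])

    if not os.path.exists(path):
        raise IOError("Invalid experiment: %s" % path)

    # --force wipes any previous output for this experiment
    if opts.force and os.path.exists(out_dir):
        shutil.rmtree(out_dir)

    # A directory experiment contains its schedule file; a plain file
    # is itself the schedule file.
    if os.path.isdir(path):
        sched_file = "%s/%s" % (path, opts.sched_file)
    else:
        sched_file = path

    return sched_file, out_dir
```

Note that the new helper tests `os.path.isdir(path)` where the old inline code tested `os.path.isdir(exp)`; since `exp` is relative to the current directory, both resolve the same way, but the absolute `path` is the safer choice.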