author    Gary Bressler <garybressler@nc.rr.com>  2010-04-06 12:50:06 -0400
committer Gary Bressler <garybressler@nc.rr.com>  2010-04-06 12:50:06 -0400
commit    38c18a7992a59774bfc281348c718c5f7db4c557 (patch)
tree      f81e160c8057b69ab5b6129ca336df5803397eb8
parent    c7e3aaebdba7bf880534abd91a383b5543cf0be4 (diff)
parent    aeb6d3b598528649e70bfa224a24136f677abf00 (diff)
Merge branch 'master' of ssh://cvs.cs.unc.edu/cvs/proj/litmus/repo/unit-trace into wip-gary
Conflicts:
    unit_trace/viz/draw.py
    unit_trace/viz/schedule.py
    unit_trace/viz/viewer.py
-rw-r--r--  README                                       105
-rw-r--r--  TODO                                           6
-rw-r--r--  doc/LICENSE                                   22
-rw-r--r--  doc/header.html                                4
-rw-r--r--  doc/index.txt                                208
-rwxr-xr-x  install.py                                    21
-rwxr-xr-x  runtests.py                                   47
-rwxr-xr-x  unit-trace                                    40
-rw-r--r--  unit_trace/earliest.py                         5
-rw-r--r--  unit_trace/gedf_inversion_stat_printer.py     82
-rw-r--r--  unit_trace/gedf_test.py                      130
-rw-r--r--  unit_trace/latest.py                           2
-rw-r--r--  unit_trace/naive_trace_reader.py             165
-rw-r--r--  unit_trace/progress.py                         2
-rw-r--r--  unit_trace/sanitizer.py                        2
-rw-r--r--  unit_trace/stats.py                           39
-rw-r--r--  unit_trace/stdout_printer.py                  29
-rw-r--r--  unit_trace/trace_reader.py                    58
-rw-r--r--  unit_trace/viz/draw.py                      1377
19 files changed, 1815 insertions, 529 deletions
diff --git a/README b/README
deleted file mode 100644
index 4c1b772..0000000
--- a/README
+++ /dev/null
@@ -1,105 +0,0 @@
1See the LITMUS Wiki page for a general explanation of this tool.
2
3unit_trace consists of two modules and a core. The ``core'' is basically
4a bunch of code, implemented as Python iterators, which converts the
5raw trace data into a sequence of record objects, implemented in
6Python. The modules are:
7
81) A simple module that outputs the contents of each record to
9stdout. This module, along with most of the core, can be found in the
10reader/ directory. There is a sample script -- look at
11sample_script.py in the reader/ directory (it's pretty
12self-explanatory). Note that Mac is the one who coded most of
13this, though I can probably answer any questions about it since
14I've had to go in there from time to time.
15
162) The visualizer. Now, the GUI as it stands is very basic -- it's
17basically just a shell for the core visualizer component. How to open
18a file is obvious -- but note that you can open several files at a
19time (since more often than not a trace consists of more than one
20file, typically one for each CPU).
21
22Most of the code for this is in the viz/ directory, but to run it, the
23file you want to execute is visualizer.py (in the main directory).
24
25A few notes on how to use the GUI:
26
27-- How to scroll is pretty obvious, though I still need to implement
28 keypresses (very trivial, but when making a GUI component from
29 scratch it always seems like there are a million little things that
30 you need to do :)
31
32-- You can view either by task or by CPU; click the tabs at the top.
33
34-- Mousing over the items (not the axes, though, since those are
35 pretty self-explanatory) gives you information about the item that
36 you moused over, displayed at the bottom.
37
38-- You can select items. You can click them individually, one at a
39 time, or you can drag or ctrl-click to select multiple.
40
41-- What you have selected is independent of what mode (task or CPU)
42 you are operating in. So if you are curious, say, when a certain
43 job is running compared to other jobs on the same CPU, you can
44 click a job in task mode and then switch to CPU mode, and it will
45 remain selected.
46
47-- Right-click to get a menu of all the items you have selected (in
48 the future this menu will be clickable, so that you can get the
49 information about an item in its own window).
50
51-- It is a bit laggy when lots of stuff is on the screen at once. This
52 should be fairly easy to optimize, if I have correctly identified
53 the problem, but it's not a huge issue (it's not _that_ slow).
54
55But wait, there's more:
56
57-- As of now unit-trace has no way to determine the algorithm that was
58 used on the trace you're loading. This is important since certain
59 sections of code work only with G-EDF in particular. The point of
60 having this special code is either to filter out bogus data or to
61 generate extra information about the schedule (e.g. priority
62 inversions). Of course, you can leave these extra steps out and
63 it will still work, but you might get extra ``bogus'' information
64 generated by the tracer or you might not get all the information
65 you want.
66
67-- To add or remove these extra steps, take a look at visualizer.py
68 and sample_script.py. You will see some code like this:
69
70 stream = reader.trace_reader.trace_reader(file_list)
71 #stream = reader.sanitizer.sanitizer(stream)
72 #stream = reader.gedf_test.gedf_test(stream)
73
74 Uncommenting those lines will run the extra steps in the pipeline.
75 The sanitizer filters out some bogus data (stuff like ``pid 0''),
76 but so far it's only been coded for a select number of traces.
77 gedf_test generates extra information about G-EDF schedules
78 (the traces that are named st-g?-?.bin). If you try to run
79 gedf_test on anything else, it will most likely fail.
80
81-- What traces are you going to use? Well, you will probably want to
82 use your own, but there are some samples you can try in the traces/
83 directory (a couple of them do give error messages, however).
84
85How to install:
86
87You should type
88
89git clone -b wip-gary ssh://cvs.cs.unc.edu/cvs/proj/litmus/repo/unit-trace.git
90
91to check out the repository. Note that you shouldn't check out the
92master branch, as it's pretty outdated. It's all Python so far, so no
93compiling or anything like that is necessary.
94
95Requirements:
96
97You're going to need Python 2.5 to run this. You'll also need to
98install the pycairo and pygtk libraries. If anyone has questions about
99how to do this or what these are, ask me.
100
101Miscellaneous:
102
103Of course, let me know if you find any bugs (I'm sure there are
104plenty, though, since this is fairly alpha software), if you're
105unable to run it, or if you have any questions. \ No newline at end of file
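The pipeline this README describes can be sketched in a few lines of Python. This is only an illustration: it assumes the newer `unit_trace` package layout used elsewhere in this commit, and the file names are placeholders.

    # Minimal sketch of the record pipeline described above (assumed layout).
    from unit_trace import trace_reader, sanitizer, gedf_test, stdout_printer

    file_list = ['st-g6-0.bin', 'st-g6-1.bin']     # placeholder trace files

    stream = trace_reader.trace_reader(file_list)  # raw records, read lazily
    stream = sanitizer.sanitizer(stream)           # optional: drop bogus records
    stream = gedf_test.gedf_test(stream)           # optional: add G-EDF inversion records
    stdout_printer.stdout_printer(stream)          # consume the stream and print it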
diff --git a/TODO b/TODO
index 9a50974..e69de29 100644
--- a/TODO
+++ b/TODO
@@ -1,6 +0,0 @@
1- Mac
2 - Bug report: If one trace file is empty, reader screws up.
3- Gary
4 - Allow interval to be specified for visualizer
5 - Only pull records on demand, if possible
6 - Bug report: per CPU view button does not exist in Mac's window manager(s)
diff --git a/doc/LICENSE b/doc/LICENSE
new file mode 100644
index 0000000..30a9810
--- /dev/null
+++ b/doc/LICENSE
@@ -0,0 +1,22 @@
1Copyright (c) 2010, UNC Real-Time Systems Group
2All rights reserved.
3
4Redistribution and use in source and binary forms, with or without
5modification, are permitted provided that the following conditions are met:
6
7 * Redistributions of source code must retain the above copyright notice,
8 this list of conditions and the following disclaimer.
9 * Redistributions in binary form must reproduce the above copyright notice,
10 this list of conditions and the following disclaimer in the documentation
11 and/or other materials provided with the distribution.
12
13THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
14ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
15WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
16DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
17FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
18DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
19SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
20CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
21OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
22OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/doc/header.html b/doc/header.html
index 58dc0da..b53a7a3 100644
--- a/doc/header.html
+++ b/doc/header.html
@@ -23,6 +23,10 @@
23 margin-top: 50px; 23 margin-top: 50px;
24 text-align: center; 24 text-align: center;
25 } 25 }
26 h3 {
27 margin-top: 30px;
28 text-align: center;
29 }
26 codeblock { 30 codeblock {
27 padding: 0px 15px; 31 padding: 0px 15px;
28 margin: 5px 0 15px; 32 margin: 5px 0 15px;
diff --git a/doc/index.txt b/doc/index.txt
index 78551c9..9360e40 100644
--- a/doc/index.txt
+++ b/doc/index.txt
@@ -10,103 +10,157 @@ information about scheduler behavior.
10</span> 10</span>
11 11
12## About This Document ## 12## About This Document ##
13This document contains instructions for people who want to use and/or contribute to Unit-Trace. 13This document is both the official Unit-Trace website and the complete Unit-Trace documentation.
14It is the complete documentation for Unit-Trace.
15 14
16## Architecture ## 15## Obtaining Unit-Trace ##
17Before trying to use Unit-Trace, it will help to understand the architecture of Unit-Trace. 16The latest public release of Unit-Trace (currently 2010.1) is available at:<br>
17[http://cs.unc.edu/~mollison/unit-trace/unit-trace.tar.gz][release]
18 18
19Unit-Trace performs various options on **trace files**. Oftentimes, when scheduler tracing takes 19Members of UNC's Real-Time Group should obtain Unit-Trace using:<br>
20place, multiple trace files are generated for each experiment (e.g. one per CPU). We call a 20<codeblock>git clone ssh://cvs.cs.unc.edu/cvs/proj/litmus/repo/unit-trace.git</codeblock>
21related group of trace files a **trace**.
22 21
23The user interacts with the tool using `unit-trace`, a Python script which is to be installed 22## Installing Unit-Trace ##
24on the local executable path. 23Dependencies: Python 2.6; for the visualizer, pygtk and pycairo.
25`unit-trace` invokes operations provided by the `unit_trace` Python module, which is installed
26in the local `site_packages` directory (the default install location for Python modules).
27 24
28The `unit_trace` module provides submodule(s) for parsing trace files into a **stream** of Python objects (which we call **records**). 25Unit-Trace consists of a Python module called `unit_trace` (encapsulated in the `unit_trace` directory) and a front-end script called `unit-trace`.
29This stream is then passed from one `unit_trace` submodule to the next (like a pipe), undergoing
30various transformations in the process, and ultimately arriving at one or more output submodules.
31Intermediate modules generally add records; for example, the `gedf_test` module adds records to indicate
32priority inversions, which can be treated appropriately by output submodules.
33 26
34This architecture provides a clean and easy-to-use interface for both users and contributors. 27You can use `install.py` to install Unit-Trace, or install manually by copying the `unit-trace` script and the `unit_trace` directory
35The stream is implemented using Python iterators. 28to `~/bin`.
36This allows records to be evaluated lazily (i.e. on an as-needed basis), making it possible to pull into memory only
37the records that are needed, and only as long as they are needed.
38This is important for dealing with very large traces.
39 29
40## Obtaining Unit-Trace ## 30Make sure `~/bin` is on your `PATH`.
41Members of UNC's Real-Time Group can obtain Unit-Trace using:
42<codeblock>git clone ssh://cvs.cs.unc.edu/cvs/proj/litmus/repo/unit-trace.git</codeblock>
43 31
44## Installing Unit-Trace ## 32## Using Unit-Trace ##
45Unit-Trace is based on Python 2.6, so make sure that is available on your system. 33Command line usage:<br>
34<codeblock>unit-trace &lt;one or more trace files&gt; [flags]</codeblock>.
46 35
47Unit-Trace can be installed manually by copying the `unit-trace` script and `unit_trace` Python module, as described previously. 36Each flag turns on or off a unit-trace submodule. The available submodules are
48Alternatively, you can use `sudo install.py` to (re)install Unit-Trace. 37given below.
38
39You can specify module flags in any order.
40
41For a quick usage reference (including a list of modules), type `unit-trace` on the command line, without any arguments.
42
43### Example Use Case ###
44Let's assume you're in a directory with a bunch of trace files with
45the extension `.bin`.
46Each trace file is assumed to be the trace of a single CPU, and all trace files in the directory are from the same experimental run.
47(The sample_traces directory, included with Unit Trace, will work for this example.)
48
49Suppose you want to get a list of the 10 longest priority inversions in a LITMUS<sup>RT</sup> trace.
50Use the following command:<br>
51<codeblock>unit-trace *.bin -c -g -i 10</codeblock>.
52
53Now, suppose you want to visualize one of those priority inversions.
54Given in the output for each one are the event IDs at the beginning and end of the priority inversion.
55Use the following command:<br>
56<codeblock>unit-trace *.bin -e &lt;the first event ID&gt; -l &lt;the second event ID&gt; -v</codeblock>.
57
58Note that if the visualizer stops at the second specified event (which it will), any tasks running at that point will appear to
59keep running forever. If you specify a slightly later second event ID (e.g. 100 greater than the actual one), this won't affect
60the jobs you're actually interested in.
61
62Now, suppose you want to see specific textual output for all events. (You could also specify a range if you wanted to.)<br>
63<codeblock>unit-trace *.bin -o > output</codeblock>
64
65This example provides a basic overview of what you can do with Unit-Trace. Detailed information about all available submodules is provided in
66the next section.
67
68## List of Submodules ##
69
70There are five basic kinds of submodules.
71
72- Input submodules, which read trace files
73- Filter submodules, which filter out event records
74- Test submodules, which perform some kind of test
75- Output submodules, which display the results
76- Miscellaneous
77
78All submodules are listed and summarized in the tables below.
79Some submodules have further documentation, appearing later in this document.
49 80
50## Using Unit-Trace ##
51Type `unit-trace` (without options) to view usage information.
52
53In summary, trace files must be specified.
54Flags are used to enable any desired submodules.
55Some flags have accompanying parameter(s) that are passed to the submodule.
56The order in which submodules process records is pre-determined by the `unit-trace` script,
57so the user can specify flags on the command line in any order.
58
59## Example/Use Case ##
60Suppose that Alice wants to perform G-EDF testing on the LITMUS<sup>RT</sup> traces included in the sample_traces/ folder.
61
62The LITMUS<sup>RT</sup> tracing mechanism outputs superfluous events at the beginning of the trace that will not appear "correct" for
63G-EDF testing (for example, multiple job releases that never complete, indicating the initialization of a task).
64Alice uses the following command to print out (-o) the first 50 records (-m 50), looking for the end of the bogus records:
65<codeblock>unit-trace -m 50 -o *.bin</codeblock>.
66
67Seeing that she hasn't yet reached useful records, she uses the following command to print out (-o) 50 records (-m 50), skipping the
68first 50 (-s 50).
69<codeblock>unit-trace -m 50 -s 50 -o *.bin</codeblock>.
70She is able to see that meaningful releases begin at time `37917282934190`.
71
72She now commences G-EDF testing (-g), starting at the time of interest (-e <earliest time>).
73Because of the lengthy output to be expected, she redirects to standard out.
74She also uses the -c option to clean up additional records that are known to be erroneous, and
75will break the G-EDF tester.
76<codeblock>unit-trace -c -e 37917282934190 -g -o *.bin > output</codeblock>.
77
78OK, everything worked. Alice can now grep through the output file and see priority inversions.
79She sees a particularly long priority inversion, and decides to generate a visualization (-v) of part of the schedule.
80<codeblock>unit-trace -c -e 37918340000000 -l 37919000000000 -v *.bin</codeblock>.
81(NOTE: Currently, this still shows the entire schedule, which likely won't be feasible for larger traces and is a bug.)
82
83## Submodules ##
84### Input Submodules ### 81### Input Submodules ###
85<table border=1> 82<table border=1>
86<tr><td>Name</td><td>Flag</td><td>Options</td><td>Description</td></tr> 83<tr><td>Name</td><td>Flag</td><td>Parameters</td><td>Description</td></tr>
87<tr><td>trace_parser</td><td>(on by default)</td><td>(None)</td><td>Parses LITMUS<sup>RT</sup> traces</td></tr> 84<tr>
85<td>trace_parser</td>
86<td>always on, unless/until modules for other trace formats are contributed</td>
87<td>(None)</td><td>Parses LITMUS<sup>RT</sup> traces</td></tr>
88</table> 88</table>
89### Intermediate Submodules ### 89### Filter Submodules ###
90<table border=1> 90<table border=1>
91<tr><td>Name</td><td>Flag</td><td>Options</td><td>Description</td></tr> 91<tr><td>Name</td><td>Flag</td><td>Parameters</td><td>Description</td></tr>
92<tr><td>earliest</td><td>-e</td><td>time</td><td>Filters out records before the given time</td></tr> 92<tr><td>earliest</td><td>-e</td><td>event ID</td><td>Filters out records before the given event ID. (Event IDs are assigned in order of event record timestamp, and are displayed by the `stdout_printer` submodule.)</td></tr>
93<tr><td>latest</td><td>-l</td><td>time</td><td>Filters out records after the given time</td></tr> 93<tr><td>latest</td><td>-l</td><td>event ID</td><td>Filters out records after the given event ID.</td></tr>
94<tr><td>skipper</td><td>-s</td><td>number n</td><td>Skips the first n records</td></tr> 94<tr><td>skipper</td><td>-s</td><td>number n</td><td>Skips the first n records</td></tr>
95<tr><td>maxer</td><td>-m</td><td>number n</td><td>Allows at most n records to be parsed</td></tr> 95<tr><td>maxer</td><td>-m</td><td>number n</td><td>Allows at most n records to be parsed</td></tr>
96<tr><td>sanitizer</td><td>-c</td><td>(None)</td><td>Cleans up LITMUS<sup>RT</sup> traces for G-EDF testing.</td></tr> 96<tr><td>sanitizer</td><td>-c</td><td>(None)</td><td>Modifies LITMUS<sup>RT</sup> traces. To be used in conjunction with the G-EDF tester. To summarize, LITMUS<sup>RT</sup> traces have some bogus records that need to be removed or altered in order for a (potentially) valid schedule to be represented.</td></tr>
97<tr><td>progress</td><td>-p</td><td>(None)</td><td>Outputs progress info (e.g number of records parsed so far, total time to process trace) to std error.</td></tr> 97</table>
98<tr><td>stats</td><td>-i</td><td>(None)</td><td>Outputs statistics about G-EDF inversions. To be deprecated (incorporated into G-EDF tester).</td></tr> 98### Test Submodules ###
99<table border=1>
100<tr><td>Name</td><td>Flag</td><td>Options</td><td>Description</td></tr>
99<tr><td>gedf_test</td><td>-g</td><td>(None)</td><td>Performs G-EDF testing.</td></tr> 101<tr><td>gedf_test</td><td>-g</td><td>(None)</td><td>Performs G-EDF testing.</td></tr>
100</table> 102</table>
101### Output Submodules ### 103### Output Submodules ###
102<table border=1> 104<table border=1>
103<tr><td>Name</td><td>Flag</td><td>Options</td><td>Description</td></tr> 105<tr><td>Name</td><td>Flag</td><td>Options</td><td>Description</td></tr>
104<tr><td>stdout_printer</td><td>-o</td><td>(None)</td><td>Prints records to standard out</td></tr> 106<tr><td>stdout_printer</td><td>-o</td><td>(None)</td><td>Prints records to standard out. You should probably redirect the output to a file when you use this.</td></tr>
105<tr><td>visualizer</td><td>-v</td><td>(None)</td><td>Visualizes records</td></tr> 107<tr><td>visualizer</td><td>-v</td><td>(None)</td><td>Visualizes records. You should probably use filters in conjunction with this submodule. Otherwise, it'll take forever to render, and do you <i>really</i> want to visualize the <i>entire</i> trace, anyway?</td></tr>
108<tr><td>gedf_inversion_stat_printer</td><td>-i</td><td>number n</td><td>Outputs statistics about G-EDF inversions, and the n longest inversions. (You can specify n as 0 if you want.)</td></tr>
109</table>
110### Miscellaneous Submodules ###
111<table border=1>
112<tr><td>Name</td><td>Flag</td><td>Options</td><td>Description</td></tr>
113<tr><td>progress</td><td>-p</td><td>(None)</td><td>Outputs progress info (e.g. number of records parsed so far, total time to process trace) to standard error.</td></tr>
106</table> 114</table>
107 115
116## Specific Submodule Documentation ##
117
118Here, documentation is provided for submodules for which relevant information is not considered to be self-evident.
119
120(So far, it hasn't been necessary to add anything here, but once questions arise, it will become clear which submodules need to be documented more fully.)
121
122## Gotchas ##
123
124Here, documentation is provided for potentially confusing topics that are not documented elsewhere.
125
126### A Note on Time ###
127
128In general, Unit-Trace is agnostic about the units of time used in the trace files.
129This is not expected to change in the future.
130The exception is output modules.
131Currently, some output modules assume time is in nanoseconds; they convert it into milliseconds and print the 'ms' unit indicator, where convenient.
132This behavior may have to be modified in the future if non-nanosecond trace files are used.
133
134## Known Bugs ##
135
136Here, documentation of known bugs is provided.
137
138(No known bugs right now --- but there may be some hiding...)
139
140
108## Development ## 141## Development ##
109TODO: Information on how to contribute will be documented thoroughly here. 142Please send patches to [Mac Mollison][mac] or, if you are in the `litmus` group at UNC, just work with the git repo directly.
143
144The following "rules" are currently in place:
145
146- Please follow PEP 8 style guidelines when possible.
147- Update the documentation when you do something that makes it obsolete or incomplete
148- Don't break the overall architecture (described below)
149
150### Architecture ###
151If you are interested in contributing to Unit-Trace, you probably ought to know a bit about its overall architecture.
152
153Generally speaking, each Unit-Trace submodule is a Python generator. It accepts a Python iterator object as input and returns a Python iterator
154object as output. (You may want to look up the relevant Python terminology.)
155
156The exceptions are input submodules, which do not take any input other than a list of trace files, and the output submodules, which do not return
157iterator objects.
158
159The `unit-trace` script connects together the desired modules (i.e. those specified on the command line) using Python iterators.
160
161This architecture provides two advantages.
162First, because Python iterators are evaluated lazily, it is not necessary to read an entire trace file into memory in order to run `unit-trace` on it.
163Second, it provides an easy-to-understand programming model.
110 164
111## Documentation ## 165## Documentation ##
112The source code for this page is included in the `doc` folder that comes with Unit-Trace. 166The source code for this page is included in the `doc` folder that comes with Unit-Trace.
@@ -114,6 +168,9 @@ Contributors are required to make appropriate amendments to this documentation.
114 168
115The source is stored in [Markdown format][markdown] in the file `index.txt` and can be built into HTML with `make`. 169The source is stored in [Markdown format][markdown] in the file `index.txt` and can be built into HTML with `make`.
116 170
171## License ##
172Unit-Trace is released under the [Simplified BSD License][license].
173
117## Credits ## 174## Credits ##
118This project was created by and is maintained by the [Real-Time Systems Group][group] at the [University of North Carolina at Chapel Hill][uncch], 175This project was created by and is maintained by the [Real-Time Systems Group][group] at the [University of North Carolina at Chapel Hill][uncch],
119[Department of Computer Science][csdept]. A detailed explanation of the tool is available in [this paper][ospert_paper], from 176[Department of Computer Science][csdept]. A detailed explanation of the tool is available in [this paper][ospert_paper], from
@@ -130,3 +187,6 @@ We hope to have additional contributors in the future.
130[ospert_paper]: http://www.cs.unc.edu/%7Eanderson/papers/ospert09.pdf 187[ospert_paper]: http://www.cs.unc.edu/%7Eanderson/papers/ospert09.pdf
131[ospert]: http://www.artist-embedded.org/artist/Overview,1750.html 188[ospert]: http://www.artist-embedded.org/artist/Overview,1750.html
132[markdown]: http://daringfireball.net/projects/markdown/ 189[markdown]: http://daringfireball.net/projects/markdown/
190[mac]: mailto:mollison@cs.unc.edu
191[license]: LICENSE
192[release]: http://cs.unc.edu/~mollison/unit-trace/unit-trace.tar.gz
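The architecture section above says each submodule is a Python generator that consumes a record iterator and yields one. A minimal sketch of a hypothetical filter submodule in that style follows; the name `drop_completions` is made up, while `record_type` and `type_name` are the attributes the real submodules in this commit test against.

    # Hypothetical filter submodule: consumes a record iterator, yields a thinned one.
    def drop_completions(stream):
        for record in stream:
            if record.record_type == 'event' and record.type_name == 'completion':
                continue      # drop completion events
            yield record      # pass everything else through unchanged

    # usage (in the style of the unit-trace script): stream = drop_completions(stream)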
diff --git a/install.py b/install.py
index 935c7df..520a347 100755
--- a/install.py
+++ b/install.py
@@ -4,13 +4,12 @@
4# Description 4# Description
5################################################################################ 5################################################################################
6 6
7# This script installs (or re-installs) the unit_trace Python module, so that 7# This script installs (or re-installs) the unit_trace Python module, so that
8# the unit_trace library can be imported with `import unit_trace` by a Python 8# the unit_trace library can be imported with `import unit_trace` by a Python
9# script, from anywhere on the system. 9# script, from anywhere on the system.
10 10
11# The installation merely consists of copying the unit_trace directory to 11# The installation merely copies the unit-trace script and the unit_trace
12# the default site_packages directory (which is the appropriate place to 12# folder to ~/bin. You can also do this manually if you want to.
13# install Python packages).
14 13
15# We do not use the Distutils system, provided by Python, because that 14# We do not use the Distutils system, provided by Python, because that
16# is less convenient :-) 15# is less convenient :-)
@@ -30,7 +29,8 @@ import os
30################################################################################ 29################################################################################
31 30
32# Determine destination directory for unit_trace module 31# Determine destination directory for unit_trace module
33dst = os.path.join(sys.prefix,'lib/python2.6/site-packages/unit_trace') 32dst = '~/bin/unit_trace'
33dst = os.path.expanduser(dst)
34 34
35try: 35try:
36 # If the destination exists 36 # If the destination exists
@@ -40,17 +40,16 @@ try:
40 # Copy source to destination 40 # Copy source to destination
41 shutil.copytree('unit_trace', dst) 41 shutil.copytree('unit_trace', dst)
42except: 42except:
43 print("Error occurred. Make sure you run the script as root/with sudo.") 43 print "Unexpected error:", sys.exc_info()
44 exit() 44 exit()
45 45
46# Determine destination directory for unit-trace script 46# Determine destination directory for unit-trace script
47dst = '/usr/bin/unit-trace' 47dst = '~/bin/unit-trace'
48dst = os.path.expanduser(dst)
48try: 49try:
49 shutil.copyfile('unit-trace', dst) 50 shutil.copyfile('unit-trace', dst)
50 # Keep same permissions 51 # Keep same permissions
51 shutil.copystat('unit-trace', dst) 52 shutil.copystat('unit-trace', dst)
52except: 53except:
53 print("Error occurred. Make sure you run the script as root/with sudo.") 54 print "Unexpected error:", sys.exc_info()
54 exit() 55 exit()
55
56
diff --git a/runtests.py b/runtests.py
deleted file mode 100755
index 88dddf4..0000000
--- a/runtests.py
+++ /dev/null
@@ -1,47 +0,0 @@
1#!/usr/bin/python
2
3###############################################################################
4# Description
5###############################################################################
6
7# Unit Tests
8
9
10###############################################################################
11# Imports
12###############################################################################
13
14import trace_reader
15import naive_trace_reader
16import os
17
18###############################################################################
19# Trace files
20###############################################################################
21
22files = [
23'./sample_traces/st-g6-0.bin',
24'./sample_traces/st-g6-1.bin',
25'./sample_traces/st-g6-2.bin',
26'./sample_traces/st-g6-3.bin',
27]
28
29###############################################################################
30# Tests
31###############################################################################
32
33# Does our fancy trace reader get the same number of records as our naive one?
34# (See naive_trace_reader.py for further explanation)
35def test1():
36 stream = trace_reader.trace_reader(files)
37 num_records = len(list(stream))
38 stream = naive_trace_reader.trace_reader(files)
39 naive_num_records = len(list(stream))
40
41 # We need a +1 here because the fancy reader produces a 'meta' record
42 # indicating the number of CPUs
43 if num_records != naive_num_records + 1:
44 return "[FAIL]"
45 return "[SUCCESS]"
46
47print "Test 1: %s" % (test1())
diff --git a/unit-trace b/unit-trace
index 5328b99..a146874 100755
--- a/unit-trace
+++ b/unit-trace
@@ -15,7 +15,7 @@ import unit_trace
15from unit_trace import trace_reader 15from unit_trace import trace_reader
16from unit_trace import sanitizer 16from unit_trace import sanitizer
17from unit_trace import gedf_test 17from unit_trace import gedf_test
18from unit_trace import stats 18from unit_trace import gedf_inversion_stat_printer
19from unit_trace import stdout_printer 19from unit_trace import stdout_printer
20from unit_trace import viz 20from unit_trace import viz
21from unit_trace import progress 21from unit_trace import progress
@@ -36,8 +36,8 @@ parser.add_option("-p", "--progress", action="store_true", dest="progress",
36 default=False, help="Show parsing progress") 36 default=False, help="Show parsing progress")
37parser.add_option("-g", "--gedf", action="store_true", dest="gedf", 37parser.add_option("-g", "--gedf", action="store_true", dest="gedf",
38 default=False, help="Run G-EDF test") 38 default=False, help="Run G-EDF test")
39parser.add_option("-i", "--info", action="store_true", dest="info", 39parser.add_option("-i", "--info", dest="num_inversions", default=-1, type=int,
40 default=False, help="Show statistical info") 40 help="Print the n longest inversions, plus statistical info")
41parser.add_option("-o", "--stdout", action="store_true", dest="stdout", 41parser.add_option("-o", "--stdout", action="store_true", dest="stdout",
42 default=False, help="Use stdout_printer") 42 default=False, help="Use stdout_printer")
43parser.add_option("-v", "--visual", action="store_true", dest="visualize", 43parser.add_option("-v", "--visual", action="store_true", dest="visualize",
@@ -90,10 +90,6 @@ if options.progress is True:
90if options.gedf is True: 90if options.gedf is True:
91 stream = gedf_test.gedf_test(stream) 91 stream = gedf_test.gedf_test(stream)
92 92
93# Produce a statistics record
94if options.info is True:
95 stream = stats.stats(stream)
96
97# Filter some records out 93# Filter some records out
98#def my_filter(record): 94#def my_filter(record):
99# if record.record_type == 'error' and record.type_name == 'inversion_end': 95# if record.record_type == 'error' and record.type_name == 'inversion_end':
@@ -102,13 +98,25 @@ if options.info is True:
102# return True 98# return True
103#stream = filter(my_filter, stream) 99#stream = filter(my_filter, stream)
104 100
105# Output 101# Tee by the number of possible outputs
106if options.stdout is True and options.visualize is True: 102# This might cause a performance bottleneck that could be eliminated by
107 import itertools 103# checking how many we actually need :-)
108 stream1, stream2 = itertools.tee(stream,2) 104import itertools
105stream1, stream2, stream3 = itertools.tee(stream,3)
106
107# Call standard out printer
108if options.stdout is True:
109 stdout_printer.stdout_printer(stream1) 109 stdout_printer.stdout_printer(stream1)
110 viz.visualizer.visualizer(stream2) 110
111elif options.stdout is True: 111# Print G_EDF inversion statistics
112 stdout_printer.stdout_printer(stream) 112if options.num_inversions > -1:
113elif options.visualize is True: 113 if options.gedf is not True:
114 viz.visualizer.visualizer(stream) 114 import sys
115 sys.stderr.write("You must enable the G-EDF test module to print" +
116 " G-EDF inversion statistics\n")
117 else:
118 gedf_inversion_stat_printer.gedf_inversion_stat_printer(stream2,options.num_inversions)
119
120# Call visualizer
121if options.visualize is True:
122 viz.visualizer.visualizer(stream3)
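The rewritten script above fans the single lazy stream out with `itertools.tee` so that up to three output submodules can each consume their own copy. A standalone sketch of that pattern, with a toy generator standing in for the record stream:

    import itertools

    def records():
        for i in range(5):    # stand-in for the lazy record stream
            yield i

    s1, s2 = itertools.tee(records(), 2)
    print("printer saw: %s" % list(s1))      # printer saw: [0, 1, 2, 3, 4]
    print("visualizer saw: %s" % list(s2))   # visualizer saw: [0, 1, 2, 3, 4]

As the comment in the script notes, `tee` buffers items until every copy has consumed them, which is the potential performance cost of always teeing three ways.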
diff --git a/unit_trace/earliest.py b/unit_trace/earliest.py
index eaf0cf8..03e08d7 100644
--- a/unit_trace/earliest.py
+++ b/unit_trace/earliest.py
@@ -11,11 +11,12 @@
11def earliest(stream, earliest): 11def earliest(stream, earliest):
12 for record in stream: 12 for record in stream:
13 if record.record_type=="event": 13 if record.record_type=="event":
14 if record.when < earliest: 14 if record.id < earliest:
15 pass 15 pass
16 else: 16 else:
17 yield record 17 yield record
18 break 18 break
19 yield record 19 else:
20 yield record
20 for record in stream: 21 for record in stream:
21 yield record 22 yield record
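The reworked `earliest` filter above drops event records until the first one whose `id` reaches the threshold, then hands the rest of the stream through untouched. A simplified, self-contained sketch of that skip-until idea (toy records, not the real record objects; the real filter also passes non-event records through before the threshold):

    class Rec(object):                        # toy stand-in for an event record
        def __init__(self, id):
            self.record_type = 'event'
            self.id = id

    def skip_until(stream, earliest_id):
        for record in stream:
            if record.record_type == 'event' and record.id < earliest_id:
                continue                      # not there yet: drop it
            yield record                      # first qualifying record and everything after

    stream = (Rec(i) for i in range(10))
    print([r.id for r in skip_until(stream, 7)])   # [7, 8, 9]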
diff --git a/unit_trace/gedf_inversion_stat_printer.py b/unit_trace/gedf_inversion_stat_printer.py
new file mode 100644
index 0000000..c35632a
--- /dev/null
+++ b/unit_trace/gedf_inversion_stat_printer.py
@@ -0,0 +1,82 @@
1###############################################################################
2# Description
3###############################################################################
4# Compute and print G-EDF inversion statistics
5
6
7###############################################################################
8# Public Functions
9###############################################################################
10
11def gedf_inversion_stat_printer(stream,num):
12
13 # State
14 min_inversion = -1
15 max_inversion = -1
16 sum_inversions = 0
17 num_inversions = 0
18 longest_inversions = []
19
20 # Iterate over records, updating state
21 for record in stream:
22 if record.type_name == 'inversion_end':
23 length = record.job.inversion_end - record.job.inversion_start
24 if length > 0:
25 num_inversions += 1
26 if length > max_inversion:
27 max_inversion = length
28 if length < min_inversion or min_inversion == -1:
29 min_inversion = length
30 sum_inversions += length
31 if len(longest_inversions) == num:
32 if num==0:
33 continue
34 si = longest_inversions[0]
35 if length > (si.job.inversion_end -
36 si.job.inversion_start):
37 longest_inversions.append(record)
38 longest_inversions = _sort_longest_inversions(
39 longest_inversions)
40 del longest_inversions[0]
41 else:
42 longest_inversions.append(record)
43 longest_inversions = _sort_longest_inversions(
44 longest_inversions)
45
46 # We've seen all records.
47 # Further update state
48 if num_inversions > 0:
49 avg_inversion = int(sum_inversions / num_inversions)
50 else:
51 avg_inversion = 0
52
53 # Print out our information
54 # NOTE: Here, we assume nanoseconds as the time unit.
55 # May have to be changed in the future.
56 print "Num inversions: %d" % (num_inversions)
57 print "Min inversion: %f ms" % (float(min_inversion) / 1000000)
58 print "Max inversion: %f ms" % (float(max_inversion) / 1000000)
59 print "Avg inversion: %f ms" % (float(avg_inversion) / 1000000)
60 for inv in longest_inversions:
61 print ""
62 print "Inversion record IDs: (%d, %d)" % (inv.inversion_start_id,
63 inv.id)
64 print("Triggering Event IDs: (%d, %d)" %
65 (inv.inversion_start_triggering_event_id,
66 inv.triggering_event_id))
67 print "Time: %d" % (inv.job.inversion_end)
68 # NOTE: Here, we assume nanoseconds as the time unit.
69 # May have to be changed in the future.
70 print "Duration: %f ms" % (
71 float(inv.job.inversion_end - inv.job.inversion_start) / 1000000)
72 print "Job: %d.%d" % (inv.job.pid,inv.job.job)
73 print "Deadline: %d" % (inv.job.deadline)
74 print ""
75
76
77def _sort_longest_inversions(longest_inversions):
78 """ Sort longest inversions"""
79 def sortkey(x):
80 return x.job.inversion_end - x.job.inversion_start
81 longest_inversions.sort(key=sortkey)
82 return longest_inversions
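The new stat printer above keeps the `num` longest inversions by appending to a small list and re-sorting it whenever a longer inversion shows up. An equivalent way to keep the n largest durations, shown only for comparison and not part of the module's code, is a bounded min-heap:

    import heapq

    def n_largest(durations, n):
        if n <= 0:
            return []
        heap = []                             # min-heap holding the current top n
        for d in durations:
            if len(heap) < n:
                heapq.heappush(heap, d)
            elif d > heap[0]:                 # beats the smallest of the top n
                heapq.heapreplace(heap, d)
        return sorted(heap)

    print(n_largest([5, 1, 9, 3, 7, 8], 3))   # [7, 8, 9]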
diff --git a/unit_trace/gedf_test.py b/unit_trace/gedf_test.py
index 8457901..478a852 100644
--- a/unit_trace/gedf_test.py
+++ b/unit_trace/gedf_test.py
@@ -9,6 +9,7 @@
9############################################################################### 9###############################################################################
10 10
11import copy 11import copy
12import sys
12 13
13 14
14############################################################################### 15###############################################################################
@@ -17,39 +18,54 @@ import copy
17 18
18def gedf_test(stream): 19def gedf_test(stream):
19 20
20 # Two lists to model the system: tasks occupying a CPU and tasks eligible 21 # System model
21 # to do so. Also, m = the number of CPUs. 22 on_cpu = [] # Tasks on a CPU
22 eligible = [] 23 off_cpu = [] # Tasks not on a CPU
23 on_cpu = [] 24 m = None # CPUs
24 m = None
25 25
26 # Time of the last record we saw. Only run the G-EDF test when the time 26 # Time of the last record we saw. Only run the G-EDF test when the time
27 # is updated. 27 # is updated.
28 last_time = None 28 last_time = None
29 29
30 # First event for the latest timestamp. This is used to match up
31 # inversion starts and ends with the first event from the previous
32 # timestamp, which is the first event that could have triggered
33 # the inversion start or end.
34 first_event_this_timestamp = 0
35
30 for record in stream: 36 for record in stream:
31 if record.record_type != "event": 37 if record.record_type != "event":
32 if record.record_type == "meta" and record.type_name == "num_cpus": 38 if record.record_type == "meta" and record.type_name == "num_cpus":
33 m = record.num_cpus 39 m = record.num_cpus
34 continue 40 continue
35 41
42 # Bookkeeping iff the timestamp has moved forward.
36 # Check for inversion starts and ends and yield them. 43 # Check for inversion starts and ends and yield them.
37 # Only to the check when time has moved forward. 44 # (It is common to have records with simultaneous timestamps,
38 # (It is common to have records with simultaneous timestamps.) 45 # so we only check when the time has moved forward)
46 # Also, need to update the first_event_this_timestamp variable
39 if last_time is not None and last_time != record.when: 47 if last_time is not None and last_time != record.when:
40 errors = _gedf_check(eligible,on_cpu,record.when,m) 48 errors = _gedf_check(off_cpu,on_cpu,last_time,m,
49 first_event_this_timestamp)
50 first_event_this_timestamp = record.id
41 for error in errors: 51 for error in errors:
42 yield error 52 yield error
43 53
44 # Add a newly-released Job to the eligible queue 54 # Add a newly-released Job to the off_cpu queue
45 if record.type_name == 'release': 55 if record.type_name == 'release':
46 eligible.append(Job(record)) 56 off_cpu.append(Job(record))
47 57
48 # Move a Job from the eligible queue to on_cpu 58 # Move a Job from the off_cpu queue to on_cpu
49 elif record.type_name == 'switch_to': 59 elif record.type_name == 'switch_to':
50 pos = _find_job(record,eligible) 60 pos = _find_job(record,off_cpu)
51 job = eligible[pos] 61 if pos is None:
52 del eligible[pos] 62 msg = "Event %d tried to switch to a job that was not on the"
63 msg += " off_cpu queue\n"
64 msg = msg % (record.id)
65 sys.stderr.write(msg)
66 exit()
67 job = off_cpu[pos]
68 del off_cpu[pos]
53 on_cpu.append(job) 69 on_cpu.append(job)
54 70
55 # Mark a Job as completed. 71 # Mark a Job as completed.
@@ -60,17 +76,41 @@ def gedf_test(stream):
60 if pos is not None: 76 if pos is not None:
61 on_cpu[pos].is_complete = True 77 on_cpu[pos].is_complete = True
62 else: 78 else:
63 pos = _find_job(record,eligible) 79 pos = _find_job(record,off_cpu)
64 del eligible[pos] 80 del off_cpu[pos]
65 81
66 # A job is switched away from a CPU. If it has 82 # A job is switched away from a CPU. If it has
67 # been marked as complete, remove it from the model. 83 # been marked as complete, remove it from the model.
68 elif record.type_name == 'switch_away': 84 elif record.type_name == 'switch_away':
69 pos = _find_job(record,on_cpu) 85 pos = _find_job(record,on_cpu)
86 if pos is None:
87 msg = ("Event %d tried to switch to switch away a job" +
88 " that was not running\n")
89 msg = msg % (record.id)
90 sys.stderr.write(msg)
91 exit()
70 job = on_cpu[pos] 92 job = on_cpu[pos]
71 del on_cpu[pos] 93 del on_cpu[pos]
72 if job.is_complete == False: 94 if job.is_complete == False:
73 eligible.append(job) 95 off_cpu.append(job)
96
97 # A job has been blocked.
98 elif record.type_name == 'block':
99 pos = _find_job(record,on_cpu)
100 # What if the job is blocked AFTER being switched away?
101 # This is a bug in some versions of LITMUS.
102 if pos is None:
103 pos = _find_job(record,off_cpu)
104 job = off_cpu[pos]
105 else:
106 job = on_cpu[pos]
107 job.is_blocked = True
108
109 # A job is resumed
110 elif record.type_name == 'resume':
111 pos = _find_job(record,off_cpu)
112 job = off_cpu[pos]
113 job.is_blocked = False
74 114
75 last_time = record.when 115 last_time = record.when
76 yield record 116 yield record
@@ -86,22 +126,33 @@ class Job(object):
86 self.job = record.job 126 self.job = record.job
87 self.deadline = record.deadline 127 self.deadline = record.deadline
88 self.is_complete = False 128 self.is_complete = False
129 self.is_blocked = False
89 self.inversion_start = None 130 self.inversion_start = None
90 self.inversion_end = None 131 self.inversion_end = None
132 self.inversion_start_id = None
133 self.inversion_start_triggering_event_id = None
91 def __str__(self): 134 def __str__(self):
92 return "(%d.%d:%d)" % (self.pid,self.job,self.deadline) 135 return "(%d.%d:%d)" % (self.pid,self.job,self.deadline)
93 136
94# G-EDF errors: the start or end of an inversion 137# G-EDF errors: the start or end of an inversion
95class Error(object): 138class Error(object):
96 def __init__(self, job, eligible, on_cpu): 139 id = 0
140 def __init__(self, job, off_cpu, on_cpu,first_event_this_timestamp):
141 Error.id += 1
142 self.id = Error.id
97 self.job = copy.copy(job) 143 self.job = copy.copy(job)
98 self.eligible = copy.copy(eligible) 144 self.off_cpu = copy.copy(off_cpu)
99 self.on_cpu = copy.copy(on_cpu) 145 self.on_cpu = copy.copy(on_cpu)
100 self.record_type = 'error' 146 self.record_type = 'error'
147 self.triggering_event_id = first_event_this_timestamp
101 if job.inversion_end is None: 148 if job.inversion_end is None:
102 self.type_name = 'inversion_start' 149 self.type_name = 'inversion_start'
150 job.inversion_start_id = self.id
151 job.inversion_start_triggering_event_id = self.triggering_event_id
103 else: 152 else:
104 self.type_name = 'inversion_end' 153 self.type_name = 'inversion_end'
154 self.inversion_start_id = job.inversion_start_id
155 self.inversion_start_triggering_event_id = job.inversion_start_triggering_event_id
105 156
106# Returns the position of a Job in a list, or None 157# Returns the position of a Job in a list, or None
107def _find_job(record,list): 158def _find_job(record,list):
@@ -111,17 +162,20 @@ def _find_job(record,list):
111 return None 162 return None
112 163
113# Return records for any inversion_starts and inversion_ends 164# Return records for any inversion_starts and inversion_ends
114def _gedf_check(eligible,on_cpu,when,m): 165def _gedf_check(off_cpu,on_cpu,when,m,first_event_this_timestamp):
115 166
116 # List of error records to be returned 167 # List of error records to be returned
117 errors = [] 168 errors = []
118 169
119 # List of all jobs that are not complete 170 # List of all jobs that are contending for the CPU (neither complete nor
171 # blocked)
120 all = [] 172 all = []
121 for x in on_cpu: 173 for x in on_cpu:
122 if x.is_complete is not True: 174 if x.is_complete is not True and x.is_blocked is not True:
175 all.append(x)
176 for x in off_cpu:
177 if x.is_blocked is not True:
123 all.append(x) 178 all.append(x)
124 all += eligible
125 179
126 # Sort by on_cpu and then by deadline. sort() is guaranteed to be stable. 180 # Sort by on_cpu and then by deadline. sort() is guaranteed to be stable.
127 # Thus, this gives us jobs ordered by deadline with preference to those 181 # Thus, this gives us jobs ordered by deadline with preference to those
@@ -129,35 +183,45 @@ def _gedf_check(eligible,on_cpu,when,m):
129 all.sort(key=lambda x: 0 if (x in on_cpu) else 1) 183 all.sort(key=lambda x: 0 if (x in on_cpu) else 1)
130 all.sort(key=lambda x: x.deadline) 184 all.sort(key=lambda x: x.deadline)
131 185
132 # Check those that actually should be running 186 # Check those that actually should be running, to look for priority
187 # inversions
133 for x in range(0,min(m,len(all))): 188 for x in range(0,min(m,len(all))):
134 job = all[x] 189 job = all[x]
135 190
136 # It's not running and an inversion_start has not been recorded 191 # It's not running and an inversion_start has not been recorded
137 if job not in on_cpu and job.inversion_start is None: 192 if job not in on_cpu and job.inversion_start is None:
138 job.inversion_start = when 193 job.inversion_start = when
139 errors.append(Error(job, eligible, on_cpu)) 194 errors.append(Error(job, off_cpu, on_cpu,
195 first_event_this_timestamp))
140 196
141 # It is running and an inversion_start exists (i.e. it is still 197 # It is running and an inversion_start exists (i.e. it is still
142 # marked as being inverted) 198 # marked as being inverted)
143 elif job in on_cpu and job.inversion_start is not None: 199 elif job in on_cpu and job.inversion_start is not None:
144 job.inversion_end = when 200 job.inversion_end = when
145 errors.append(Error(job, eligible, on_cpu)) 201 errors.append(Error(job, off_cpu, on_cpu,
202 first_event_this_timestamp))
146 job.inversion_start = None 203 job.inversion_start = None
147 job.inversion_end = None 204 job.inversion_end = None
148 205
149 # Check those that actually should not be running 206 # Check those that actually should not be running, to record the end of any
207 # priority inversions
150 for x in range(m,len(all)): 208 for x in range(m,len(all)):
151 job = all[x] 209 job = all[x]
152
153 # It actually is running. We don't care.
154
155 # It isn't running, but an inversion_start exists (i.e. it is still
156 # marked as being inverted)
157 if job not in on_cpu and job.inversion_start is not None: 210 if job not in on_cpu and job.inversion_start is not None:
158 job.inversion_end = when 211 job.inversion_end = when
159 errors.append(Error(job, eligible, on_cpu)) 212 errors.append(Error(job, off_cpu, on_cpu,
213 first_event_this_timestamp))
160 job.inversion_start = None 214 job.inversion_start = None
161 job.inversion_end = None 215 job.inversion_end = None
162 216
217 # Look for priority inversions among blocked tasks and end them
218 all = filter(lambda x:x.is_blocked and x.inversion_start is not None,
219 on_cpu + off_cpu)
220 for job in all:
221 job.inversion_end = when
222 errors.append(Error(job, off_cpu, on_cpu,
223 first_event_this_timestamp))
224 job.inversion_start = None
225 job.inversion_end = None
226
163 return errors 227 return errors
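The two consecutive `sort()` calls in `_gedf_check` above rely on Python's sort being stable: sorting first by "already on a CPU" and then by deadline leaves the jobs in deadline order with ties broken in favour of jobs that are running. A tiny standalone illustration with tuples in place of Job objects:

    # (name, deadline, on_cpu) stand-ins for Job objects
    jobs = [('a', 10, False), ('b', 10, True), ('c', 5, False)]

    jobs.sort(key=lambda j: 0 if j[2] else 1)   # running jobs first
    jobs.sort(key=lambda j: j[1])               # then by deadline; the stable sort
                                                # keeps 'b' ahead of 'a' on the tie
    print([j[0] for j in jobs])                 # ['c', 'b', 'a']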
diff --git a/unit_trace/latest.py b/unit_trace/latest.py
index 4abd3a2..38676a0 100644
--- a/unit_trace/latest.py
+++ b/unit_trace/latest.py
@@ -11,7 +11,7 @@
11def latest(stream, latest): 11def latest(stream, latest):
12 for record in stream: 12 for record in stream:
13 if record.record_type=="event": 13 if record.record_type=="event":
14 if record.when > latest: 14 if record.id > latest:
15 break 15 break
16 else: 16 else:
17 yield record 17 yield record
diff --git a/unit_trace/naive_trace_reader.py b/unit_trace/naive_trace_reader.py
deleted file mode 100644
index 0f117b8..0000000
--- a/unit_trace/naive_trace_reader.py
+++ /dev/null
@@ -1,165 +0,0 @@
1###############################################################################
2# Description
3###############################################################################
4
5# trace_reader(files) returns an iterator which produces records
6# OUT OF ORDER from the files given. (the param is a list of files.)
7#
8# The non-naive trace_reader has a lot of complex logic which attempts to
9# produce records in order (even though they are being pulled from multiple
10# files which themselves are only approximately ordered). This trace_reader
11# attempts to be as simple as possible and is used in the unit tests to
12# make sure the total number of records read by the normal trace_reader is
13# the same as the number of records read by this one.
14
15###############################################################################
16# Imports
17###############################################################################
18
19import struct
20
21
22###############################################################################
23# Public functions
24###############################################################################
25
26# Generator function returning an iterable over records in a trace file.
27def trace_reader(files):
28 for file in files:
29 f = open(file,'rb')
30 while True:
31 data = f.read(RECORD_HEAD_SIZE)
32 try:
33 type_num = struct.unpack_from('b',data)[0]
34 except struct.error:
35 break #We read to the end of the file
36 type = _get_type(type_num)
37 try:
38 values = struct.unpack_from(StHeader.format +
39 type.format,data)
40 record_dict = dict(zip(type.keys,values))
41 except struct.error:
42 f.close()
43 print "Invalid record detected, stopping."
44 exit()
45
46 # Convert the record_dict into an object
47 record = _dict2obj(record_dict)
48
49 # Give it a type name (easier to work with than type number)
50 record.type_name = _get_type_name(type_num)
51
52 # All records should have a 'record type' field.
53 # e.g. these are 'event's as opposed to 'error's
54 record.record_type = "event"
55
56 # If there is no timestamp, set the time to 0
57 if 'when' not in record.__dict__.keys():
58 record.when = 0
59
60 yield record
61
62###############################################################################
63# Private functions
64###############################################################################
65
66# Convert a dict into an object
67def _dict2obj(d):
68 class Obj: pass
69 o = Obj()
70 for key in d.keys():
71 o.__dict__[key] = d[key]
72 return o
73
74###############################################################################
75# Trace record data types and accessor functions
76###############################################################################
77
78# Each class below represents a type of event record. The format attribute
79# specifies how to decode the binary record and the keys attribute
80# specifies how to name the pieces of information decoded. Note that all
81# event records have a common initial 24 bytes, represented by the StHeader
82# class.
83
84RECORD_HEAD_SIZE = 24
85
86class StHeader(object):
87 format = '<bbhi'
88 formatStr = struct.Struct(format)
89 keys = ['type','cpu','pid','job']
90 message = 'The header.'
91
92class StNameData(object):
93 format = '16s'
94 formatStr = struct.Struct(StHeader.format + format)
95 keys = StHeader.keys + ['name']
96 message = 'The name of the executable of this process.'
97
98class StParamData(object):
99 format = 'IIIc'
100 formatStr = struct.Struct(StHeader.format + format)
101 keys = StHeader.keys + ['wcet','period','phase','partition']
102 message = 'Regular parameters.'
103
104class StReleaseData(object):
105 format = 'QQ'
106 formatStr = struct.Struct(StHeader.format + format)
107 keys = StHeader.keys + ['when','deadline']
108 message = 'A job was/is going to be released.'
109
110#Not yet used by Sched Trace
111class StAssignedData(object):
112 format = 'Qc'
113 formatStr = struct.Struct(StHeader.format + format)
114 keys = StHeader.keys + ['when','target']
115 message = 'A job was assigned to a CPU.'
116
117class StSwitchToData(object):
118 format = 'QI'
119 formatStr = struct.Struct(StHeader.format + format)
120 keys = StHeader.keys + ['when','exec_time']
121 message = 'A process was switched to on a given CPU.'
122
123class StSwitchAwayData(object):
124 format = 'QI'
125 formatStr = struct.Struct(StHeader.format + format)
126 keys = StHeader.keys + ['when','exec_time']
127 message = 'A process was switched away on a given CPU.'
128
129class StCompletionData(object):
130 format = 'Q3xcc'
131 formatStr = struct.Struct(StHeader.format + format)
132 keys = StHeader.keys + ['when','forced?','flags']
133 message = 'A job completed.'
134
135class StBlockData(object):
136 format = 'Q'
137 formatStr = struct.Struct(StHeader.format + format)
138 keys = StHeader.keys + ['when']
139 message = 'A task blocks.'
140
141class StResumeData(object):
142 format = 'Q'
143 formatStr = struct.Struct(StHeader.format + format)
144 keys = StHeader.keys + ['when']
145 message = 'A task resumes.'
146
147class StSysReleaseData(object):
148 format = 'QQ'
149 formatStr = struct.Struct(StHeader.format + format)
150 keys = StHeader.keys + ['when','release']
151 message = 'All tasks have checked in, task system released by user'
152
153# Return the binary data type, given the type_num
154def _get_type(type_num):
155 types = [None,StNameData,StParamData,StReleaseData,StAssignedData,
156 StSwitchToData,StSwitchAwayData,StCompletionData,StBlockData,
157 StResumeData,StSysReleaseData]
158 return types[type_num]
159
160# Return the type name, given the type_num (this is simply a convenience to
161# programmers of other modules)
162def _get_type_name(type_num):
163 type_names = [None,"name","params","release","assign","switch_to",
164 "switch_away","completion","block","resume","sys_release"]
165 return type_names[type_num]
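The now-deleted naive reader above decodes each record with `struct`; the shared header format `'<bbhi'` maps to the `type`, `cpu`, `pid`, and `job` fields. A minimal round trip showing how that unpacking works, with made-up values:

    import struct

    # Pack a fake header: type=3 (release), cpu=1, pid=1234, job=7.
    data = struct.pack('<bbhi', 3, 1, 1234, 7)
    type_num, cpu, pid, job = struct.unpack_from('<bbhi', data)
    print("type=%d cpu=%d pid=%d job=%d" % (type_num, cpu, pid, job))
    # type=3 cpu=1 pid=1234 job=7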
diff --git a/unit_trace/progress.py b/unit_trace/progress.py
index d987ecd..d0f0482 100644
--- a/unit_trace/progress.py
+++ b/unit_trace/progress.py
@@ -25,7 +25,7 @@ def progress(stream):
25 25
26 start_time = 0 26 start_time = 0
27 count = 0 27 count = 0
28 28
29 for record in stream: 29 for record in stream:
30 if record.record_type=="event": 30 if record.record_type=="event":
31 count += 1 31 count += 1
diff --git a/unit_trace/sanitizer.py b/unit_trace/sanitizer.py
index 79315cc..598379a 100644
--- a/unit_trace/sanitizer.py
+++ b/unit_trace/sanitizer.py
@@ -49,5 +49,5 @@ def sanitizer(stream):
49 if record.type_name == 'switch_away': 49 if record.type_name == 'switch_away':
50 if (record.pid,record.job) not in jobs_switched_to: 50 if (record.pid,record.job) not in jobs_switched_to:
51 record.job -= 1 51 record.job -= 1
52 52
53 yield record 53 yield record
diff --git a/unit_trace/stats.py b/unit_trace/stats.py
deleted file mode 100644
index 34a842f..0000000
--- a/unit_trace/stats.py
+++ /dev/null
@@ -1,39 +0,0 @@
1###############################################################################
2# Description
3###############################################################################
4# Compute and produce statistics
5
6
7###############################################################################
8# Public Functions
9###############################################################################
10
11def stats(stream):
12 min_inversion = -1
13 max_inversion = -1
14 sum_inversions = 0
15 num_inversions = 0
16 for record in stream:
17 if record.type_name == 'inversion_end':
18 length = record.job.inversion_end - record.job.inversion_start
19 if length > 0:
20 num_inversions += 1
21 if length > max_inversion:
22 max_inversion = length
23 if length < min_inversion or min_inversion == -1:
24 min_inversion = length
25 sum_inversions += length
26 yield record
27 if num_inversions > 0:
28 avg_inversion = int(sum_inversions / num_inversions)
29 else:
30 avg_inversion = 0
31 class Obj(object): pass
32 rec = Obj()
33 rec.record_type = "meta"
34 rec.type_name = "stats"
35 rec.num_inversions = num_inversions
36 rec.min_inversion = min_inversion
37 rec.max_inversion = max_inversion
38 rec.avg_inversion = avg_inversion
39 yield rec
diff --git a/unit_trace/stdout_printer.py b/unit_trace/stdout_printer.py
index f8d9a84..b70b31a 100644
--- a/unit_trace/stdout_printer.py
+++ b/unit_trace/stdout_printer.py
@@ -27,17 +27,20 @@ def stdout_printer(stream):
27############################################################################### 27###############################################################################
28 28
29def _print_event(record): 29def _print_event(record):
30 print "Event ID: %d" % (record.id)
30 print "Job: %d.%d" % (record.pid,record.job) 31 print "Job: %d.%d" % (record.pid,record.job)
31 print "Type: %s" % (record.type_name) 32 print "Type: %s" % (record.type_name)
32 print "Time: %d" % (record.when) 33 print "Time: %d" % (record.when)
33 34
34def _print_inversion_start(record): 35def _print_inversion_start(record):
35 print "Type: %s" % ("Inversion start") 36 print "Type: %s" % ("Inversion start")
37 print "Inversion Record IDs: (%d, U)" % (record.id)
38 print "Triggering Event IDs: (%d, U)" % (record.triggering_event_id)
36 print "Time: %d" % (record.job.inversion_start) 39 print "Time: %d" % (record.job.inversion_start)
37 print "Job: %d.%d" % (record.job.pid,record.job.job) 40 print "Job: %d.%d" % (record.job.pid,record.job.job)
38 print "Deadline: %d" % (record.job.deadline) 41 print "Deadline: %d" % (record.job.deadline)
39 print "Eligible: ", 42 print "Off CPU: ",
40 for job in record.eligible: 43 for job in record.off_cpu:
41 print str(job) + " ", 44 print str(job) + " ",
42 print 45 print
43 print "On CPU: ", 46 print "On CPU: ",
@@ -47,23 +50,23 @@ def _print_inversion_start(record):
47 50
48def _print_inversion_end(record): 51def _print_inversion_end(record):
49 print "Type: %s" % ("Inversion end") 52 print "Type: %s" % ("Inversion end")
53 print "Inversion record IDs: (%d, %d)" % (record.inversion_start_id,
54 record.id)
55 print("Triggering Event IDs: (%d, %d)" %
56 (record.inversion_start_triggering_event_id,
57 record.triggering_event_id))
50 print "Time: %d" % (record.job.inversion_end) 58 print "Time: %d" % (record.job.inversion_end)
51 print "Duration: %d" % ( 59 # NOTE: Here, we assume nanoseconds as the time unit.
52 record.job.inversion_end - record.job.inversion_start) 60 # May have to be changed in the future.
61 print "Duration: %f ms" % (
62 float(record.job.inversion_end - record.job.inversion_start)/1000000)
53 print "Job: %d.%d" % (record.job.pid,record.job.job) 63 print "Job: %d.%d" % (record.job.pid,record.job.job)
54 print "Deadline: %d" % (record.job.deadline) 64 print "Deadline: %d" % (record.job.deadline)
55 print "Eligible: ", 65 print "Off CPU: ",
56 for job in record.eligible: 66 for job in record.off_cpu:
57 print str(job) + " ", 67 print str(job) + " ",
58 print 68 print
59 print "On CPU: ", 69 print "On CPU: ",
60 for job in record.on_cpu: 70 for job in record.on_cpu:
61 print str(job) + " ", 71 print str(job) + " ",
62 print #newline 72 print #newline
63
64def _print_stats(record):
65 print "Inversion statistics"
66 print "Num inversions: %d" % (record.num_inversions)
67 print "Min inversion: %d" % (record.min_inversion)
68 print "Max inversion: %d" % (record.max_inversion)
69 print "Avg inversion: %d" % (record.avg_inversion)
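The duration line added above converts the raw timestamp difference from nanoseconds to milliseconds (per the NOTE in the code, nanoseconds are assumed as the time unit). A quick check of the arithmetic with made-up timestamps:

    inversion_start = 2000000000            # 2.0 s in ns
    inversion_end   = 2003500000            # 2.0035 s in ns
    print "Duration: %f ms" % (float(inversion_end - inversion_start) / 1000000)
    # prints: Duration: 3.500000 ms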
diff --git a/unit_trace/trace_reader.py b/unit_trace/trace_reader.py
index 44a3c75..a4c3c05 100644
--- a/unit_trace/trace_reader.py
+++ b/unit_trace/trace_reader.py
@@ -5,7 +5,7 @@
5# trace_reader(files) returns an iterator which produces records 5# trace_reader(files) returns an iterator which produces records
6# in order from the files given. (the param is a list of files.) 6# in order from the files given. (the param is a list of files.)
7# 7#
8# Each record is just a Python object. It is guaranteed to have the following 8# Each record is just a Python object. It is guaranteed to have the following
9# attributes: 9# attributes:
10# - 'pid': pid of the task 10# - 'pid': pid of the task
11# - 'job': job number for that task 11# - 'job': job number for that task
@@ -58,15 +58,28 @@ def trace_reader(files):
58 for file in files: 58 for file in files:
59 file_iter = _get_file_iter(file) 59 file_iter = _get_file_iter(file)
60 file_iters.append(file_iter) 60 file_iters.append(file_iter)
61 file_iter_buff.append([file_iter.next()]) 61 try:
62 62 file_iter_buff.append([file_iter.next()])
63 # We keep 100 records in each buffer and then keep the buffer sorted 63 # What if there isn't a single valid record in a trace file?
64 # file_iter.next() will raise a StopIteration that we need to catch
65 except StopIteration:
66 # Forget that file iter
67 file_iters.pop()
68
69 # We keep 200 records in each buffer and then keep the buffer sorted
64 # This is because records may have been recorded slightly out of order 70 # This is because records may have been recorded slightly out of order
65 # This cannot guarantee records are produced in order, but it makes it 71 # This cannot guarantee records are produced in order, but it makes it
66 # overwhelmingly probable. 72 # overwhelmingly probable.
73 # The 'try' and 'except' catch the case where there are fewer than 200
74 # records in a file (throwing a StopIteration) which otherwise would
75 # propagate up and cause the trace_reader generator itself to throw a
76 # StopIteration.
67 for x in range(0,len(file_iter_buff)): 77 for x in range(0,len(file_iter_buff)):
68 for y in range(0,100): 78 try:
69 file_iter_buff[x].append(file_iters[x].next()) 79 for y in range(0,200):
80 file_iter_buff[x].append(file_iters[x].next())
81 except StopIteration:
82 pass
70 for x in range(0,len(file_iter_buff)): 83 for x in range(0,len(file_iter_buff)):
71 file_iter_buff[x] = sorted(file_iter_buff[x],key=lambda rec: rec.when) 84 file_iter_buff[x] = sorted(file_iter_buff[x],key=lambda rec: rec.when)
72 85
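# A minimal sketch (not part of this patch) of the bounded "sort window" idea
# used above: keep up to 200 records buffered per file, always emit the
# earliest buffered record across all files, then refill the buffer it came
# from. Records here are plain comparable values standing in for rec.when.
def merge_mostly_sorted(iters, window=200):
    buffs = []
    for it in iters:
        buf = []
        try:
            for _ in range(window):
                buf.append(it.next())
        except StopIteration:
            pass                  # fewer than ``window'' records in this file
        if buf:
            buf.sort()
            buffs.append([buf, it])
    while buffs:
        # pick the buffer whose head record is earliest
        i = min(range(len(buffs)), key=lambda n: buffs[n][0][0])
        buf, it = buffs[i]
        yield buf.pop(0)
        try:
            buf.append(it.next())
            buf.sort()
        except StopIteration:
            if not buf:
                del buffs[i]
# e.g. list(merge_mostly_sorted([iter([1, 3, 2, 4]), iter([2, 5, 6])]))
#      -> [1, 2, 2, 3, 4, 5, 6]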
@@ -75,6 +88,9 @@ def trace_reader(files):
75 # fatally if they are not. 88 # fatally if they are not.
76 last_time = None 89 last_time = None
77 90
91 # We want to give records ID numbers so users can filter by ID
92 id = 0
93
78 # Keep pulling records as long as we have a buffer 94 # Keep pulling records as long as we have a buffer
79 while len(file_iter_buff) > 0: 95 while len(file_iter_buff) > 0:
80 96
@@ -109,8 +125,12 @@ def trace_reader(files):
109 else: 125 else:
110 last_time = earliest.when 126 last_time = earliest.when
111 127
128 # Give the record an id number
129 id += 1
130 earliest.id = id
131
112 # Yield the record 132 # Yield the record
113 yield earliest 133 yield earliest
114 134
115############################################################################### 135###############################################################################
116# Private functions 136# Private functions
@@ -123,17 +143,23 @@ def _get_file_iter(file):
123 data = f.read(RECORD_HEAD_SIZE) 143 data = f.read(RECORD_HEAD_SIZE)
124 try: 144 try:
125 type_num = struct.unpack_from('b',data)[0] 145 type_num = struct.unpack_from('b',data)[0]
126 except struct.error: 146 except struct.error:
127 break #We read to the end of the file 147 break #We read to the end of the file
128 type = _get_type(type_num)
129 try: 148 try:
130 values = struct.unpack_from(StHeader.format + 149 type = _get_type(type_num)
150 except:
151 sys.stderr.write("Skipping record with invalid type num: %d\n" %
152 (type_num))
153 continue
154 try:
155 values = struct.unpack_from(StHeader.format +
131 type.format,data) 156 type.format,data)
132 record_dict = dict(zip(type.keys,values)) 157 record_dict = dict(zip(type.keys,values))
133 except struct.error: 158 except struct.error:
134 f.close() 159 f.close()
135 print "Invalid record detected, stopping." 160 sys.stderr.write("Skipping record that does not match proper" +
136 exit() 161 " struct formatting\n")
162 continue
137 163
138 # Convert the record_dict into an object 164 # Convert the record_dict into an object
139 record = _dict2obj(record_dict) 165 record = _dict2obj(record_dict)
@@ -157,7 +183,7 @@ def _dict2obj(d):
157 o = Obj() 183 o = Obj()
158 for key in d.keys(): 184 for key in d.keys():
159 o.__dict__[key] = d[key] 185 o.__dict__[key] = d[key]
160 return o 186 return o
161 187
162############################################################################### 188###############################################################################
163# Trace record data types and accessor functions 189# Trace record data types and accessor functions
@@ -183,7 +209,7 @@ class StNameData:
183 keys = StHeader.keys + ['name'] 209 keys = StHeader.keys + ['name']
184 message = 'The name of the executable of this process.' 210 message = 'The name of the executable of this process.'
185 211
186class StParamData: 212class StParamData:
187 format = 'IIIc' 213 format = 'IIIc'
188 formatStr = struct.Struct(StHeader.format + format) 214 formatStr = struct.Struct(StHeader.format + format)
189 keys = StHeader.keys + ['wcet','period','phase','partition'] 215 keys = StHeader.keys + ['wcet','period','phase','partition']
@@ -195,7 +221,7 @@ class StReleaseData:
195 keys = StHeader.keys + ['when','deadline'] 221 keys = StHeader.keys + ['when','deadline']
196 message = 'A job was/is going to be released.' 222 message = 'A job was/is going to be released.'
197 223
198#Not yet used by Sched Trace 224#Not yet used by Sched Trace
199class StAssignedData: 225class StAssignedData:
200 format = 'Qc' 226 format = 'Qc'
201 formatStr = struct.Struct(StHeader.format + format) 227 formatStr = struct.Struct(StHeader.format + format)
@@ -244,6 +270,8 @@ def _get_type(type_num):
244 types = [None,StNameData,StParamData,StReleaseData,StAssignedData, 270 types = [None,StNameData,StParamData,StReleaseData,StAssignedData,
245 StSwitchToData,StSwitchAwayData,StCompletionData,StBlockData, 271 StSwitchToData,StSwitchAwayData,StCompletionData,StBlockData,
246 StResumeData,StSysReleaseData] 272 StResumeData,StSysReleaseData]
273 if type_num > len(types)-1 or type_num < 1:
274 raise Exception
247 return types[type_num] 275 return types[type_num]
248 276
249# Return the type name, given the type_num (this is simply a convenience to 277# Return the type name, given the type_num (this is simply a convenience to
diff --git a/unit_trace/viz/draw.py b/unit_trace/viz/draw.py
new file mode 100644
index 0000000..dced27d
--- /dev/null
+++ b/unit_trace/viz/draw.py
@@ -0,0 +1,1377 @@
1#!/usr/bin/python
2
3import math
4import cairo
5import os
6import copy
7
8import util
9import schedule
10from format import *
11
12def snap(pos):
13 """Takes in an x- or y-coordinate ``pos'' and snaps it to the pixel grid.
14 This is necessary because integer coordinates in Cairo actually denote
15 the spaces between pixels, not the pixels themselves, so if we draw a
16 line of width 1 on integer coordinates, it will come out blurry unless we shift it,
17 since the line will get distributed over two pixels. We actually apply this to all
18 coordinates to make sure everything is aligned."""
19 return pos
20
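# Note: as checked in, snap() above simply returns ``pos'' unchanged. A typical
# pixel-grid snap for 1-pixel-wide Cairo strokes centers the coordinate on a
# pixel, roughly like the following sketch (name is illustrative only):
#
#     def snap_to_pixel_center(pos):
#         return math.floor(pos) + 0.5   # e.g. 10.3 -> 10.5
#
# so a vertical line at x = 10 lands crisply on one pixel column instead of
# being spread (and antialiased) across columns 9 and 10.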
21class Surface(object):
22 def __init__(self, fname='temp', ctx=None):
23 self.virt_x = 0
24 self.virt_y = 0
25 self.surface = None
26 self.width = 0
27 self.height = 0
28 self.fname = fname
29 self.ctx = ctx
30
31 def renew(self, width, height):
32 raise NotImplementedError
33
34 def change_ctx(self, ctx):
35 self.ctx = ctx
36
37 def get_fname(self):
38 return self.fname
39
40 def write_out(self, fname):
41 raise NotImplementedError
42
43 def pan(self, x, y, width, height):
44 """A surface might actually represent just a ``window'' into
45 what we are drawing on. For instance, if we are scrolling through
46 a graph, then the surface represents the area in the GUI window,
47 not the entire graph (visible or not). So this method basically
48 moves the ``window's'' upper-left corner to (x, y), and resizes
49 the dimensions to (width, height)."""
50 self.virt_x = x
51 self.virt_y = y
52 self.width = width
53 self.height = height
54
55 def get_real_coor(self, x, y):
56 """Translates the coordinates (x, y)
57 in the ``theoretical'' plane to the true (x, y) coordinates on this surface
58 that we should draw to. Note that these might actually be outside the
59 bounds of the surface,
60 if we want something outside the surface's ``window''."""
61 return (x - self.virt_x, y - self.virt_y)
62
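# A minimal usage sketch (not part of the file) of the pan()/get_real_coor()
# convention above -- an 800x600 window whose upper-left corner sits at virtual
# coordinate (1000, 250):
#
#     s = Surface()
#     s.pan(1000, 250, 800, 600)
#     s.get_real_coor(1200, 300)   # -> (200, 50), inside the window
#     s.get_real_coor(900, 300)    # -> (-100, 50), left of the visible area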
63class SVGSurface(Surface):
64 def renew(self, width, height):
65 iwidth = int(math.ceil(width))
66 iheight = int(math.ceil(height))
67 self.surface = cairo.SVGSurface(self.fname, iwidth, iheight)
68 self.ctx = cairo.Context(self.surface)
69
70 def write_out(self, fname):
71 os.system('cp %s %s' % (self.fname, fname))
72
73class ImageSurface(Surface):
74 def renew(self, width, height):
75 iwidth = int(math.ceil(width))
76 iheight = int(math.ceil(height))
77 self.surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, iwidth, iheight)
78 self.ctx = cairo.Context(self.surface)
79
80 def write_out(self, fname):
81 if self.surface is None:
82 raise ValueError('Don\'t own surface, can\'t write to file')
83
84 self.surface.write_to_png(fname)
85
86class Pattern(object):
87 DEF_STRIPE_SIZE = 10
88 MAX_FADE_WIDTH = 250
89
90 def __init__(self, color_list, stripe_size=DEF_STRIPE_SIZE):
91 self.color_list = color_list
92 self.stripe_size = stripe_size
93
94 def render_on_canvas(self, canvas, x, y, width, height, fade=False):
95 fade_span = min(width, Pattern.MAX_FADE_WIDTH)
96
97 if len(self.color_list) == 1:
98 if fade:
99 canvas.fill_rect_fade(x, y, fade_span, height, (1.0, 1.0, 1.0), \
100 self.color_list[0])
101 else:
102 canvas.fill_rect(x, y, width, height, self.color_list[0])
103
104 if width > Pattern.MAX_FADE_WIDTH:
105 canvas.fill_rect(x + Pattern.MAX_FADE_WIDTH, y, width - Pattern.MAX_FADE_WIDTH,
106 height, self.color_list[0])
107 else:
108 n = 0
109 bottom = y + height
110 while y < bottom:
111 i = n % len(self.color_list)
112 if fade:
113 canvas.fill_rect_fade(x, y, fade_span, \
114 min(self.stripe_size, bottom - y), (1.0, 1.0, 1.0), self.color_list[i])
115 else:
116 canvas.fill_rect(x, y, width, min(self.stripe_size, bottom - y), self.color_list[i])
117
118 if width > Pattern.MAX_FADE_WIDTH:
119 canvas.fill_rect(x + Pattern.MAX_FADE_WIDTH, y, width - Pattern.MAX_FADE_WIDTH,
120 min(self.stripe_size, bottom - y), self.color_list[i])
121
122 y += self.stripe_size
123 n += 1
124
125class Canvas(object):
126 """This is a basic class that stores and draws on a Cairo surface,
127 using various primitives related to drawing a real-time graph (up-arrows,
128 down-arrows, bars, ...).
129
130 This is the lowest-level representation (aside perhaps from the Cairo
131 surface itself) of a real-time graph. It allows the user to draw
132 primitives at certain locations, but for the most part does not know
133 anything about real-time scheduling, just how to draw the basic parts
134 that make up a schedule graph. For that, see Graph or its descendants."""
135
136 BOTTOM_LAYER = 0
137 MIDDLE_LAYER = 1
138 TOP_LAYER = 2
139
140 LAYERS = (BOTTOM_LAYER, MIDDLE_LAYER, TOP_LAYER)
141
142 NULL_PATTERN = -1
143
144 SQRT3 = math.sqrt(3.0)
145
146 def __init__(self, width, height, item_clist, bar_plist, surface):
147 """Creates a new Canvas of dimensions (width, height). The
148 parameters ``item_clist'' and ``bar_plist'' specify the list of
149 colors used for the items on the y-axis and the list of patterns
150 used to fill in bars, respectively."""
151
152 self.surface = surface
153
154 self.width = int(math.ceil(width))
155 self.height = int(math.ceil(height))
156 self.item_clist = item_clist
157 self.bar_plist = bar_plist
158
159 self.selectable_regions = {}
160
161 self.scale = 1.0
162
163 # clears the canvas.
164 def clear(self):
165 raise NotImplementedError
166
167 def scaled(self, *coors):
168 return [coor * self.scale for coor in coors]
169
170 def draw_rect(self, x, y, width, height, color, thickness, snap=True):
171 """Draws a rectangle somewhere (border only)."""
172 raise NotImplementedError
173
174 def fill_rect(self, x, y, width, height, color, snap=True):
175 """Draws a filled rectangle somewhere. ``color'' is a 3-tuple."""
176 raise NotImplementedError
177
178 def fill_rect_fade(self, x, y, width, height, lcolor, rcolor, snap=True):
179 """Draws a rectangle somewhere, filled in with the fade."""
180 raise NotImplementedError
181
182 def draw_line(self, p0, p1, color, thickness, snap=True):
183 """Draws a line from p0 to p1 with a certain color and thickness."""
184 raise NotImplementedError
185
186 def draw_polyline(self, coor_list, color, thickness, snap=True):
187 """Draws a polyline, where coor_list = [(x_0, y_0), (x_1, y_1), ... (x_m, y_m)]
188 specifies a polyline from (x_0, y_0) to (x_1, y_1), etc."""
189 raise NotImplementedError
190
191 def fill_polyline(self, coor_list, color, thickness, snap=True):
192 """Draws a polyline (probably a polygon) and fills it."""
193 raise NotImplementedError
194
195 def draw_label(self, text, x, y, fopts=GraphFormat.DEF_FOPTS_LABEL,
196 halign=AlignMode.LEFT, valign=AlignMode.BOTTOM, snap=True):
197 """Draws text at a position with a certain alignment."""
198 raise NotImplementedError
199
200 def draw_label_with_sscripts(self, text, supscript, subscript, x, y, \
201 textfopts=GraphFormat.DEF_FOPTS_LABEL,
202 sscriptfopts=GraphFormat.DEF_FOPTS_LABEL_SSCRIPT, \
203 halign=AlignMode.LEFT, valign=AlignMode.BOTTOM, snap=True):
204 """Draws text at a position with a certain alignment, along with optionally a superscript and
205 subscript (which are None if either is not used.)"""
206 raise NotImplementedError
207
208 def draw_y_axis(self, x, y, height):
209 """Draws the y-axis, starting from the bottom at the point x, y."""
210 self.surface.ctx.set_source_rgb(0.0, 0.0, 0.0)
211
212 self.draw_line((x, y), (x, y - height), (0.0, 0.0, 0.0), GraphFormat.AXIS_THICKNESS)
213
214 def draw_y_axis_labels(self, x, y, height, item_list, item_size, fopts=None):
215 """Draws the item labels on the y-axis. ``item_list'' is the list
216 of strings to print, while item_size gives the vertical amount of
217 space that each item shall take up, in pixels."""
218 if fopts is None:
219 fopts = GraphFormat.DEF_FOPTS_ITEM
220
221 x -= GraphFormat.Y_AXIS_ITEM_GAP
222 y -= height - item_size / 2.0
223
224 orig_color = fopts.color
225 for ctr, item in enumerate(item_list):
226 fopts.color = self.get_item_color(ctr)
227 self.draw_label(item, x, y, fopts, AlignMode.RIGHT, AlignMode.CENTER)
228 y += item_size
229
230 fopts.color = orig_color
231
232 def draw_x_axis(self, x, y, start_tick, end_tick, maj_sep, min_per_maj):
233 """Draws the x-axis, including all the major and minor ticks (but not the labels).
234 ``start_tick'' and ``end_tick'' give the first and last major tick, ``maj_sep'' the number of pixels between
235 major ticks, and ``min_per_maj'' the number of minor ticks between two major ticks
236 (including the first major tick)"""
237 self.draw_line((x, y), (x + GraphFormat.X_AXIS_MEASURE_OFS, y),
238 (0.0, 0.0, 0.0), GraphFormat.AXIS_THICKNESS)
239 x += GraphFormat.X_AXIS_MEASURE_OFS + start_tick * maj_sep
240
241 for i in range(start_tick, end_tick + 1):
242 self.draw_line((x, y), (x, y + GraphFormat.MAJ_TICK_SIZE),
243 (0.0, 0.0, 0.0), GraphFormat.AXIS_THICKNESS)
244
245 if (i < end_tick):
246 for j in range(0, min_per_maj):
247 self.draw_line((x, y), (x + maj_sep / min_per_maj, y),
248 (0.0, 0.0, 0.0), GraphFormat.AXIS_THICKNESS)
249
250 x += 1.0 * maj_sep / min_per_maj
251 if j < min_per_maj - 1:
252 self.draw_line((x, y), (x, y + GraphFormat.MIN_TICK_SIZE),
253 (0.0, 0.0, 0.0), GraphFormat.AXIS_THICKNESS)
254
255 def draw_x_axis_labels(self, x, y, start_tick, end_tick, maj_sep, min_per_maj, start=0, incr=1, show_min=False, \
256 majfopts=GraphFormat.DEF_FOPTS_MAJ, minfopts=GraphFormat.DEF_FOPTS_MIN):
257 """Draws the labels for the x-axis. (x, y) should give the origin.
258 ``incr'' gives the increment per major
259 tick. ``start'' gives the value of the first tick. ``show_min'' specifies
260 whether to draw labels at minor ticks."""
261
262 x += GraphFormat.X_AXIS_MEASURE_OFS + start_tick * maj_sep
263 y += GraphFormat.X_AXIS_LABEL_GAP + GraphFormat.MAJ_TICK_SIZE
264
265 minincr = incr / (min_per_maj * 1.0)
266
267 cur = start * 1.0
268
269 for i in range(start_tick, end_tick + 1):
270 text = util.format_float(cur, 2)
271 self.draw_label(text, x, y, majfopts, AlignMode.CENTER, AlignMode.TOP)
272
273 if (i < end_tick):
274 if show_min:
275 for j in range(0, min_per_maj):
276 x += 1.0 * maj_sep / min_per_maj
277 cur += minincr
278 text = util.format_float(cur, 2)
279
280 if j < min_per_maj - 1:
281 self.draw_label(text, x, y, minfopts, AlignMode.CENTER, AlignMode.TOP)
282 else:
283 x += maj_sep
284 cur += incr
285
286 def draw_grid(self, x, y, height, start_tick, end_tick, start_item, end_item, maj_sep, item_size, \
287 min_per_maj=None, show_min=False):
288 """Draws a grid dividing along the item boundaries and the major ticks.
289 (x, y) gives the origin. ``show_min'' specifies whether to draw vertical grid lines at minor ticks.
290 ``start_tick'' and ``end_tick'' give the major ticks to start and end at for drawing vertical lines.
291 ``start_item'' and ``end_item'' give the item boundaries to start and end drawing horizontal lines."""
292 if start_tick > end_tick or start_item > end_item:
293 return
294
295 line_width = (end_tick - start_tick) * maj_sep
296 line_height = (end_item - start_item) * item_size
297
298 origin = (x, y)
299
300 # draw horizontal lines first
301 x = origin[0] + GraphFormat.X_AXIS_MEASURE_OFS + start_tick * maj_sep
302 y = origin[1] - height + start_item * item_size
303 for i in range(start_item, end_item + 1):
304 self.draw_line((x, y), (x + line_width, y), GraphFormat.GRID_COLOR, GraphFormat.GRID_THICKNESS)
305 y += item_size
306
307 x = origin[0] + GraphFormat.X_AXIS_MEASURE_OFS + start_tick * maj_sep
308 y = origin[1] - height + start_item * item_size
309
310 if show_min:
311 for i in range(0, (end_tick - start_tick) * min_per_maj + 1):
312 self.draw_line((x, y), (x, y + line_height), GraphFormat.GRID_COLOR, GraphFormat.GRID_THICKNESS)
313 x += maj_sep * 1.0 / min_per_maj
314 else:
315 for i in range(start_tick, end_tick + 1):
316 self.draw_line((x, y), (x, y + line_height), GraphFormat.GRID_COLOR, GraphFormat.GRID_THICKNESS)
317 x += maj_sep
318
319 def _draw_bar_border_common(self, x, y, width, height, color, thickness, clip_side):
320 if clip_side is None:
321 self.draw_rect(x, y, width, height, color, thickness)
322 elif clip_side == AlignMode.LEFT:
323 self.draw_polyline([(x, y), (x + width, y), (x + width, y + height), (x, y + height)],
324 color, thickness)
325 elif clip_side == AlignMode.RIGHT:
326 self.draw_polyline([(x + width, y), (x, y), (x, y + height), (x + width, y + height)],
327 color, thickness)
328
329 def draw_bar(self, x, y, width, height, n, clip_side, selected):
330 """Draws a bar with a certain set of dimensions, using pattern ``n'' from the
331 bar pattern list."""
332
333 color, thickness = {False : (GraphFormat.BORDER_COLOR, GraphFormat.BORDER_THICKNESS),
334 True : (GraphFormat.HIGHLIGHT_COLOR, GraphFormat.BORDER_THICKNESS * 2.0)}[selected]
335
336 # use a pattern to be pretty
337 self.get_bar_pattern(n).render_on_canvas(self, x, y, width, height, True)
338
339 self._draw_bar_border_common(x, y, width, height, color, thickness, clip_side)
340
341 def add_sel_bar(self, x, y, width, height, event):
342 self.add_sel_region(SelectableRegion(x, y, width, height, event))
343
344 def draw_mini_bar(self, x, y, width, height, n, clip_side, selected):
345 """Like the above, except it draws a miniature version. This is usually used for
346 secondary purposes (i.e. to show jobs that _should_ have been running at a certain time).
347
348 Of course we don't enforce the fact that this is mini, since the user can pass in width
349 and height (but the mini bars do look slightly different: namely the borders are a different
350 color)"""
351
352 color, thickness = {False : (GraphFormat.LITE_BORDER_COLOR, GraphFormat.BORDER_THICKNESS),
353 True : (GraphFormat.HIGHLIGHT_COLOR, GraphFormat.BORDER_THICKNESS * 1.5)}[selected]
354
355 self.get_bar_pattern(n).render_on_canvas(self, x, y, width, height, True)
356
357 self._draw_bar_border_common(x, y, width, height, color, thickness, clip_side)
358
359 def add_sel_mini_bar(self, x, y, width, height, event):
360 self.add_sel_region(SelectableRegion(x, y, width, height, event))
361
362 def draw_completion_marker(self, x, y, height, selected):
363 """Draws the symbol that represents a job completion, using a certain height."""
364
365 color = {False : GraphFormat.BORDER_COLOR, True : GraphFormat.HIGHLIGHT_COLOR}[selected]
366 self.draw_line((x - height * GraphFormat.TEE_FACTOR / 2.0, y),
367 (x + height * GraphFormat.TEE_FACTOR / 2.0, y),
368 color, GraphFormat.BORDER_THICKNESS)
369 self.draw_line((x, y), (x, y + height), color, GraphFormat.BORDER_THICKNESS)
370
371 def add_sel_completion_marker(self, x, y, height, event):
372 self.add_sel_region(SelectableRegion(x - height * GraphFormat.TEE_FACTOR / 2.0, y,
373 height * GraphFormat.TEE_FACTOR, height, event))
374
375 def draw_release_arrow_big(self, x, y, height, selected):
376 """Draws a release arrow of a certain height: (x, y) should give the top
377 (northernmost point) of the arrow. The height includes the arrowhead."""
378 big_arrowhead_height = GraphFormat.BIG_ARROWHEAD_FACTOR * height
379
380 color = {False : GraphFormat.BORDER_COLOR, True : GraphFormat.HIGHLIGHT_COLOR}[selected]
381 colors = [(1.0, 1.0, 1.0), color]
382 draw_funcs = [self.__class__.fill_polyline, self.__class__.draw_polyline]
383 for i in range(0, 2):
384 color = colors[i]
385 draw_func = draw_funcs[i]
386
387 draw_func(self, [(x, y), (x - big_arrowhead_height / Canvas.SQRT3, y + big_arrowhead_height), \
388 (x + big_arrowhead_height / Canvas.SQRT3, y + big_arrowhead_height), (x, y)], \
389 color, GraphFormat.BORDER_THICKNESS)
390
391 self.draw_line((x, y + big_arrowhead_height), (x, y + height), color, GraphFormat.BORDER_THICKNESS)
392
393 def add_sel_release_arrow_big(self, x, y, height, event):
394 self.add_sel_arrow_big(x, y, height, event)
395
396 def draw_deadline_arrow_big(self, x, y, height, selected):
397 """Draws a release arrow: x, y should give the top (northernmost
398 point) of the arrow. The height includes the arrowhead."""
399 big_arrowhead_height = GraphFormat.BIG_ARROWHEAD_FACTOR * height
400
401 color = {False : GraphFormat.BORDER_COLOR, True : GraphFormat.HIGHLIGHT_COLOR}[selected]
402 colors = [(1.0, 1.0, 1.0), color]
403 draw_funcs = [self.__class__.fill_polyline, self.__class__.draw_polyline]
404 for i in range(0, 2):
405 color = colors[i]
406 draw_func = draw_funcs[i]
407
408 draw_func(self, [(x, y + height), (x - big_arrowhead_height / Canvas.SQRT3, \
409 y + height - big_arrowhead_height), \
410 (x + big_arrowhead_height / Canvas.SQRT3, \
411 y + height - big_arrowhead_height), \
412 (x, y + height)], color, GraphFormat.BORDER_THICKNESS)
413
414 self.draw_line((x, y), (x, y + height - big_arrowhead_height),
415 color, GraphFormat.BORDER_THICKNESS)
416
417 def add_sel_deadline_arrow_big(self, x, y, height, event):
418 self.add_sel_arrow_big(x, y, height, event)
419
420 def add_sel_arrow_big(self, x, y, height, event):
421 big_arrowhead_height = GraphFormat.BIG_ARROWHEAD_FACTOR * height
422
423 self.add_sel_region(SelectableRegion(x - big_arrowhead_height / Canvas.SQRT3,
424 y, 2.0 * big_arrowhead_height / Canvas.SQRT3, height, event))
425
426 def draw_release_arrow_small(self, x, y, height, selected):
427 """Draws a small release arrow (most likely coming off the x-axis, although
428 this method doesn't enforce this): x, y should give the top of the arrow"""
429 small_arrowhead_height = GraphFormat.SMALL_ARROWHEAD_FACTOR * height
430
431 color = {False : GraphFormat.BORDER_COLOR, True : GraphFormat.HIGHLIGHT_COLOR}[selected]
432
433 self.draw_line((x, y), (x - small_arrowhead_height, y + small_arrowhead_height), \
434 color, GraphFormat.BORDER_THICKNESS)
435 self.draw_line((x, y), (x + small_arrowhead_height, y + small_arrowhead_height), \
436 color, GraphFormat.BORDER_THICKNESS)
437 self.draw_line((x, y), (x, y + height), color, GraphFormat.BORDER_THICKNESS)
438
439 def add_sel_release_arrow_small(self, x, y, height, event):
440 self.add_sel_arrow_small(x, y, height, event)
441
442 def draw_deadline_arrow_small(self, x, y, height, selected):
443 """Draws a small deadline arrow (most likely coming off the x-axis, although
444 this method doesn't enforce this): x, y should give the top of the arrow"""
445 small_arrowhead_height = GraphFormat.SMALL_ARROWHEAD_FACTOR * height
446
447 color = {False : GraphFormat.BORDER_COLOR, True : GraphFormat.HIGHLIGHT_COLOR}[selected]
448
449 self.draw_line((x, y), (x, y + height), color, GraphFormat.BORDER_THICKNESS)
450 self.draw_line((x - small_arrowhead_height, y + height - small_arrowhead_height), \
451 (x, y + height), color, GraphFormat.BORDER_THICKNESS)
452 self.draw_line((x + small_arrowhead_height, y + height - small_arrowhead_height), \
453 (x, y + height), color, GraphFormat.BORDER_THICKNESS)
454
455 def add_sel_deadline_arrow_small(self, x, y, height, event):
456 self.add_sel_arrow_small(x, y, height, event)
457
458 def add_sel_arrow_small(self, x, y, height, event):
459 small_arrowhead_height = GraphFormat.SMALL_ARROWHEAD_FACTOR * height
460
461 self.add_sel_region(SelectableRegion(x - small_arrowhead_height, y,
462 small_arrowhead_height * 2.0, height, event))
463
464 def draw_suspend_triangle(self, x, y, height, selected):
465 """Draws the triangle that marks a suspension. (x, y) gives the topmost (northernmost) point
466 of the symbol."""
467
468 color = {False : GraphFormat.BORDER_COLOR, True : GraphFormat.HIGHLIGHT_COLOR}[selected]
469 colors = [(0.0, 0.0, 0.0), color]
470
471 draw_funcs = [self.__class__.fill_polyline, self.__class__.draw_polyline]
472 for i in range(0, 2):
473 color = colors[i]
474 draw_func = draw_funcs[i]
475 draw_func(self, [(x, y), (x + height / 2.0, y + height / 2.0), (x, y + height), (x, y)], \
476 color, GraphFormat.BORDER_THICKNESS)
477
478 def add_sel_suspend_triangle(self, x, y, height, event):
479 self.add_sel_region(SelectableRegion(x, y, height / 2.0, height, event))
480
481 def draw_resume_triangle(self, x, y, height, selected):
482 """Draws the triangle that marks a resumption. (x, y) gives the topmost (northernmost) point
483 of the symbol."""
484
485 color = {False : GraphFormat.BORDER_COLOR, True : GraphFormat.HIGHLIGHT_COLOR}[selected]
486 colors = [(1.0, 1.0, 1.0), color]
487
488 draw_funcs = [self.__class__.fill_polyline, self.__class__.draw_polyline]
489 for i in range(0, 2):
490 color = colors[i]
491 draw_func = draw_funcs[i]
492 draw_func(self, [(x, y), (x - height / 2.0, y + height / 2.0), (x, y + height), (x, y)], \
493 color, GraphFormat.BORDER_THICKNESS)
494
495 def add_sel_resume_triangle(self, x, y, height, event):
496 self.add_sel_region(SelectableRegion(x - height / 2.0, y, height / 2.0, height, event))
497
498 def clear_selectable_regions(self, real_x, real_y, width, height):
499 x = real_x + self.surface.virt_x
500 y = real_y + self.surface.virt_y
501 for event in self.selectable_regions.keys():
502 if self.selectable_regions[event].intersects(x, y, width, height):
503 del self.selectable_regions[event]
504
505 def add_sel_region(self, region):
506 self.selectable_regions[region.get_event()] = region
507
508 def get_sel_region(self, event):
509 return self.selectable_regions[event]
510
511 def get_selected_regions(self, real_x, real_y, width, height):
512 x = real_x + self.surface.virt_x
513 y = real_y + self.surface.virt_y
514
515 selected = {}
516 for event in self.selectable_regions:
517 region = self.selectable_regions[event]
518 if region.intersects(x, y, width, height):
519 selected[event] = region
520
521 return selected
522
523 def whiteout(self, real_x, real_y, width, height):
524 """Paints the given region of the surface white; technically nothing is deleted."""
525 self.fill_rect(self.surface.virt_x + real_x, self.surface.virt_y + real_y, width,
526 height, (1.0, 1.0, 1.0), False)
527
528 def get_item_color(self, n):
529 """Gets the nth color in the item color list, which holds the colors used to draw the items
530 on the y-axis. Note that there are conceptually infinitely
531 many colors because the list repeats -- that is, we just mod out by the size of the color
532 list when indexing."""
533 return self.item_clist[n % len(self.item_clist)]
534
535 def get_bar_pattern(self, n):
536 """Gets the nth pattern in the bar pattern list, which is a list of Pattern objects used to
537 fill in the bars. Note that there are conceptually infinitely
538 many patterns because the patterns repeat -- that is, we just mod out by the size of the pattern
539 list when indexing."""
540 if n < 0:
541 return self.bar_plist[-1]
542 return self.bar_plist[n % (len(self.bar_plist) - 1)]
543
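# A small sketch (not part of the file): bar patterns cycle modulo
# len(bar_plist) - 1, with the last entry reserved for Canvas.NULL_PATTERN
# (n < 0). With the 7-entry Graph.DEF_BAR_PLIST defined below:
#
#     [n % (7 - 1) for n in range(8)]   # -> [0, 1, 2, 3, 4, 5, 0, 1]
#     bar_plist[-1]                     # the grey fallback used when n < 0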
544class CairoCanvas(Canvas):
545 """This is a basic class that stores and draws on a Cairo surface,
546 using various primitives related to drawing a real-time graph (up-arrows,
547 down-arrows, bars, ...).
548
549 This is the lowest-level non-abstract representation
550 (aside perhaps from the Cairo surface itself) of a real-time graph.
551 It allows the user to draw primitives at certain locations, but for
552 the most part does not know anything about real-time scheduling,
553 just how to draw the basic parts that make up a schedule graph.
554 For that, see Graph or its descendants."""
555
556 #def __init__(self, fname, width, height, item_clist, bar_plist, surface):
557 # """Creates a new Canvas of dimensions (width, height). The
558 # parameters ``item_plist'' and ``bar_plist'' each specify a list
559 # of patterns to choose from when drawing the items on the y-axis
560 # or filling in bars, respectively."""
561
562 # super(CairoCanvas, self).__init__(fname, width, height, item_clist, bar_plist, surface)
563
564 #def clear(self):
565 # self.surface = self.SurfaceType(self.width, self.height, self.fname)
566 # self.whiteout()
567
568 def get_surface(self):
569 """Gets the Surface that we are drawing on in its current state."""
570 return self.surface
571
572 def _rect_common(self, x, y, width, height, color, thickness, do_snap=True):
573 x, y, width, height = self.scaled(x, y, width, height)
574 x, y = self.surface.get_real_coor(x, y)
575 if do_snap:
576 self.surface.ctx.rectangle(snap(x), snap(y), width, height)
577 else:
578 self.surface.ctx.rectangle(x, y, width, height)
579
580 self.surface.ctx.set_line_width(thickness * self.scale)
581 self.surface.ctx.set_source_rgb(color[0], color[1], color[2])
582
583 def draw_rect(self, x, y, width, height, color, thickness, do_snap=True):
584 self._rect_common(x, y, width, height, color, thickness, do_snap)
585 self.surface.ctx.stroke()
586
587 def fill_rect(self, x, y, width, height, color, do_snap=True):
588 self._rect_common(x, y, width, height, color, 1, do_snap)
589 self.surface.ctx.fill()
590
591 def fill_rect_fade(self, x, y, width, height, lcolor, rcolor, do_snap=True):
592 """Draws a rectangle somewhere, filled in with the fade."""
593 x, y, width, height = self.scaled(x, y, width, height)
594 x, y = self.surface.get_real_coor(x, y)
595
596 if do_snap:
597 linear = cairo.LinearGradient(snap(x), snap(y), \
598 snap(x + width), snap(y + height))
599 else:
600 linear = cairo.LinearGradient(x, y, \
601 x + width, y + height)
602 linear.add_color_stop_rgb(0.0, lcolor[0], lcolor[1], lcolor[2])
603 linear.add_color_stop_rgb(1.0, rcolor[0], rcolor[1], rcolor[2])
604 self.surface.ctx.set_source(linear)
605 if do_snap:
606 self.surface.ctx.rectangle(snap(x), snap(y), width, height)
607 else:
608 self.surface.ctx.rectangle(x, y, width, height)
609 self.surface.ctx.fill()
610
611 def draw_line(self, p0, p1, color, thickness, do_snap=True):
612 """Draws a line from p0 to p1 with a certain color and thickness."""
613 p0 = self.scaled(p0[0], p0[1])
614 p0 = self.surface.get_real_coor(p0[0], p0[1])
615 p1 = self.scaled(p1[0], p1[1])
616 p1 = self.surface.get_real_coor(p1[0], p1[1])
617 if do_snap:
618 p0 = (snap(p0[0]), snap(p0[1]))
619 p1 = (snap(p1[0]), snap(p1[1]))
620
621 self.surface.ctx.move_to(p0[0], p0[1])
622 self.surface.ctx.line_to(p1[0], p1[1])
623 self.surface.ctx.set_source_rgb(color[0], color[1], color[2])
624 self.surface.ctx.set_line_width(thickness * self.scale)
625 self.surface.ctx.stroke()
626
627 def _polyline_common(self, coor_list, color, thickness, do_snap=True):
628 real_coor_list = [self.surface.get_real_coor(coor[0], coor[1]) for coor in coor_list]
629 self.surface.ctx.move_to(real_coor_list[0][0], real_coor_list[0][1])
630 if do_snap:
631 for i in range(0, len(real_coor_list)):
632 real_coor_list[i] = (snap(real_coor_list[i][0]), snap(real_coor_list[i][1]))
633
634 for coor in real_coor_list[1:]:
635 self.surface.ctx.line_to(coor[0], coor[1])
636
637 self.surface.ctx.set_line_width(thickness)
638 self.surface.ctx.set_source_rgb(color[0], color[1], color[2])
639
640 def draw_polyline(self, coor_list, color, thickness, do_snap=True):
641 self._polyline_common(coor_list, color, thickness, do_snap)
642 self.surface.ctx.stroke()
643
644 def fill_polyline(self, coor_list, color, thickness, do_snap=True):
645 self._polyline_common(coor_list, color, thickness, do_snap)
646 self.surface.ctx.fill()
647
648 def _draw_label_common(self, text, x, y, fopts, x_bearing_factor, \
649 f_descent_factor, width_factor, f_height_factor, do_snap=True):
650 """Helper function for drawing a label with some alignment. Instead of taking in an alignment,
651 it takes in the scale factor for the font extent parameters, which give the raw data of how much to adjust
652 the x and y parameters. Only should be used internally."""
653 x, y = self.scaled(x, y)
654 x, y = self.surface.get_real_coor(x, y)
655
656 self.surface.ctx.set_source_rgb(0.0, 0.0, 0.0)
657
658 self.surface.ctx.select_font_face(fopts.name, cairo.FONT_SLANT_NORMAL, cairo.FONT_WEIGHT_BOLD)
659 self.surface.ctx.set_font_size(fopts.size)
660
661 fe = self.surface.ctx.font_extents()
662 f_ascent, f_descent, f_height = fe[:3]
663
664 te = self.surface.ctx.text_extents(text)
665 x_bearing, y_bearing, width, height = te[:4]
666
667 actual_x = x - x_bearing * x_bearing_factor - width * width_factor
668 actual_y = y - f_descent * f_descent_factor + f_height * f_height_factor
669
670 self.surface.ctx.set_source_rgb(fopts.color[0], fopts.color[1], fopts.color[2])
671
672 if do_snap:
673 self.surface.ctx.move_to(snap(actual_x), snap(actual_y))
674 else:
675 self.surface.ctx.move_to(actual_x, actual_y)
676
677 self.surface.ctx.show_text(text)
678
679 def draw_label(self, text, x, y, fopts=GraphFormat.DEF_FOPTS_LABEL, halign=AlignMode.LEFT, valign=AlignMode.BOTTOM, do_snap=True):
680 """Draws a label with the given parameters, with the given horizontal and vertical justification.
681 The font face, size, and color are taken from ``fopts'', and ``do_snap'' controls whether the
682 position is snapped to the pixel grid."""
683 x_bearing_factor, f_descent_factor, width_factor, f_height_factor = 0.0, 0.0, 0.0, 0.0
684 halign_factors = {AlignMode.LEFT : (0.0, 0.0), AlignMode.CENTER : (1.0, 0.5), AlignMode.RIGHT : (1.0, 1.0)}
685 if halign not in halign_factors:
686 raise ValueError('Invalid alignment value')
687 x_bearing_factor, width_factor = halign_factors[halign]
688
689 valign_factors = {AlignMode.BOTTOM : (0.0, 0.0), AlignMode.CENTER : (1.0, 0.5), AlignMode.TOP : (1.0, 1.0)}
690 if valign not in valign_factors:
691 raise ValueError('Invalid alignment value')
692 f_descent_factor, f_height_factor = valign_factors[valign]
693
694 self._draw_label_common(text, x, y, fopts, x_bearing_factor, \
695 f_descent_factor, width_factor, f_height_factor, do_snap)
696
697 def draw_label_with_sscripts(self, text, supscript, subscript, x, y, \
698 textfopts=GraphFormat.DEF_FOPTS_LABEL, sscriptfopts=GraphFormat.DEF_FOPTS_LABEL_SSCRIPT, \
699 halign=AlignMode.LEFT, valign=AlignMode.BOTTOM, do_snap=True):
700 """Draws a label, but also optionally allows a superscript and subscript to be rendered."""
701 self.draw_label(text, x, y, textfopts, halign, valign)
702
703 self.surface.ctx.set_source_rgb(0.0, 0.0, 0.0)
704 self.surface.ctx.select_font_face(textfopts.name, cairo.FONT_SLANT_NORMAL, cairo.FONT_WEIGHT_BOLD)
705 self.surface.ctx.set_font_size(textfopts.size)
706 te = self.surface.ctx.text_extents(text)
707 fe = self.surface.ctx.font_extents()
708 if supscript is not None:
709 f_height = fe[2]
710 x_advance = te[4]
711 xtmp = x + x_advance
712 ytmp = y
713 ytmp = y - f_height / 4.0
714 self.draw_label(supscript, xtmp, ytmp, sscriptfopts, halign, valign, do_snap)
715 if subscript is not None:
716 f_height = fe[2]
717 x_advance = te[4]
718 xtmp = x + x_advance
719 ytmp = y
720 ytmp = y + f_height / 4.0
721 self.draw_label(subscript, xtmp, ytmp, sscriptfopts, halign, valign, do_snap)
722
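# A small sketch (not part of the file) of how the alignment factors in
# draw_label()/_draw_label_common() combine with the Cairo extents:
#
#     actual_x = x - x_bearing * 1.0 - width * 0.5      # AlignMode.CENTER
#     actual_x = x - x_bearing * 0.0 - width * 0.0      # AlignMode.LEFT
#     actual_y = y - f_descent * 1.0 + f_height * 0.5   # vertical CENTER
#
# i.e. horizontal centering shifts the text left by its bearing plus half its
# width, while the default LEFT/BOTTOM alignment leaves (x, y) untouched.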
723# represents a selectable region of the graph
724class SelectableRegion(object):
725 def __init__(self, x, y, width, height, event):
726 self.x = x
727 self.y = y
728 self.width = width
729 self.height = height
730 self.event = event
731
732 def get_dimensions(self):
733 return (self.x, self.y, self.width, self.height)
734
735 def get_event(self):
736 return self.event
737
738 def intersects(self, x, y, width, height):
739 return x <= self.x + self.width and x + width >= self.x and y <= self.y + self.height and y + height >= self.y
740
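# A quick sketch (not part of the file): intersects() is the standard
# axis-aligned rectangle overlap test, with touching edges counting as overlap:
#
#     r = SelectableRegion(10, 10, 20, 20, None)   # covers x 10..30, y 10..30
#     r.intersects(25, 25, 50, 50)   # -> True, overlaps the lower-right corner
#     r.intersects(31, 10, 5, 5)     # -> False, entirely to the right of r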
741class Graph(object):
742 DEF_BAR_PLIST = [Pattern([(0.0, 0.9, 0.9)]), Pattern([(0.9, 0.3, 0.0)]), Pattern([(0.9, 0.7, 0.0)]),
743 Pattern([(0.0, 0.0, 0.8)]), Pattern([(0.0, 0.2, 0.9)]), Pattern([(0.0, 0.6, 0.6)]),
744 Pattern([(0.75, 0.75, 0.75)])]
745 DEF_ITEM_CLIST = [(0.3, 0.0, 0.0), (0.0, 0.3, 0.0), (0.0, 0.0, 0.3), (0.3, 0.3, 0.0), (0.0, 0.3, 0.3),
746 (0.3, 0.0, 0.3)]
747
748 def __init__(self, CanvasType, surface, start_time, end_time, y_item_list, attrs=GraphFormat(),
749 item_clist=DEF_ITEM_CLIST, bar_plist=DEF_BAR_PLIST):
750 # deal with possibly blank schedules
751 if start_time is None:
752 start_time = 0
753 if end_time is None:
754 end_time = 0
755
756 if start_time > end_time:
757 raise ValueError("Litmus is not a time machine")
758
759 self.attrs = attrs
760 self.start_time = start_time
761 self.end_time = end_time
762 self.y_item_list = y_item_list
763 self.num_maj = int(math.ceil((self.end_time - self.start_time) * 1.0 / self.attrs.time_per_maj)) + 1
764
765 width = self.num_maj * self.attrs.maj_sep + GraphFormat.X_AXIS_MEASURE_OFS + GraphFormat.WIDTH_PAD
766 height = (len(self.y_item_list) + 1) * self.attrs.y_item_size + GraphFormat.HEIGHT_PAD
767
768 # We need to stretch the width in order to fit the y-axis labels. To do this we need
769 # the extents information, but we haven't set up a surface yet, so we just use a
770 # temporary one.
771 extra_width = 0.0
772 dummy_surface = surface.__class__()
773 dummy_surface.renew(10, 10)
774
775 dummy_surface.ctx.select_font_face(self.attrs.item_fopts.name, cairo.FONT_SLANT_NORMAL, cairo.FONT_WEIGHT_BOLD)
776 dummy_surface.ctx.set_font_size(self.attrs.item_fopts.size)
777 for item in self.y_item_list:
778 dummy_surface.ctx.set_source_rgb(0.0, 0.0, 0.0)
779 te = dummy_surface.ctx.text_extents(item)
780 cur_width = te[2]
781 if cur_width > extra_width:
782 extra_width = cur_width
783
784 width += extra_width
785
786 self.origin = (extra_width + GraphFormat.WIDTH_PAD / 2.0, height - GraphFormat.HEIGHT_PAD / 2.0)
787
788 self.width = width
789 self.height = height
790
791 #if surface.ctx is None:
792 # surface.renew(width, height)
793
794 self.canvas = CanvasType(width, height, item_clist, bar_plist, surface)
795
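# A minimal sketch (not part of the file) of the label-measuring trick used in
# __init__ above: a throwaway Cairo surface provides text extents before the
# real surface exists. Requires pycairo; font name and size are placeholders.
#
#     surf = cairo.ImageSurface(cairo.FORMAT_ARGB32, 10, 10)
#     ctx = cairo.Context(surf)
#     ctx.select_font_face('Times', cairo.FONT_SLANT_NORMAL, cairo.FONT_WEIGHT_BOLD)
#     ctx.set_font_size(16)
#     extra_width = max(ctx.text_extents(label)[2] for label in ('CPU 0', 'CPU 1'))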
796 def get_selected_regions(self, real_x, real_y, width, height):
797 return self.canvas.get_selected_regions(real_x, real_y, width, height)
798
799 def get_width(self):
800 return self.width
801
802 def get_height(self):
803 return self.height
804
805 def get_origin(self):
806 return self.origin
807
808 def get_attrs(self):
809 return self.attrs
810
811 def add_sel_region(self, region):
812 self.canvas.add_sel_region(region)
813
814 def get_sel_region(self, event):
815 return self.canvas.get_sel_region(event)
816
817 def update_view(self, x, y, width, height, ctx):
818 """Proxy into the surface's pan."""
819 self.canvas.surface.pan(x, y, width, height)
820 self.canvas.surface.change_ctx(ctx)
821
822 def _recomp_min_max(self, start_time, end_time, start_item, end_item):
823 if self.min_time is None or start_time < self.min_time:
824 self.min_time = start_time
825 if self.max_time is None or end_time > self.max_time:
826 self.max_time = end_time
827 if self.min_item is None or start_item < self.min_item:
828 self.min_item = start_item
829 if self.max_item is None or end_item > self.max_item:
830 self.max_item = end_item
831
832 def _get_time_xpos(self, time):
833 """get x so that x is at instant ``time'' on the graph"""
834 return self.origin[0] + GraphFormat.X_AXIS_MEASURE_OFS + 1.0 * (time - self.start_time) / self.attrs.time_per_maj * self.attrs.maj_sep
835
836 def _get_item_ypos(self, item_no):
837 """get y so that y is where the top of a bar would be in item #n's area"""
838 return self.origin[1] - self._get_y_axis_height() + self.attrs.y_item_size * (item_no + 0.5 - GraphFormat.BAR_SIZE_FACTOR / 2.0)
839
840 def _get_bar_width(self, start_time, end_time):
841 return 1.0 * (end_time - start_time) / self.attrs.time_per_maj * self.attrs.maj_sep
842
843 def _get_bar_height(self):
844 return self.attrs.y_item_size * GraphFormat.BAR_SIZE_FACTOR
845
846 def _get_mini_bar_height(self):
847 return self.attrs.y_item_size * GraphFormat.MINI_BAR_SIZE_FACTOR
848
849 def _get_mini_bar_ofs(self):
850 return self.attrs.y_item_size * (GraphFormat.MINI_BAR_SIZE_FACTOR + GraphFormat.BAR_MINI_BAR_GAP_FACTOR)
851
852 def _get_y_axis_height(self):
853 return (len(self.y_item_list) + 1) * self.attrs.y_item_size
854
855 def _get_bottom_tick(self, time):
856 return int(math.floor((time - self.start_time) / self.attrs.time_per_maj))
857
858 def _get_top_tick(self, time):
859 return int(math.ceil((time - self.start_time) / self.attrs.time_per_maj))
860
861 def get_surface(self):
862 """Gets the underlying surface."""
863 return self.canvas.get_surface()
864
865 def xcoor_to_time(self, x):
866 #x = self.origin[0] + GraphFormat.X_AXIS_MEASURE_OFS + (time - self.start) / self.attrs.time_per_maj * self.attrs.maj_sep
867 return (x - self.origin[0] - GraphFormat.X_AXIS_MEASURE_OFS) / self.attrs.maj_sep \
868 * self.attrs.time_per_maj + self.start_time
869
870 def ycoor_to_item_no(self, y):
871 return int((y - self.origin[1] + self._get_y_axis_height()) // self.attrs.y_item_size)
872
873 def get_offset_params(self, real_x, real_y, width, height):
874 start_time = self.xcoor_to_time(self.canvas.surface.virt_x + real_x)
875 end_time = self.xcoor_to_time(self.canvas.surface.virt_x + real_x + width)
876
877 start_item = self.ycoor_to_item_no(self.canvas.surface.virt_y + real_y)
878 end_item = 2 + self.ycoor_to_item_no(self.canvas.surface.virt_y + real_y + height)
879
880 return (start_time, end_time, start_item, end_item)
881
882 def draw_skeleton(self, start_time, end_time, start_item, end_item):
883 self.draw_grid_at_time(start_time, end_time, start_item, end_item)
884 self.draw_x_axis_with_labels_at_time(start_time, end_time)
885 self.draw_y_axis_with_labels()
886
887 def render_surface(self, sched, regions, selectable=False):
888 raise NotImplementedError
889
890 def render_all(self, schedule):
891 raise NotImplementedError
892
893 def render_events(self, event_list):
894 for layer in Canvas.LAYERS:
895 prev_events = {}
896 for event in event_list:
897 event.render(self, layer, prev_events)
898
899 def draw_axes(self, x_axis_label, y_axis_label):
900 """Draws and labels the axes according to the parameters that we were initialized
901 with."""
902 self.draw_grid_at_time(self.start_time, self.end_time, 0, len(self.y_item_list) - 1)
903
904 self.canvas.draw_x_axis(self.origin[0], self.origin[1], 0, self.num_maj - 1, self.attrs.maj_sep, self.attrs.min_per_maj)
905 self.canvas.draw_y_axis(self.origin[0], self.origin[1], self._get_y_axis_height())
906 self.canvas.draw_x_axis_labels(self.origin[0], self.origin[1], 0, self.num_maj - 1,\
907 self.attrs.maj_sep, self.attrs.min_per_maj, self.start_time, \
908 self.attrs.time_per_maj, self.attrs.show_min, self.attrs.majfopts, self.attrs.minfopts)
909 self.canvas.draw_y_axis_labels(self.origin[0], self.origin[1], self._get_y_axis_height(), self.y_item_list, \
910 self.attrs.y_item_size, self.attrs.item_fopts)
911
912 def draw_grid_at_time(self, start_time, end_time, start_item, end_item):
913 """Draws the grid, but only in a certain time and item range."""
914 start_tick = max(0, self._get_bottom_tick(start_time))
915 end_tick = min(self.num_maj - 1, self._get_top_tick(end_time))
916
917 start_item = max(0, start_item)
918 end_item = min(len(self.y_item_list), end_item)
919
920 self.canvas.draw_grid(self.origin[0], self.origin[1], self._get_y_axis_height(),
921 start_tick, end_tick, start_item, end_item, self.attrs.maj_sep, self.attrs.y_item_size, \
922 self.attrs.min_per_maj, True)
923
924 def draw_x_axis_with_labels_at_time(self, start_time, end_time):
925 start_tick = max(0, self._get_bottom_tick(start_time))
926 end_tick = min(self.num_maj - 1, self._get_top_tick(end_time))
927
928 self.canvas.draw_x_axis(self.origin[0], self.origin[1], start_tick, end_tick, \
929 self.attrs.maj_sep, self.attrs.min_per_maj)
930 self.canvas.draw_x_axis_labels(self.origin[0], self.origin[1], start_tick, \
931 end_tick, self.attrs.maj_sep, self.attrs.min_per_maj,
932 self.start_time + start_tick * self.attrs.time_per_maj,
933 self.attrs.time_per_maj, False)
934
935 def draw_y_axis_with_labels(self):
936 self.canvas.draw_y_axis(self.origin[0], self.origin[1], self._get_y_axis_height())
937 self.canvas.draw_y_axis_labels(self.origin[0], self.origin[1], self._get_y_axis_height(), \
938 self.y_item_list, self.attrs.y_item_size)
939
940 def draw_suspend_triangle_at_time(self, time, task_no, cpu_no, selected=False):
941 """Draws a suspension symbol for a certain task at an instant in time."""
942 raise NotImplementedError
943
944 def add_sel_suspend_triangle_at_time(self, time, task_no, cpu_no, event):
945 """Same as above, except instead of drawing adds a selectable region at
946 a certain time."""
947 raise NotImplementedError
948
949 def draw_resume_triangle_at_time(self, time, task_no, cpu_no, selected=False):
950 """Draws a resumption symbol for a certain task at an instant in time."""
951 raise NotImplementedError
952
953 def add_sel_resume_triangle_at_time(self, time, task_no, cpu_no, event):
954 """Same as above, except instead of drawing adds a selectable region at
955 a certain time."""
956 raise NotImplementedError
957
958 def draw_completion_marker_at_time(self, time, task_no, cpu_no, selected=False):
959 """Draws a completion marker for a certain task at an instant in time."""
960 raise NotImplementedError
961
962 def add_sel_completion_marker_at_time(self, time, task_no, cpu_no, event):
963 """Same as above, except instead of drawing adds a selectable region at
964 a certain time."""
965 raise NotImplementedError
966
967 def draw_release_arrow_at_time(self, time, task_no, job_no, selected=False):
968 """Draws a release arrow at a certain time for some task and job"""
969 raise NotImplementedError
970
971 def add_sel_release_arrow_at_time(self, time, task_no, event):
972 """Same as above, except instead of drawing adds a selectable region at
973 a certain time."""
974 raise NotImplementedError
975
976 def draw_deadline_arrow_at_time(self, time, task_no, job_no, selected=False):
977 """Draws a deadline arrow at a certain time for some task and job"""
978 raise NotImplementedError
979
980 def add_sel_deadline_arrow_at_time(self, time, task_no, event):
981 """Same as above, except instead of drawing adds a selectable region at
982 a certain time."""
983 raise NotImplementedError
984
985 def draw_bar_at_time(self, start_time, end_time, task_no, cpu_no, job_no=None, clip_side=None):
986 """Draws a bar over a certain time period for some task, optionally labelling it."""
987 raise NotImplementedError
988
989 def add_sel_bar_at_time(self, start_time, end_time, task_no, cpu_no, event):
990 """Same as above, except instead of drawing adds a selectable region at
991 a certain time."""
992 raise NotImplementedError
993
994 def draw_mini_bar_at_time(self, start_time, end_time, task_no, cpu_no, clip_side=None, job_no=None):
995 """Draws a mini bar over a certain time period for some task, optionally labelling it."""
996 raise NotImplementedError
997
998 def add_sel_mini_bar_at_time(self, start_time, end_time, task_no, cpu_no, event):
999 """Same as above, except instead of drawing adds a selectable region at
1000 a certain time."""
1001 raise NotImplementedError
1002
1003class TaskGraph(Graph):
1004 def render_surface(self, sched, regions, selectable=False):
1005 events_to_render = {}
1006 for layer in Canvas.LAYERS:
1007 events_to_render[layer] = {}
1008
1009 for region in regions:
1010 x, y, width, height = region
1011 if not selectable:
1012 self.canvas.whiteout(x, y, width, height)
1013 else:
1014 self.canvas.clear_selectable_regions(x, y, width, height)
1015
1016 self.min_time, self.max_time, self.min_item, self.max_item = None, None, None, None
1017 for region in regions:
1018 x, y, width, height = region
1019 start_time, end_time, start_item, end_item = self.get_offset_params(x, y, width, height)
1020 self._recomp_min_max(start_time, end_time, start_item, end_item)
1021
1022 for event in sched.get_time_slot_array().iter_over_period(
1023 start_time, end_time, start_item, end_item,
1024 schedule.TimeSlotArray.TASK_LIST, schedule.EVENT_LIST):
1025 events_to_render[event.get_layer()][event] = None
1026
1027 if not selectable:
1028 self.draw_skeleton(self.min_time, self.max_time,
1029 self.min_item, self.max_item)
1030
1031 #if not selectable:
1032 # for layer in events_to_render:
1033 # print 'task render on layer', layer, ':', [str(e) for e in events_to_render[layer].keys()]
1034
1035 for layer in Canvas.LAYERS:
1036 prev_events = {}
1037 for event in events_to_render[layer]:
1038 event.render(self, layer, prev_events, selectable)
1039
1040 def draw_suspend_triangle_at_time(self, time, task_no, cpu_no, selected=False):
1041 height = self._get_bar_height() * GraphFormat.BLOCK_TRIANGLE_FACTOR
1042 x = self._get_time_xpos(time)
1043 y = self._get_item_ypos(task_no) + self._get_bar_height() / 2.0 - height / 2.0
1044 self.canvas.draw_suspend_triangle(x, y, height, selected)
1045
1046 def add_sel_suspend_triangle_at_time(self, time, task_no, cpu_no, event):
1047 height = self._get_bar_height() * GraphFormat.BLOCK_TRIANGLE_FACTOR
1048 x = self._get_time_xpos(time)
1049 y = self._get_item_ypos(task_no) + self._get_bar_height() / 2.0 - height / 2.0
1050
1051 self.canvas.add_sel_suspend_triangle(x, y, height, event)
1052
1053 def draw_resume_triangle_at_time(self, time, task_no, cpu_no, selected=False):
1054 height = self._get_bar_height() * GraphFormat.BLOCK_TRIANGLE_FACTOR
1055 x = self._get_time_xpos(time)
1056 y = self._get_item_ypos(task_no) + self._get_bar_height() / 2.0 - height / 2.0
1057
1058 self.canvas.draw_resume_triangle(x, y, height, selected)
1059
1060 def add_sel_resume_triangle_at_time(self, time, task_no, cpu_no, event):
1061 height = self._get_bar_height() * GraphFormat.BLOCK_TRIANGLE_FACTOR
1062 x = self._get_time_xpos(time)
1063 y = self._get_item_ypos(task_no) + self._get_bar_height() / 2.0 - height / 2.0
1064
1065 self.canvas.add_sel_resume_triangle(x, y, height, event)
1066
1067 def draw_completion_marker_at_time(self, time, task_no, cpu_no, selected=False):
1068 height = self._get_bar_height() * GraphFormat.COMPLETION_MARKER_FACTOR
1069 x = self._get_time_xpos(time)
1070 y = self._get_item_ypos(task_no) + self._get_bar_height() - height
1071
1072 self.canvas.draw_completion_marker(x, y, height, selected)
1073
1074 def add_sel_completion_marker_at_time(self, time, task_no, cpu_no, event):
1075 height = self._get_bar_height() * GraphFormat.COMPLETION_MARKER_FACTOR
1076
1077 x = self._get_time_xpos(time)
1078 y = self._get_item_ypos(task_no) + self._get_bar_height() - height
1079
1080 self.canvas.add_sel_completion_marker(x, y, height, event)
1081
1082 def draw_release_arrow_at_time(self, time, task_no, job_no=None, selected=False):
1083 height = self._get_bar_height() * GraphFormat.BIG_ARROW_FACTOR
1084
1085 x = self._get_time_xpos(time)
1086 y = self._get_item_ypos(task_no) + self._get_bar_height() - height
1087
1088 self.canvas.draw_release_arrow_big(x, y, height, selected)
1089
1090 def add_sel_release_arrow_at_time(self, time, task_no, event):
1091 height = self._get_bar_height() * GraphFormat.BIG_ARROW_FACTOR
1092
1093 x = self._get_time_xpos(time)
1094 y = self._get_item_ypos(task_no) + self._get_bar_height() - height
1095
1096 self.canvas.add_sel_release_arrow_big(x, y, height, event)
1097
1098 def draw_deadline_arrow_at_time(self, time, task_no, job_no=None, selected=False):
1099 height = self._get_bar_height() * GraphFormat.BIG_ARROW_FACTOR
1100
1101 x = self._get_time_xpos(time)
1102 y = self._get_item_ypos(task_no)
1103
1104 self.canvas.draw_deadline_arrow_big(x, y, height, selected)
1105
1106 def add_sel_deadline_arrow_at_time(self, time, task_no, event):
1107 height = self._get_bar_height() * GraphFormat.BIG_ARROW_FACTOR
1108
1109 x = self._get_time_xpos(time)
1110 y = self._get_item_ypos(task_no)
1111
1112 self.canvas.add_sel_deadline_arrow_big(x, y, height, event)
1113
1114 def draw_bar_at_time(self, start_time, end_time, task_no, cpu_no, job_no=None, clip_side=None, selected=False):
1115 if start_time > end_time:
1116 raise ValueError("Litmus is not a time machine")
1117
1118 x = self._get_time_xpos(start_time)
1119 y = self._get_item_ypos(task_no)
1120 width = self._get_bar_width(start_time, end_time)
1121 height = self._get_bar_height()
1122
1123 self.canvas.draw_bar(x, y, width, height, cpu_no, clip_side, selected)
1124
1125 # if a job number is given, label the bar with the task number as a superscript and the job number as a subscript
1126 if job_no is not None:
1127 x += GraphFormat.BAR_LABEL_OFS
1128 y += self.attrs.y_item_size * GraphFormat.BAR_SIZE_FACTOR / 2.0
1129 self.canvas.draw_label_with_sscripts('T', str(task_no), str(job_no), x, y, \
1130 GraphFormat.DEF_FOPTS_BAR, GraphFormat.DEF_FOPTS_BAR_SSCRIPT, AlignMode.LEFT, AlignMode.CENTER)
1131
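
The label described in the comment above is drawn by the Canvas's draw_label_with_sscripts, whose internals live elsewhere in draw.py. Purely as an illustration of the idea, here is one way such a label could be rendered with PyCairo (which the viz code appears to build on); the helper name, sizes, and offsets are all invented:

    import cairo

    def draw_label_sketch(ctx, base, sup, sub, x, y, base_size=14, script_size=9):
        # hypothetical helper, *not* Canvas.draw_label_with_sscripts
        ctx.select_font_face('Sans')
        ctx.set_font_size(base_size)
        ctx.move_to(x, y)
        ctx.show_text(base)                    # e.g. 'T'
        sx = ctx.get_current_point()[0]        # x just past the base glyph
        ctx.set_font_size(script_size)
        ctx.move_to(sx, y - base_size * 0.4)   # raised script: task number
        ctx.show_text(sup)
        ctx.move_to(sx, y + base_size * 0.2)   # lowered script: job number
        ctx.show_text(sub)

    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 200, 60)
    draw_label_sketch(cairo.Context(surface), 'T', '3', '7', 20, 40)
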
1132 def add_sel_bar_at_time(self, start_time, end_time, task_no, cpu_no, event):
1133 if start_time > end_time:
1134 raise ValueError("Litmus is not a time machine")
1135
1136 x = self._get_time_xpos(start_time)
1137 y = self._get_item_ypos(task_no)
1138 width = self._get_bar_width(start_time, end_time)
1139 height = self._get_bar_height()
1140
1141 self.canvas.add_sel_bar(x, y, width, height, event)
1142
1143 def draw_mini_bar_at_time(self, start_time, end_time, task_no, cpu_no, job_no=None, clip_side=None, selected=False):
1144 if start_time > end_time:
1145 raise ValueError("Litmus is not a time machine")
1146
1147 x = self._get_time_xpos(start_time)
1148 y = self._get_item_ypos(task_no) - self._get_mini_bar_ofs()
1149 width = self._get_bar_width(start_time, end_time)
1150 height = self._get_mini_bar_height()
1151
1152 self.canvas.draw_mini_bar(x, y, width, height, Canvas.NULL_PATTERN, clip_side, selected)
1153
1154 if job_no is not None:
1155 x += GraphFormat.MINI_BAR_LABEL_OFS
1156 y += self.attrs.y_item_size * GraphFormat.MINI_BAR_SIZE_FACTOR / 2.0
1157 self.canvas.draw_label_with_sscripts('T', str(task_no), str(job_no), x, y, \
1158 GraphFormat.DEF_FOPTS_MINI_BAR, GraphFormat.DEF_FOPTS_MINI_BAR_SSCRIPT, AlignMode.LEFT, AlignMode.CENTER)
1159
1160 def add_sel_mini_bar_at_time(self, start_time, end_time, task_no, cpu_no, event):
1161 x = self._get_time_xpos(start_time)
1162 y = self._get_item_ypos(task_no) - self._get_mini_bar_ofs()
1163 width = self._get_bar_width(start_time, end_time)
1164 height = self._get_mini_bar_height()
1165
1166 self.canvas.add_sel_mini_bar(x, y, width, height, event)
1167
1168class CpuGraph(Graph):
1169 def render_surface(self, sched, regions, selectable=False):
1170 BOTTOM_EVENTS = [schedule.ReleaseEvent, schedule.DeadlineEvent, schedule.InversionStartEvent,
1171 schedule.InversionEndEvent, schedule.InversionDummy]
1172 TOP_EVENTS = [schedule.SuspendEvent, schedule.ResumeEvent, schedule.CompleteEvent,
1173 schedule.SwitchAwayEvent, schedule.SwitchToEvent, schedule.IsRunningDummy]
1174
1175 events_to_render = {}
1176 for layer in Canvas.LAYERS:
1177 events_to_render[layer] = {}
1178
1179 self.min_time, self.max_time, self.min_item, self.max_item = None, None, None, None
1180 for region in regions:
1181 x, y, width, height = region
1182 if not selectable:
1183 #self.canvas.whiteout(x, y, width, height)
1184 self.canvas.whiteout(0, 0, self.canvas.surface.width, self.canvas.surface.height)
1185 else:
1186 self.canvas.clear_selectable_regions(x, y, width, height)
1187
1188 for region in regions:
1189 x, y, width, height = region
1190 start_time, end_time, start_item, end_item = self.get_offset_params(x, y, width, height)
1191 self._recomp_min_max(start_time, end_time, start_item, end_item)
1192
1193 for event in sched.get_time_slot_array().iter_over_period(
1194 start_time, end_time, start_item, end_item,
1195 schedule.TimeSlotArray.CPU_LIST,
1196 TOP_EVENTS):
1197 events_to_render[event.get_layer()][event] = None
1198
1199 if end_item >= len(self.y_item_list):
1200 # the region extends past the last CPU row, so also render the releases,
1201 # deadlines, and inversions, which appear along the x-axis
1202 x, y, width, height = region
1203 for event in sched.get_time_slot_array().iter_over_period(
1204 start_time, end_time, 0, sched.get_num_cpus(),
1205 schedule.TimeSlotArray.CPU_LIST,
1206 BOTTOM_EVENTS):
1207 events_to_render[event.get_layer()][event] = None
1208
1209 if not selectable:
1210 self.draw_skeleton(self.min_time, self.max_time,
1211 self.min_item, self.max_item)
1212
1213 for layer in Canvas.LAYERS:
1214 prev_events = {}
1215 for event in events_to_render[layer]:
1216 event.render(self, layer, prev_events, selectable)
1217
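
In the CPU view, release/deadline arrows and inversion bars sit in a strip below the last CPU row, so render_surface above only queries for them when a damage region reaches past that row, and it widens the query to all CPUs because those bottom events are not tied to a single row. A toy version of the reach-the-axis test, with made-up row labels:

    y_item_list = ['CPU 0', 'CPU 1', 'CPU 2', 'CPU 3']   # hypothetical rows

    def region_reaches_axis(start_item, end_item):
        # mirrors the `end_item >= len(self.y_item_list)` test above
        return end_item >= len(y_item_list)

    print(region_reaches_axis(0, 2))   # False: only CPU rows 0-2 were damaged
    print(region_reaches_axis(2, 4))   # True: region extends below CPU 3
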
1218 def render(self, schedule, start_time=None, end_time=None):
1219 if start_time is None:
1220 start_time = self.start
1221 if end_time is None:
1222 end_time = self.end
1223
1224 if end_time < start_time:
1225 raise ValueError('end time must not precede start time')
1226 start_slot = self.get_time_slot(start_time)
1227 end_slot = min(len(self.time_slots), self.get_time_slot(end_time) + 1)
1228
1229 for layer in Canvas.LAYERS:
1230 prev_events = {}
1231 for i in range(start_slot, end_slot):
1232 for event in self.time_slots[i]:
1233 event.render(self, layer, prev_events)
1234
1235 def draw_suspend_triangle_at_time(self, time, task_no, cpu_no, selected=False):
1236 height = self._get_bar_height() * GraphFormat.BLOCK_TRIANGLE_FACTOR
1237 x = self._get_time_xpos(time)
1238 y = self._get_item_ypos(cpu_no) + self._get_bar_height() / 2.0 - height / 2.0
1239 self.canvas.draw_suspend_triangle(x, y, height, selected)
1240
1241 def add_sel_suspend_triangle_at_time(self, time, task_no, cpu_no, event):
1242 height = self._get_bar_height() * GraphFormat.BLOCK_TRIANGLE_FACTOR
1243 x = self._get_time_xpos(time)
1244 y = self._get_item_ypos(cpu_no) + self._get_bar_height() / 2.0 - height / 2.0
1245
1246 self.canvas.add_sel_suspend_triangle(x, y, height, event)
1247
1248 def draw_resume_triangle_at_time(self, time, task_no, cpu_no, selected=False):
1249 height = self._get_bar_height() * GraphFormat.BLOCK_TRIANGLE_FACTOR
1250 x = self._get_time_xpos(time)
1251 y = self._get_item_ypos(cpu_no) + self._get_bar_height() / 2.0 - height / 2.0
1252
1253 self.canvas.draw_resume_triangle(x, y, height, selected)
1254
1255 def add_sel_resume_triangle_at_time(self, time, task_no, cpu_no, event):
1256 height = self._get_bar_height() * GraphFormat.BLOCK_TRIANGLE_FACTOR
1257 x = self._get_time_xpos(time)
1258 y = self._get_item_ypos(cpu_no) + self._get_bar_height() / 2.0 - height / 2.0
1259
1260 self.canvas.add_sel_resume_triangle(x, y, height, event)
1261
1262 def draw_completion_marker_at_time(self, time, task_no, cpu_no, selected=False):
1263 height = self._get_bar_height() * GraphFormat.COMPLETION_MARKER_FACTOR
1264 x = self._get_time_xpos(time)
1265 y = self._get_item_ypos(cpu_no) + self._get_bar_height() - height
1266
1267 self.canvas.draw_completion_marker(x, y, height, selected)
1268
1269 def add_sel_completion_marker_at_time(self, time, task_no, cpu_no, event):
1270 height = self._get_bar_height() * GraphFormat.COMPLETION_MARKER_FACTOR
1271
1272 x = self._get_time_xpos(time)
1273 y = self._get_item_ypos(cpu_no) + self._get_bar_height() - height
1274
1275 self.canvas.add_sel_completion_marker(x, y, height, event)
1276
1277 def draw_release_arrow_at_time(self, time, task_no, job_no=None, selected=False):
1278 if job_no is None and task_no is not None:
1279 raise ValueError("Must specify a job number along with the task number")
1280
1281 height = self._get_bar_height() * GraphFormat.SMALL_ARROW_FACTOR
1282
1283 x = self._get_time_xpos(time)
1284 y = self.origin[1] - height
1285
1286 self.canvas.draw_release_arrow_small(x, y, height, selected)
1287
1288 if task_no is not None:
1289 y -= GraphFormat.ARROW_LABEL_OFS
1290 self.canvas.draw_label_with_sscripts('T', str(task_no), str(job_no), x, y, \
1291 GraphFormat.DEF_FOPTS_ARROW, GraphFormat.DEF_FOPTS_ARROW_SSCRIPT, \
1292 AlignMode.CENTER, AlignMode.BOTTOM)
1293
1294 def add_sel_release_arrow_at_time(self, time, task_no, event):
1295 height = self._get_bar_height() * GraphFormat.SMALL_ARROW_FACTOR
1296
1297 x = self._get_time_xpos(time)
1298 y = self.origin[1] - height
1299
1300 self.canvas.add_sel_release_arrow_small(x, y, height, event)
1301
1302 def draw_deadline_arrow_at_time(self, time, task_no, job_no=None, selected=False):
1303 if job_no is None and task_no is not None:
1304 raise ValueError("Must specify a job number along with the task number")
1305
1306 height = self._get_bar_height() * GraphFormat.SMALL_ARROW_FACTOR
1307
1308 x = self._get_time_xpos(time)
1309 y = self.origin[1] - height
1310
1311 self.canvas.draw_deadline_arrow_small(x, y, height, selected)
1312
1313 if task_no is not None:
1314 y -= GraphFormat.ARROW_LABEL_OFS
1315 self.canvas.draw_label_with_sscripts('T', str(task_no), str(job_no), x, y, \
1316 GraphFormat.DEF_FOPTS_ARROW, GraphFormat.DEF_FOPTS_ARROW_SSCRIPT, \
1317 AlignMode.CENTER, AlignMode.BOTTOM)
1318
1319 def add_sel_deadline_arrow_at_time(self, time, task_no, event):
1320 height = self._get_bar_height() * GraphFormat.SMALL_ARROW_FACTOR
1321
1322 x = self._get_time_xpos(time)
1323 y = self.origin[1] - height
1324
1325 self.canvas.add_sel_deadline_arrow_small(x, y, height, event)
1326
1327 def draw_bar_at_time(self, start_time, end_time, task_no, cpu_no, job_no=None, clip_side=None, selected=False):
1328 if start_time > end_time:
1329 raise ValueError("Litmus is not a time machine")
1330
1331 x = self._get_time_xpos(start_time)
1332 y = self._get_item_ypos(cpu_no)
1333 width = self._get_bar_width(start_time, end_time)
1334 height = self._get_bar_height()
1335
1336 self.canvas.draw_bar(x, y, width, height, task_no, clip_side, selected)
1337
1338 # if a job number is given, label the bar with the task number as a superscript and the job number as a subscript
1339 if job_no is not None:
1340 x += GraphFormat.BAR_LABEL_OFS
1341 y += self.attrs.y_item_size * GraphFormat.BAR_SIZE_FACTOR / 2.0
1342 self.canvas.draw_label_with_sscripts('T', str(task_no), str(job_no), x, y, \
1343 GraphFormat.DEF_FOPTS_BAR, GraphFormat.DEF_FOPTS_BAR_SSCRIPT, \
1344 AlignMode.LEFT, AlignMode.CENTER)
1345
1346 def add_sel_bar_at_time(self, start_time, end_time, task_no, cpu_no, event):
1347 x = self._get_time_xpos(start_time)
1348 y = self._get_item_ypos(cpu_no)
1349 width = self._get_bar_width(start_time, end_time)
1350 height = self._get_bar_height()
1351
1352 self.canvas.add_sel_bar(x, y, width, height, event)
1353
1354 def draw_mini_bar_at_time(self, start_time, end_time, task_no, cpu_no, job_no=None, clip_side=None, selected=False):
1355 if start_time > end_time:
1356 raise ValueError("Litmus is not a time machine")
1357
1358 x = self._get_time_xpos(start_time)
1359 y = self._get_item_ypos(len(self.y_item_list))
1360 width = self._get_bar_width(start_time, end_time)
1361 height = self._get_mini_bar_height()
1362
1363 self.canvas.draw_mini_bar(x, y, width, height, task_no, clip_side, selected)
1364
1365 if job_no is not None:
1366 x += GraphFormat.MINI_BAR_LABEL_OFS
1367 y += self.attrs.y_item_size * GraphFormat.MINI_BAR_SIZE_FACTOR / 2.0
1368 self.canvas.draw_label_with_sscripts('T', str(task_no), str(job_no), x, y, \
1369 GraphFormat.DEF_FOPTS_MINI_BAR, GraphFormat.DEF_FOPTS_MINI_BAR_SSCRIPT, AlignMode.LEFT, AlignMode.CENTER)
1370
1371 def add_sel_mini_bar_at_time(self, start_time, end_time, task_no, cpu_no, event):
1372 x = self._get_time_xpos(start_time)
1373 y = self._get_item_ypos(len(self.y_item_list))
1374 width = self._get_bar_width(start_time, end_time)
1375 height = self._get_mini_bar_height()
1376
1377 self.canvas.add_sel_mini_bar(x, y, width, height, event)
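
All of the draw_*_at_time / add_sel_*_at_time helpers above reduce to the same linear mapping: the x position comes from the event time, the y position from the row index (a task number on a TaskGraph, a CPU number on a CpuGraph), and widths from the time span. The real mapping lives in _get_time_xpos, _get_item_ypos, and _get_bar_width; the sketch below uses invented constants purely to show the shape of the computation:

    X_PER_TIME_UNIT = 0.01     # hypothetical horizontal scale (pixels per time unit)
    Y_PER_ITEM = 50.0          # hypothetical row height in pixels
    ORIGIN = (100.0, 30.0)     # hypothetical top-left corner of the plot area

    def time_to_x(time, graph_start_time=0):
        return ORIGIN[0] + (time - graph_start_time) * X_PER_TIME_UNIT

    def item_to_y(item_no):
        # item_no is a task index on a TaskGraph and a CPU index on a CpuGraph
        return ORIGIN[1] + item_no * Y_PER_ITEM

    def bar_geometry(start_time, end_time, item_no):
        x = time_to_x(start_time)
        y = item_to_y(item_no)
        width = (end_time - start_time) * X_PER_TIME_UNIT
        return x, y, width

    # e.g. a job running on CPU 1 from t=20000 to t=35000:
    print(bar_geometry(20000, 35000, 1))
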