See the LITMUS Wiki page for a general explanation of this tool.

unit_trace consists of two modules and a core. The ``core'' is a set
of Python iterators that converts the raw trace data into a sequence
of record objects. The modules are:

1) A simple module that prints the contents of each record to
stdout. This module, along with most of the core, can be found in the
reader/ directory. There is a sample script -- see sample_script.py in
the reader/ directory (it's pretty self-explanatory). Note that Mac
coded most of this, though I can probably answer questions about it
since I've had to go in there from time to time.

2) The visualizer. The GUI as it stands is very basic -- essentially
just a shell around the core visualizer component. Opening a file is
straightforward, but note that you can open several files at a time
(more often than not a trace consists of more than one file, typically
one for each CPU).

Most of the code for this is in the viz/ directory, but to run it, the
file you want to execute is visualizer.py (in the main directory).
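
The iterator style of the core can be sketched roughly as follows.
All the names below are hypothetical illustrations of the idea, not
the actual unit_trace API:

```python
# Rough sketch of the core's generator-pipeline style (hypothetical
# names; NOT the real unit_trace code).

def trace_reader(raw_events):
    """Lazily turn raw (pid, event) tuples into record objects."""
    for pid, event in raw_events:
        yield {'pid': pid, 'event': event}

def sanitizer(stream):
    """Drop bogus records (e.g. pid 0) and pass the rest through."""
    for record in stream:
        if record['pid'] != 0:
            yield record

# Stages compose by wrapping one iterator in another:
stream = trace_reader([(0, 'switch_to'), (42, 'release')])
stream = sanitizer(stream)
for record in stream:
    print('%d %s' % (record['pid'], record['event']))
# prints: 42 release
```

Each stage is a generator that wraps the previous one, which is how
the stream = ... lines in sample_script.py compose (see below).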

A few notes on how to use the GUI:

-- Scrolling is pretty obvious, though I still need to implement
   keypresses (very trivial, but when making a GUI component from
   scratch it always seems like there are a million little things you
   need to do :)

-- You can view either by task or by CPU; click the tabs at the top.

-- Mousing over the items (not the axes, though, since those are
   pretty self-explanatory) gives you information about the item that
   you moused over, displayed at the bottom.

-- You can select items: click them one at a time, or drag or
   ctrl-click to select multiple.

-- What you have selected is independent of what mode (task or CPU)
   you are operating in. So if you are curious, say, when a certain
   job is running compared to other jobs on the same CPU, you can
   click a job in task mode and then switch to CPU mode, and it will
   remain selected.

-- Right-click to get a menu of all the items you have selected (in
   the future this menu will be clickable, so that you can get the
   information about an item in its own window).

-- It is a bit laggy when lots of stuff is on the screen at once. This
   should be fairly easy to optimize, if I have correctly identified
   the problem, but it's not a huge issue (it's not _that_ slow).

But wait, there's more:

-- As of now unit_trace has no way to determine which scheduling
   algorithm was used on the trace you're loading. This matters
   because certain sections of code work only with G-EDF. The point of
   this special code is either to filter out bogus data or to generate
   extra information about the schedule (e.g. priority
   inversions). Of course, you can leave these extra steps out and it
   will still work, but you might get extra ``bogus'' information
   generated by the tracer or you might not get all the information
   you want.

-- To add or remove these extra steps, take a look at visualizer.py
   and sample_script.py. You will see some code like this:

   stream = reader.trace_reader.trace_reader(file_list)
   #stream = reader.sanitizer.sanitizer(stream)
   #stream = reader.gedf_test.gedf_test(stream)
   
   Uncommenting those lines will run the extra steps in the pipeline.
   The sanitizer filters out some bogus data (stuff like ``pid 0''),
   but so far it's only been coded for a select number of traces.
   gedf_test generates extra information about G-EDF schedules
   (the traces that are named st-g?-?.bin). If you try to run
   gedf_test on anything else, it will most likely fail.

-- What traces are you going to use? Well, you will probably want to
   use your own, but there are some samples you can try in the traces/
   directory (a couple of them do give error messages, however).
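
To illustrate what one of these extra steps looks like, here is a
hedged sketch of an analysis stage in the same generator style. The
dict record format and the gap-detection idea are assumptions for
illustration only -- this is not the real sanitizer or gedf_test code:

```python
# Hypothetical analysis stage (in the spirit of gedf_test): instead of
# filtering, it passes every record through and emits extra derived
# records when it notices something interesting.

def gap_annotator(stream, max_gap):
    """Yield every record; insert an extra 'gap' record whenever two
    consecutive timestamps are more than max_gap apart."""
    last = None
    for rec in stream:
        if last is not None and rec['time'] - last > max_gap:
            yield {'type': 'gap', 'length': rec['time'] - last}
        yield rec
        last = rec['time']

events = [{'type': 'event', 'time': t} for t in (0, 5, 100)]
for rec in gap_annotator(iter(events), max_gap=50):
    print(rec['type'])
# prints event, event, gap, event (one per line)
```

Because every stage consumes and produces the same kind of record
stream, a new step like this slots into the pipeline with one more
stream = ... line.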

How to install:

You should type

git clone -b wip-gary ssh://cvs.cs.unc.edu/cvs/proj/litmus/repo/unit-trace.git

to clone the repository. Note that you shouldn't use the master
branch, as it's pretty outdated. It's all Python so far, so no
compiling or anything like that is necessary.

Requirements:

You're going to need Python 2.5 to run this. You'll also need to
install the pycairo and pygtk libraries. If you have questions about
what these are or how to install them, ask me.

Miscellanies:

Of course, let me know if you find any bugs (I'm sure there are
plenty, since this is fairly alpha software), if you're unable to run
it, or if you have any questions.