menu "LITMUS^RT"
menu "Scheduling"
config PLUGIN_CEDF
bool "Clustered-EDF"
depends on X86 && SYSFS
default y
help
Include the Clustered EDF (C-EDF) plugin in the kernel.
This is appropriate for large platforms with shared caches.
On smaller platforms (e.g., ARM PB11MPCore), using C-EDF
makes little sense since there aren't any shared caches.
config PLUGIN_PFAIR
bool "PFAIR"
depends on HIGH_RES_TIMERS && !NO_HZ
default y
help
Include the PFAIR plugin (i.e., the PD^2 scheduler) in the kernel.
The PFAIR plugin requires high resolution timers (for staggered quanta)
and does not support NO_HZ (quanta could be missed when the system is idle).
If unsure, say Yes.
config RELEASE_MASTER
	bool "Release-master Support"
	depends on ARCH_HAS_SEND_PULL_TIMERS
	default n
	help
	  Allow one processor to act as a dedicated interrupt processor
	  that services all timer interrupts, but that does not schedule
	  real-time tasks. See the RTSS'09 paper for details
	  (http://www.cs.unc.edu/~anderson/papers.html).
	  Currently only supported by GSN-EDF.

endmenu

menu "Real-Time Synchronization"
config NP_SECTION
bool "Non-preemptive section support"
default n
help
Allow tasks to become non-preemptable.
Note that plugins still need to explicitly support non-preemptivity.
Currently, only GSN-EDF and PSN-EDF have such support.
This is required to support locking protocols such as the FMLP.
If disabled, all tasks will be considered preemptable at all times.
config LITMUS_LOCKING
	bool "Support for real-time locking protocols"
	depends on NP_SECTION
	default n
	help
	  Enable LITMUS^RT's deterministic multiprocessor real-time
	  locking protocols.

	  Say Yes if you want to include locking protocols such as the FMLP and
	  Baker's SRP.

config LITMUS_AFFINITY_LOCKING
	bool "Enable affinity infrastructure in k-exclusion locking protocols."
	depends on LITMUS_LOCKING
	default n
	help
	  Enable affinity tracking infrastructure in k-exclusion locking protocols.
	  This only enables the *infrastructure*, not actual affinity algorithms.

	  If unsure, say No.

config LITMUS_NESTED_LOCKING
	bool "Support for nested inheritance in locking protocols"
	depends on LITMUS_LOCKING
	default n
	help
	  Enable nested priority inheritance.

config LITMUS_DGL_SUPPORT
	bool "Support for dynamic group locks"
	depends on LITMUS_NESTED_LOCKING
	default n
	help
	  Enable dynamic group lock support.

config LITMUS_MAX_DGL_SIZE
	int "Maximum size of a dynamic group lock."
	depends on LITMUS_DGL_SUPPORT
	range 1 128
	default "10"
	help
	  Dynamic group lock data structures are allocated on the process
	  stack when a group is requested. We cap the number of locks in a
	  dynamic group lock to avoid dynamic allocation.

	  TODO: Batch DGL requests exceeding LITMUS_MAX_DGL_SIZE.
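
# A minimal illustrative sketch (hypothetical C, not the actual LITMUS^RT
# code): bounding the group size lets the per-request bookkeeping live in a
# fixed-size array on the kernel stack instead of being allocated dynamically:
#
#   struct litmus_lock *dgl_locks[CONFIG_LITMUS_MAX_DGL_SIZE]; /* stack-allocated */
#
# This is why LITMUS_MAX_DGL_SIZE must be fixed at compile time.
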
endmenu
menu "Performance Enhancements"
config SCHED_CPU_AFFINITY
bool "Local Migration Affinity"
depends on X86
default y
help
Rescheduled tasks prefer CPUs near to their previously used CPU. This
may improve performance through possible preservation of cache affinity.
Warning: May make bugs harder to find since tasks may migrate less often.
NOTES:
* Feature is not utilized by PFair/PD^2.
Say Yes if unsure.
endmenu
menu "Tracing"
config FEATHER_TRACE
bool "Feather-Trace Infrastructure"
default y
help
Feather-Trace basic tracing infrastructure. Includes device file
driver and instrumentation point support.
There are actually two implementations of Feather-Trace.
1) A slower, but portable, default implementation.
2) Architecture-specific implementations that rewrite kernel .text at runtime.
If enabled, Feather-Trace will be based on 2) if available (currently only for x86).
However, if DEBUG_RODATA=y, then Feather-Trace will choose option 1) in any case
to avoid problems with write-protected .text pages.
Bottom line: to avoid increased overheads, choose DEBUG_RODATA=n.
Note that this option only enables the basic Feather-Trace infrastructure;
you still need to enable SCHED_TASK_TRACE and/or SCHED_OVERHEAD_TRACE to
actually enable any events.
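
# Illustrative .config fragment for low-overhead tracing on x86, following the
# "bottom line" above (a sketch, not a complete configuration):
#
#   CONFIG_DEBUG_RODATA=n           # permit the .text-rewriting implementation
#   CONFIG_FEATHER_TRACE=y
#   CONFIG_SCHED_OVERHEAD_TRACE=y   # actually emit overhead timestamps
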
config SCHED_TASK_TRACE
	bool "Trace real-time tasks"
	depends on FEATHER_TRACE
	default y
	help
	  Include support for the sched_trace_XXX() tracing functions. This
	  allows the collection of real-time task events such as job
	  completions, job releases, early completions, etc. This results in a
	  small overhead in the scheduling code. Disable if the overhead is not
	  acceptable (e.g., benchmarking).

	  Say Yes for debugging.
	  Say No for overhead tracing.

config SCHED_TASK_TRACE_SHIFT
	int "Buffer size for sched_trace_xxx() events"
	depends on SCHED_TASK_TRACE
	range 8 15
	default 9
	help
	  Select the buffer size of sched_trace_xxx() events as a power of two.
	  These buffers are statically allocated as per-CPU data. Each event
	  requires 24 bytes of storage plus one additional flag byte. Too large
	  buffers can cause issues with the per-cpu allocator (and waste
	  memory). Too small buffers can cause scheduling events to be lost. The
	  "right" size is workload dependent and depends on the number of tasks,
	  each task's period, each task's number of suspensions, and how often
	  the buffer is flushed.

	  Examples: 12 =>   4k events
	            10 =>   1k events
	             8 =>  256 events
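
# Rough sizing sketch, assuming the per-event cost stated above (24 bytes plus
# one flag byte) and 2^SCHED_TASK_TRACE_SHIFT events per CPU:
#
#   bytes_per_cpu = 2^SHIFT * (24 + 1)
#   SHIFT =  9 (default) ->  512 events -> ~12.5 KiB per CPU
#   SHIFT = 12           -> 4096 events -> ~100 KiB per CPU
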
config SCHED_OVERHEAD_TRACE
	bool "Record timestamps for overhead measurements"
	depends on FEATHER_TRACE
	default n
	help
	  Export event stream for overhead tracing.

	  Say Yes for overhead tracing.

config SCHED_DEBUG_TRACE
	bool "TRACE() debugging"
	default y
	help
	  Include support for sched_trace_log_message(), which is used to
	  implement TRACE(). If disabled, no TRACE() messages will be included
	  in the kernel, and no overheads due to debugging statements will be
	  incurred by the scheduler. Disable if the overhead is not acceptable
	  (e.g., benchmarking).

	  Say Yes for debugging.
	  Say No for overhead tracing.

config SCHED_DEBUG_TRACE_SHIFT
	int "Buffer size for TRACE() buffer"
	depends on SCHED_DEBUG_TRACE
	range 14 22
	default 18
	help
	  Select the amount of memory needed for the TRACE() buffer, as a
	  power of two. The TRACE() buffer is global and statically allocated. If
	  the buffer is too small, there will be holes in the TRACE() log if the
	  buffer-flushing task is starved.

	  The default should be sufficient for most systems. Increase the buffer
	  size if the log contains holes. Reduce the buffer size when running on
	  a memory-constrained system.

	  Examples: 14 =>  16KB
	            18 => 256KB
	            20 =>   1MB

	  This buffer is exported to userspace using a misc device as
	  'litmus/log'. On a system with default udev rules, a corresponding
	  character device node should be created at /dev/litmus/log. The buffer
	  can be flushed using cat, e.g., 'cat /dev/litmus/log > my_log_file.txt'.

config SCHED_DEBUG_TRACE_CALLER
	bool "Include [function@file:line] tag in TRACE() log"
	depends on SCHED_DEBUG_TRACE
	default n
	help
	  With this option enabled, TRACE() prepends
	  "[<function name>@<filename>:<line number>]"
	  to each message in the debug log. Enable this to aid in figuring out
	  what was called in which order. The downside is that it adds a lot of
	  clutter.

	  If unsure, say No.

config PREEMPT_STATE_TRACE
	bool "Trace preemption state machine transitions"
	depends on SCHED_DEBUG_TRACE
	default n
	help
	  With this option enabled, each CPU will log when it transitions
	  states in the preemption state machine. This state machine is
	  used to determine how to react to IPIs (avoid races with in-flight IPIs).

	  Warning: this creates a lot of information in the debug trace. Only
	  recommended when you are debugging preemption-related races.

	  If unsure, say No.

endmenu

menu "Interrupt Handling"
choice
prompt "Scheduling of interrupt bottom-halves in Litmus."
default LITMUS_SOFTIRQD_NONE
depends on LITMUS_LOCKING && !LITMUS_THREAD_ALL_SOFTIRQ
help
Schedule tasklets with known priorities in Litmus.
config LITMUS_SOFTIRQD_NONE
bool "No tasklet scheduling in Litmus."
help
Don't schedule tasklets in Litmus. Default.
config LITMUS_SOFTIRQD
bool "Spawn klitirqd interrupt handling threads."
help
Create klitirqd interrupt handling threads. Work must be
specifically dispatched to these workers. (Softirqs for
Litmus tasks are not magically redirected to klitirqd.)
G-EDF/RM, C-EDF/RM ONLY for now!
config LITMUS_PAI_SOFTIRQD
bool "Defer tasklets to context switch points."
help
Only execute scheduled tasklet bottom halves at
scheduling points. Trades context switch overhead
at the cost of non-preemptive durations of bottom half
processing.
G-EDF/RM, C-EDF/RM ONLY for now!
endchoice
config NR_LITMUS_SOFTIRQD
	int "Number of klitirqd."
	depends on LITMUS_SOFTIRQD
	range 1 4096
	default "1"
	help
	  Should be less than or equal to the number of CPUs in your system.

config LITMUS_NVIDIA
	bool "Litmus handling of NVIDIA interrupts."
	default n
	help
	  Direct tasklets from NVIDIA devices to Litmus's klitirqd
	  or PAI interrupt handling routines.

	  If unsure, say No.

config LITMUS_AFFINITY_AWARE_GPU_ASSINGMENT
	bool "Enable affinity-aware heuristics to improve GPU assignment."
	depends on LITMUS_NVIDIA && LITMUS_AFFINITY_LOCKING
	default n
	help
	  Enable several heuristics to improve the assignment
	  of GPUs to real-time tasks to reduce the overheads
	  of memory migrations.

	  If unsure, say No.

config NV_DEVICE_NUM
	int "Number of NVIDIA GPUs."
	depends on LITMUS_SOFTIRQD || LITMUS_PAI_SOFTIRQD
	range 1 4096
	default "1"
	help
	  Should be no greater than the number of CPUs and no greater than
	  the number of GPUs in your system.

config NV_MAX_SIMULT_USERS
	int "Maximum number of threads sharing a GPU simultaneously"
	depends on LITMUS_SOFTIRQD || LITMUS_PAI_SOFTIRQD
	range 1 3
	default "2"
	help
	  Should be equal to the number of copy engines plus the number of
	  execution engines of the GPUs in your system.

	  Scientific/Professional GPUs = 3 (e.g., M2070, Quadro 6000?)
	  Consumer Fermi/Kepler GPUs   = 2 (GTX-4xx through -6xx)
	  Older GPUs                   = 1 (e.g., GTX-2xx)
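
# Informal example based on the engine counts listed above: a consumer Fermi
# card such as a GTX-480 has one copy engine and one execution engine, so
# 1 + 1 = 2 simultaneous users is the appropriate setting for such a system.
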
choice
	prompt "CUDA/Driver Version Support"
	default CUDA_4_0
	depends on LITMUS_NVIDIA
	help
	  Select the version of CUDA/driver to support.

config CUDA_4_0
	bool "CUDA 4.0"
	depends on LITMUS_NVIDIA
	help
	  Support CUDA 4.0 RC2 (dev. driver version: x86_64-270.40)

config CUDA_3_2
	bool "CUDA 3.2"
	depends on LITMUS_NVIDIA
	help
	  Support CUDA 3.2 (dev. driver version: x86_64-260.24)

endchoice

endmenu

endmenu