path: root/litmus/Kconfig
Diffstat (limited to 'litmus/Kconfig')
-rw-r--r--  litmus/Kconfig | 303
1 file changed, 303 insertions(+), 0 deletions(-)
diff --git a/litmus/Kconfig b/litmus/Kconfig
new file mode 100644
index 00000000000..795fbe1a769
--- /dev/null
+++ b/litmus/Kconfig
@@ -0,0 +1,303 @@
menu "LITMUS^RT"

menu "Scheduling"

config PLUGIN_CEDF
	bool "Clustered-EDF"
	depends on X86 && SYSFS
	default y
	help
	  Include the Clustered EDF (C-EDF) plugin in the kernel.
	  This is appropriate for large platforms with shared caches.
	  On smaller platforms (e.g., ARM PB11MPCore), using C-EDF
	  makes little sense since there aren't any shared caches.

config PLUGIN_PFAIR
	bool "PFAIR"
	depends on HIGH_RES_TIMERS && !NO_HZ
	default y
	help
	  Include the PFAIR plugin (i.e., the PD^2 scheduler) in the kernel.
	  The PFAIR plugin requires high-resolution timers (for staggered
	  quanta) and does not support NO_HZ (quanta could be missed when
	  the system is idle).

	  If unsure, say Yes.

config RELEASE_MASTER
	bool "Release-master Support"
	depends on ARCH_HAS_SEND_PULL_TIMERS
	default n
	help
	  Allow one processor to act as a dedicated interrupt processor
	  that services all timer interrupts, but that does not schedule
	  real-time tasks. See the RTSS'09 paper for details
	  (http://www.cs.unc.edu/~anderson/papers.html).
	  Currently only supported by GSN-EDF.

endmenu

menu "Real-Time Synchronization"

config NP_SECTION
	bool "Non-preemptive section support"
	default n
	help
	  Allow tasks to become non-preemptable.
	  Note that plugins still need to explicitly support non-preemptivity.
	  Currently, only GSN-EDF and PSN-EDF have such support.

	  This is required to support locking protocols such as the FMLP.
	  If disabled, all tasks will be considered preemptable at all times.

config LITMUS_LOCKING
	bool "Support for real-time locking protocols"
	depends on NP_SECTION
	default n
	help
	  Enable LITMUS^RT's deterministic multiprocessor real-time
	  locking protocols.

	  Say Yes if you want to include locking protocols such as the FMLP
	  and Baker's SRP.

endmenu

menu "Performance Enhancements"

config SCHED_CPU_AFFINITY
	bool "Local Migration Affinity"
	depends on X86
	default y
	help
	  Rescheduled tasks prefer CPUs near their previously used CPU. This
	  may improve performance through possible preservation of cache
	  affinity.

	  Warning: May make bugs harder to find since tasks may migrate less
	  often.

	  NOTES:
	  * This feature is not used by PFair/PD^2.

	  Say Yes if unsure.

config ALLOW_EARLY_RELEASE
	bool "Allow Early Releasing"
	default y
	help
	  Allow tasks to release jobs early (while still maintaining job
	  precedence constraints). Only supported by EDF schedulers. Early
	  releasing must be explicitly requested by real-time tasks via
	  the task_params passed to sys_set_task_rt_param().

	  Early releasing can improve job response times while maintaining
	  real-time correctness. However, it can easily peg your CPUs
	  since tasks never suspend to wait for their next job. As such, early
	  releasing is really only useful in the context of implementing
	  bandwidth servers, interrupt handling threads, or short-lived
	  computations.

	  Beware that early releasing may affect real-time analysis
	  if locking protocols or I/O are used.

	  Say Yes if unsure.

choice
	prompt "EDF Tie-Break Behavior"
	default EDF_TIE_BREAK_LATENESS_NORM
	help
	  Allows the configuration of tie-breaking behavior when the deadlines
	  of two EDF-scheduled tasks are equal.

	config EDF_TIE_BREAK_LATENESS
		bool "Lateness-based Tie Break"
		help
		  Break ties between two jobs, A and B, based upon the lateness
		  of their prior jobs. The job with the greatest lateness has
		  priority. Note that lateness has a negative value if the prior
		  job finished before its deadline.

	config EDF_TIE_BREAK_LATENESS_NORM
		bool "Normalized Lateness-based Tie Break"
		help
		  Break ties between two jobs, A and B, based upon the lateness,
		  normalized by relative deadline, of their prior jobs. The job
		  with the greatest normalized lateness has priority. Note that
		  lateness has a negative value if the prior job finished before
		  its deadline.

		  Normalized lateness tie-breaks are likely desirable over
		  non-normalized tie-breaks if the execution times and/or
		  relative deadlines of tasks in a task set vary greatly.

	config EDF_TIE_BREAK_HASH
		bool "Hash-based Tie Breaks"
		help
		  Break ties between two jobs, A and B, with equal deadlines by
		  using a uniform hash; i.e.: hash(A.pid, A.job_num) <
		  hash(B.pid, B.job_num). Job A has a ~50% chance of winning a
		  given tie-break.

	config EDF_PID_TIE_BREAK
		bool "PID-based Tie Breaks"
		help
		  Break ties based upon OS-assigned thread IDs. Use this option
		  if it is required by an algorithm's real-time analysis or if
		  per-task response-time jitter must be minimized.

		  NOTES:
		  * This tie-breaking method was the default in Litmus 2012.2
		    and before.

endchoice
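As an aside, the hash-based tie-break above can be sketched in shell. This is only an illustration: cksum stands in for the kernel's actual hash function, and the PIDs and job numbers are made-up values.

```shell
# Sketch of the hash-based tie-break described above. cksum is a
# stand-in hash, NOT the kernel's; PIDs and job numbers are invented.
tie_break_hash() { printf '%s:%s' "$1" "$2" | cksum | cut -d' ' -f1; }

a=$(tie_break_hash 1234 7)   # hash(A.pid, A.job_num)
b=$(tie_break_hash 5678 7)   # hash(B.pid, B.job_num)
if [ "$a" -lt "$b" ]; then winner=A; else winner=B; fi
echo "tie broken in favor of job $winner"
```

Because the hash is uniform in the inputs, each job wins roughly half of all tie-breaks over many (pid, job_num) pairs.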

endmenu

menu "Tracing"

config FEATHER_TRACE
	bool "Feather-Trace Infrastructure"
	default y
	help
	  Feather-Trace basic tracing infrastructure. Includes device file
	  driver and instrumentation point support.

	  There are actually two implementations of Feather-Trace:
	  1) a slower, but portable, default implementation, and
	  2) architecture-specific implementations that rewrite kernel .text
	     at runtime.

	  If enabled, Feather-Trace will be based on 2) if available
	  (currently only for x86). However, if DEBUG_RODATA=y, then
	  Feather-Trace will choose option 1) in any case to avoid problems
	  with write-protected .text pages.

	  Bottom line: to avoid increased overheads, choose DEBUG_RODATA=n.

	  Note that this option only enables the basic Feather-Trace
	  infrastructure; you still need to enable SCHED_TASK_TRACE and/or
	  SCHED_OVERHEAD_TRACE to actually enable any events.

config SCHED_TASK_TRACE
	bool "Trace real-time tasks"
	depends on FEATHER_TRACE
	default y
	help
	  Include support for the sched_trace_XXX() tracing functions. This
	  allows the collection of real-time task events such as job
	  completions, job releases, early completions, etc. This results in a
	  small overhead in the scheduling code. Disable if the overhead is
	  not acceptable (e.g., benchmarking).

	  Say Yes for debugging.
	  Say No for overhead tracing.

config SCHED_TASK_TRACE_SHIFT
	int "Buffer size for sched_trace_xxx() events"
	depends on SCHED_TASK_TRACE
	range 8 13
	default 9
	help
	  Select the buffer size of sched_trace_xxx() events as a power of
	  two. These buffers are statically allocated as per-CPU data. Each
	  event requires 24 bytes of storage plus one additional flag byte.
	  Buffers that are too large can cause issues with the per-CPU
	  allocator (and waste memory); buffers that are too small can cause
	  scheduling events to be lost. The "right" size is
	  workload-dependent and depends on the number of tasks, each task's
	  period, each task's number of suspensions, and how often the buffer
	  is flushed.

	  Examples: 12 => 4k events
	            10 => 1k events
	             8 => 256 events

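The per-CPU cost of a given shift value can be sanity-checked with a quick computation. This sketch uses the 24 + 1 bytes/event figure from the help text above; the shift value is simply the option's default, not a recommendation.

```shell
# Per-CPU footprint of the sched_trace_xxx() buffer for a given shift.
# Assumes 24 payload bytes + 1 flag byte per event, as stated above.
shift_bits=9                     # CONFIG_SCHED_TASK_TRACE_SHIFT default
events=$((1 << shift_bits))      # buffer capacity in events
bytes=$((events * (24 + 1)))     # static per-CPU allocation in bytes
echo "$events events, $bytes bytes per CPU"
```

For the default of 9 this prints "512 events, 12800 bytes per CPU".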
config SCHED_LITMUS_TRACEPOINT
	bool "Enable Event/Tracepoint Tracing for real-time task tracing"
	depends on TRACEPOINTS
	default n
	help
	  Enable kernel-style events (tracepoints) for Litmus. Litmus events
	  trace the same functions as the sched_trace_XXX() functions above,
	  but can be enabled independently.
	  Litmus tracepoints can be recorded and analyzed together (single
	  time reference) with all other kernel tracing events (e.g.,
	  sched:sched_switch, etc.).

	  This also enables a quick way to visualize schedule traces using
	  the trace-cmd utility and the KernelShark visualizer.

	  Say Yes for debugging and visualization purposes.
	  Say No for overhead tracing.

config SCHED_OVERHEAD_TRACE
	bool "Record timestamps for overhead measurements"
	depends on FEATHER_TRACE
	default n
	help
	  Export an event stream for overhead tracing.
	  Say Yes for overhead tracing.

config SCHED_DEBUG_TRACE
	bool "TRACE() debugging"
	default y
	help
	  Include support for sched_trace_log_message(), which is used to
	  implement TRACE(). If disabled, no TRACE() messages will be included
	  in the kernel, and no overheads due to debugging statements will be
	  incurred by the scheduler. Disable if the overhead is not acceptable
	  (e.g., benchmarking).

	  Say Yes for debugging.
	  Say No for overhead tracing.

config SCHED_DEBUG_TRACE_SHIFT
	int "Buffer size for TRACE() buffer"
	depends on SCHED_DEBUG_TRACE
	range 14 22
	default 18
	help
	  Select the amount of memory needed for the TRACE() buffer, as a
	  power of two. The TRACE() buffer is global and statically allocated.
	  If the buffer is too small, there will be holes in the TRACE() log
	  if the buffer-flushing task is starved.

	  The default should be sufficient for most systems. Increase the
	  buffer size if the log contains holes. Reduce the buffer size when
	  running on a memory-constrained system.

	  Examples: 14 => 16KB
	            18 => 256KB
	            20 => 1MB

	  This buffer is exported to userspace using a misc device as
	  'litmus/log'. On a system with default udev rules, a corresponding
	  character device node should be created at /dev/litmus/log. The
	  buffer can be flushed using cat, e.g.,
	  'cat /dev/litmus/log > my_log_file.txt'.

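The size examples above can be reproduced with a short computation (a sketch; note that, unlike the per-CPU task-trace buffers, this buffer is a single global allocation):

```shell
# TRACE() buffer size for a few SCHED_DEBUG_TRACE_SHIFT values.
for shift_bits in 14 18 20; do
    bytes=$((1 << shift_bits))
    echo "shift $shift_bits => $((bytes / 1024)) KB"
done
```

This prints 16 KB, 256 KB, and 1024 KB (i.e., 1 MB), matching the examples in the help text.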
config SCHED_DEBUG_TRACE_CALLER
	bool "Include [function@file:line] tag in TRACE() log"
	depends on SCHED_DEBUG_TRACE
	default n
	help
	  With this option enabled, TRACE() prepends

	  "[<function name>@<filename>:<line number>]"

	  to each message in the debug log. Enable this to aid in figuring out
	  what was called in which order. The downside is that it adds a lot
	  of clutter.

	  If unsure, say No.

config PREEMPT_STATE_TRACE
	bool "Trace preemption state machine transitions"
	depends on SCHED_DEBUG_TRACE && DEBUG_KERNEL
	default n
	help
	  With this option enabled, each CPU will log when it transitions
	  states in the preemption state machine. This state machine is
	  used to determine how to react to IPIs (avoid races with in-flight
	  IPIs).

	  Warning: this creates a lot of information in the debug trace. Only
	  recommended when you are debugging preemption-related races.

	  If unsure, say No.

endmenu

endmenu
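As a usage sketch, a .config fragment selecting the options above for an overhead-measurement build might look as follows. The combination shown is an illustrative assumption, not a recommended configuration; only the option names are taken from this file.

```shell
# Hypothetical .config fragment: overhead tracing on, debug tracing off,
# following the "Say No for overhead tracing" guidance in the help texts.
CONFIG_PLUGIN_CEDF=y
CONFIG_PLUGIN_PFAIR=y
CONFIG_FEATHER_TRACE=y
CONFIG_SCHED_OVERHEAD_TRACE=y
# CONFIG_SCHED_TASK_TRACE is not set
# CONFIG_SCHED_DEBUG_TRACE is not set
```

Disabling SCHED_TASK_TRACE and SCHED_DEBUG_TRACE here reflects the "Say No for overhead tracing" advice those options give about their own instrumentation overhead.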