author     Luca Abeni <luca.abeni@unitn.it>   2014-09-09 05:57:14 -0400
committer  Ingo Molnar <mingo@kernel.org>     2014-09-16 04:23:01 -0400
commit     b56bfc6cd13c25264f614320de9183a5dbcab6ca (patch)
tree       03e37adcb3161130c50ec27b06b363d85740f2d3
parent     0d9ba8b03cfaed2696de42fe15ed410ba2ec7dbe (diff)
Documentation/scheduler/sched-deadline.txt: Improve and clarify AC bits
Admission control is of key importance for SCHED_DEADLINE, since
it guarantees system schedulability (or tells us something about
the degree of guarantees we can provide to the user).
This patch improves and clarifies bits and pieces regarding AC,
both for UP and SMP systems.
Signed-off-by: Luca Abeni <luca.abeni@unitn.it>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Reviewed-by: Henrik Austad <henrik@austad.us>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dario Faggioli <raistlin@linux.it>
Cc: Juri Lelli <juri.lelli@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1410256636-26171-4-git-send-email-juri.lelli@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-rw-r--r--  Documentation/scheduler/sched-deadline.txt | 99
1 file changed, 82 insertions(+), 17 deletions(-)
diff --git a/Documentation/scheduler/sched-deadline.txt b/Documentation/scheduler/sched-deadline.txt
index f75d8327b914..0ce5e2c9ab7c 100644
--- a/Documentation/scheduler/sched-deadline.txt
+++ b/Documentation/scheduler/sched-deadline.txt
@@ -38,16 +38,17 @@ CONTENTS
 ==================
 
 SCHED_DEADLINE uses three parameters, named "runtime", "period", and
-"deadline" to schedule tasks. A SCHED_DEADLINE task is guaranteed to receive
+"deadline", to schedule tasks. A SCHED_DEADLINE task should receive
 "runtime" microseconds of execution time every "period" microseconds, and
 these "runtime" microseconds are available within "deadline" microseconds
 from the beginning of the period. In order to implement this behaviour,
 every time the task wakes up, the scheduler computes a "scheduling deadline"
 consistent with the guarantee (using the CBS[2,3] algorithm). Tasks are then
 scheduled using EDF[1] on these scheduling deadlines (the task with the
-earliest scheduling deadline is selected for execution). Notice that this
-guaranteed is respected if a proper "admission control" strategy (see Section
-"4. Bandwidth management") is used.
+earliest scheduling deadline is selected for execution). Notice that the
+task actually receives "runtime" time units within "deadline" if a proper
+"admission control" strategy (see Section "4. Bandwidth management") is used
+(clearly, if the system is overloaded this guarantee cannot be respected).
 
 Summing up, the CBS[2,3] algorithms assigns scheduling deadlines to tasks so
 that each task runs for at most its runtime every period, avoiding any
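
To make the parameter triple above concrete, here is a minimal userspace
sketch (not part of the patch) of a task requesting 10ms of runtime every
100ms, available within a 30ms deadline. It assumes the sched_setattr()
syscall introduced with SCHED_DEADLINE; since glibc provides no wrapper,
struct sched_attr is declared locally and the raw syscall is used, and at
the syscall boundary the three values are expressed in nanoseconds.

/* Illustrative only -- a hedged sketch, not the patch's content. */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE	6
#endif

struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;		/* used by SCHED_OTHER/SCHED_BATCH */
	uint32_t sched_priority;	/* used by SCHED_FIFO/SCHED_RR */
	uint64_t sched_runtime;		/* used by SCHED_DEADLINE, in ns */
	uint64_t sched_deadline;
	uint64_t sched_period;
};

int main(void)
{
	struct sched_attr attr = {
		.size		= sizeof(attr),
		.sched_policy	= SCHED_DEADLINE,
		.sched_runtime	=  10 * 1000 * 1000,	/*  10 ms */
		.sched_deadline	=  30 * 1000 * 1000,	/*  30 ms */
		.sched_period	= 100 * 1000 * 1000,	/* 100 ms */
	};

	/* pid 0 means the calling thread; the call fails (e.g. EBUSY) if
	 * admission control rejects the reservation -- see Section
	 * "4. Bandwidth management". */
	if (syscall(__NR_sched_setattr, 0, &attr, 0)) {
		perror("sched_setattr");
		return 1;
	}
	/* ... periodic work, one job per 100ms period ... */
	return 0;
}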
@@ -134,6 +135,50 @@ CONTENTS
 A real-time task can be periodic with period P if r_{j+1} = r_j + P, or
 sporadic with minimum inter-arrival time P is r_{j+1} >= r_j + P. Finally,
 d_j = r_j + D, where D is the task's relative deadline.
+The utilisation of a real-time task is defined as the ratio between its
+WCET and its period (or minimum inter-arrival time), and represents
+the fraction of CPU time needed to execute the task.
+
+If the total utilisation sum_i(WCET_i/P_i) is larger than M (with M equal
+to the number of CPUs), then the scheduler is unable to respect all the
+deadlines.
+Note that total utilisation is defined as the sum of the utilisations
+WCET_i/P_i over all the real-time tasks in the system. When considering
+multiple real-time tasks, the parameters of the i-th task are indicated
+with the "_i" suffix.
+Moreover, if the total utilisation is larger than M, then we risk starving
+non- real-time tasks by real-time tasks.
+If, instead, the total utilisation is smaller than M, then non real-time
+tasks will not be starved and the system might be able to respect all the
+deadlines.
+As a matter of fact, in this case it is possible to provide an upper bound
+for tardiness (defined as the maximum between 0 and the difference
+between the finishing time of a job and its absolute deadline).
+More precisely, it can be proven that using a global EDF scheduler the
+maximum tardiness of each task is smaller or equal than
+	((M − 1) · WCET_max − WCET_min)/(M − (M − 2) · U_max) + WCET_max
+where WCET_max = max_i{WCET_i} is the maximum WCET, WCET_min=min_i{WCET_i}
+is the minimum WCET, and U_max = max_i{WCET_i/P_i} is the maximum utilisation.
+
+If M=1 (uniprocessor system), or in case of partitioned scheduling (each
+real-time task is statically assigned to one and only one CPU), it is
+possible to formally check if all the deadlines are respected.
+If D_i = P_i for all tasks, then EDF is able to respect all the deadlines
+of all the tasks executing on a CPU if and only if the total utilisation
+of the tasks running on such a CPU is smaller or equal than 1.
+If D_i != P_i for some task, then it is possible to define the density of
+a task as C_i/min{D_i,T_i}, and EDF is able to respect all the deadlines
+of all the tasks running on a CPU if the sum sum_i C_i/min{D_i,T_i} of the
+densities of the tasks running on such a CPU is smaller or equal than 1
+(notice that this condition is only sufficient, and not necessary).
+
+On multiprocessor systems with global EDF scheduling (non partitioned
+systems), a sufficient test for schedulability can not be based on the
+utilisations (it can be shown that task sets with utilisations slightly
+larger than 1 can miss deadlines regardless of the number of CPUs M).
+However, as previously stated, enforcing that the total utilisation is smaller
+than M is enough to guarantee that non real-time tasks are not starved and
+that the tardiness of real-time tasks has an upper bound.
 
 SCHED_DEADLINE can be used to schedule real-time tasks guaranteeing that
 the jobs' deadlines of a task are respected. In order to do this, a task
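
As a quick sanity check on the tardiness bound added above, here is a worked
instance with invented numbers (purely illustrative, not taken from the
patch): M = 2 CPUs and two tasks, with WCET_1 = 5, P_1 = 10 and WCET_2 = 20,
P_2 = 40 (times in ms).

% Total utilisation stays below the number of CPUs:
\[
  U_{tot} = \sum_i \frac{WCET_i}{P_i} = \frac{5}{10} + \frac{20}{40} = 1 < M = 2
\]
% so tardiness is bounded. With WCET_max = 20, WCET_min = 5 and
% U_max = 0.5, the global-EDF bound quoted above evaluates to:
\[
  \frac{(M-1)\cdot WCET_{max} - WCET_{min}}{M - (M-2)\cdot U_{max}} + WCET_{max}
  = \frac{1 \cdot 20 - 5}{2 - 0 \cdot 0.5} + 20 = 27.5\ \mathrm{ms}
\]

That is, under these assumptions no job of either task finishes more than
27.5 ms past its absolute deadline.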
@@ -163,14 +208,22 @@ CONTENTS
 4. Bandwidth management
 =======================
 
-In order for the -deadline scheduling to be effective and useful, it is
-important to have some method to keep the allocation of the available CPU
-bandwidth to the tasks under control. This is usually called "admission
-control" and if it is not performed at all, no guarantee can be given on
-the actual scheduling of the -deadline tasks.
-
-The interface used to control the fraction of CPU bandwidth that can be
-allocated to -deadline tasks is similar to the one already used for -rt
+As previously mentioned, in order for -deadline scheduling to be
+effective and useful (that is, to be able to provide "runtime" time units
+within "deadline"), it is important to have some method to keep the allocation
+of the available fractions of CPU time to the various tasks under control.
+This is usually called "admission control" and if it is not performed, then
+no guarantee can be given on the actual scheduling of the -deadline tasks.
+
+As already stated in Section 3, a necessary condition to be respected to
+correctly schedule a set of real-time tasks is that the total utilisation
+is smaller than M. When talking about -deadline tasks, this requires that
+the sum of the ratio between runtime and period for all tasks is smaller
+than M. Notice that the ratio runtime/period is equivalent to the utilisation
+of a "traditional" real-time task, and is also often referred to as
+"bandwidth".
+The interface used to control the CPU bandwidth that can be allocated
+to -deadline tasks is similar to the one already used for -rt
 tasks with real-time group scheduling (a.k.a. RT-throttling - see
 Documentation/scheduler/sched-rt-group.txt), and is based on readable/
 writable control files located in procfs (for system wide settings).
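
The necessary condition just stated is simple to check mechanically. The
following is a conceptual sketch of that bandwidth-sum test -- illustrative
names and floating point for readability, not the kernel's internal
implementation:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct dl_params {
	uint64_t runtime_ns;
	uint64_t period_ns;
};

/* Admit a new -deadline task only if the total bandwidth
 * sum_i(runtime_i/period_i), including the newcomer, stays below the
 * number of CPUs m (the necessary condition from Section 3). */
static bool dl_admission_ok(const struct dl_params *tasks, size_t n,
			    const struct dl_params *newcomer, unsigned int m)
{
	double total = (double)newcomer->runtime_ns / newcomer->period_ns;

	for (size_t i = 0; i < n; i++)	/* bandwidth already reserved */
		total += (double)tasks[i].runtime_ns / tasks[i].period_ns;

	return total < (double)m;
}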
@@ -182,9 +235,13 @@ CONTENTS
 A main difference between deadline bandwidth management and RT-throttling
 is that -deadline tasks have bandwidth on their own (while -rt ones don't!),
 and thus we don't need a higher level throttling mechanism to enforce the
-desired bandwidth. Therefore, using this simple interface we can put a cap
-on total utilization of -deadline tasks (i.e., \Sum (runtime_i / period_i) <
-global_dl_utilization_cap).
+desired bandwidth. In other words, this means that interface parameters are
+only used at admission control time (i.e., when the user calls
+sched_setattr()). Scheduling is then performed considering actual tasks'
+parameters, so that CPU bandwidth is allocated to SCHED_DEADLINE tasks
+respecting their needs in terms of granularity. Therefore, using this simple
+interface we can put a cap on total utilization of -deadline tasks (i.e.,
+\Sum (runtime_i / period_i) < global_dl_utilization_cap).
 
 4.1 System wide settings
 ------------------------
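
For instance (example numbers, not from the patch), three -deadline tasks
each reserving 10ms every 100ms pass the cap check comfortably:

\[
  \sum_i \frac{runtime_i}{period_i} = 3 \cdot \frac{10}{100} = 0.3
  < \mathrm{global\_dl\_utilization\_cap}
\]

With the default cap of 0.95 per CPU described below, all three reservations
are admitted even on a single-CPU system.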
@@ -232,8 +289,16 @@ CONTENTS
 950000. With rt_period equal to 1000000, by default, it means that -deadline
 tasks can use at most 95%, multiplied by the number of CPUs that compose the
 root_domain, for each root_domain.
-
-A -deadline task cannot fork.
+This means that non -deadline tasks will receive at least 5% of the CPU time,
+and that -deadline tasks will receive their runtime with a guaranteed
+worst-case delay respect to the "deadline" parameter. If "deadline" = "period"
+and the cpuset mechanism is used to implement partitioned scheduling (see
+Section 5), then this simple setting of the bandwidth management is able to
+deterministically guarantee that -deadline tasks will receive their runtime
+in a period.
+
+Finally, notice that in order not to jeopardize the admission control a
+-deadline task cannot fork.
 
 5. Tasks CPU affinity
 =====================
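
Concretely, the default numbers above work out as follows for a root_domain
spanning, say, M = 4 CPUs (M chosen only for illustration):

\[
  \frac{rt\_runtime}{rt\_period} \cdot M = \frac{950000}{1000000} \cdot 4 = 3.8
\]

That is, -deadline tasks in this root_domain may reserve at most 3.8 CPUs'
worth of bandwidth, leaving at least 0.05 * 4 = 0.2 CPUs' worth (5% of the
CPU time) for non -deadline tasks.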