author		Bharata B Rao <bharata@linux.vnet.ibm.com>	2011-07-21 12:43:43 -0400
committer	Ingo Molnar <mingo@elte.hu>	2011-08-14 06:03:58 -0400
commit		88ebc08ea9f721d1345d5414288a308ea42ac458 (patch)
tree		51cc33571e3c4c570bc9656020baaf1065aa0314 /Documentation
parent		d8b4986d3dbc4fabc2054d63f1d31d6ed2fb1ca8 (diff)
sched: Add documentation for bandwidth control
Basic description of usage and effect for CFS Bandwidth Control.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Paul Turner <pjt@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110721184758.498036116@google.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'Documentation')
-rw-r--r--	Documentation/scheduler/sched-bwc.txt	122
1 file changed, 122 insertions(+), 0 deletions(-)
diff --git a/Documentation/scheduler/sched-bwc.txt b/Documentation/scheduler/sched-bwc.txt
new file mode 100644
index 000000000000..f6b1873f68ab
--- /dev/null
+++ b/Documentation/scheduler/sched-bwc.txt
@@ -0,0 +1,122 @@
CFS Bandwidth Control
=====================

[ This document only discusses CPU bandwidth control for SCHED_NORMAL.
  The SCHED_RT case is covered in Documentation/scheduler/sched-rt-group.txt ]

CFS bandwidth control is a CONFIG_FAIR_GROUP_SCHED extension which allows the
specification of the maximum CPU bandwidth available to a group or hierarchy.

The bandwidth allowed for a group is specified using a quota and period. Within
each given "period" (microseconds), a group is allowed to consume only up to
"quota" microseconds of CPU time. When the CPU bandwidth consumption of a
group exceeds this limit (for that period), the tasks belonging to its
hierarchy will be throttled and are not allowed to run again until the next
period.

A group's unused runtime is globally tracked, being refreshed to the quota
specified above at each period boundary. As threads consume this bandwidth it
is transferred to cpu-local "silos" on demand. The amount transferred within
each of these updates is tunable and is described as the "slice".

Management
----------
Quota and period are managed within the cpu subsystem via cgroupfs.

cpu.cfs_quota_us: the total available run-time within a period (in microseconds)
cpu.cfs_period_us: the length of a period (in microseconds)
cpu.stat: exports throttling statistics [explained further below]

The default values are:
	cpu.cfs_period_us=100ms
	cpu.cfs_quota_us=-1

A value of -1 for cpu.cfs_quota_us indicates that the group does not have any
bandwidth restriction in place; such a group is described as an unconstrained
bandwidth group. This represents the traditional work-conserving behavior for
CFS.

Writing any (valid) positive value(s) will enact the specified bandwidth limit.
The minimum allowed value for either quota or period is 1ms. There is also an
upper bound on the period length of 1s. Additional restrictions exist when
bandwidth limits are used in a hierarchical fashion; these are explained in
more detail below.

Writing any negative value to cpu.cfs_quota_us will remove the bandwidth limit
and return the group to an unconstrained state once more.

Any updates to a group's bandwidth specification will result in it becoming
unthrottled if it is in a constrained state.

System wide settings
--------------------
For efficiency, run-time is transferred between the global pool and CPU local
"silos" in a batch fashion. This greatly reduces global accounting pressure
on large systems. The amount transferred each time such an update is required
is described as the "slice".

This is tunable via procfs:
	/proc/sys/kernel/sched_cfs_bandwidth_slice_us (default=5ms)

Larger slice values will reduce transfer overheads, while smaller values allow
for more fine-grained consumption.

Statistics
----------
A group's bandwidth statistics are exported via 3 fields in cpu.stat.

cpu.stat:
- nr_periods: Number of enforcement intervals that have elapsed.
- nr_throttled: Number of times the group has been throttled/limited.
- throttled_time: The total time duration (in nanoseconds) for which entities
  of the group have been throttled.

This interface is read-only.

Hierarchical considerations
---------------------------
The interface enforces that an individual entity's bandwidth is always
attainable, that is: max(c_i) <= C. However, over-subscription in the
aggregate case is explicitly allowed to enable work-conserving semantics
within a hierarchy.
	e.g. \Sum (c_i) may exceed C
[ Where C is the parent's bandwidth, and c_i its children ]

There are two ways in which a group may become throttled:
	a. it fully consumes its own quota within a period
	b. a parent's quota is fully consumed within its period

In case b) above, even though the child may have runtime remaining it will not
be allowed to run until the parent's runtime is refreshed.
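
To make the over-subscription case concrete, the following sketch (the group
names "parent", "A" and "B" are illustrative) gives each child as much quota
as the parent itself, so either child alone may use the full parent budget,
but together they remain capped by the parent:

	# echo 100000 > parent/cpu.cfs_quota_us /* parent: 100ms per period */
	# echo 100000 > parent/A/cpu.cfs_quota_us /* child A: 100ms per period */
	# echo 100000 > parent/B/cpu.cfs_quota_us /* child B: 100ms per period */

Here \Sum (c_i) = 200ms exceeds C = 100ms; this is permitted, and A and B
simply compete for the parent's bandwidth when both are runnable.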
92 | |||
93 | Examples | ||
94 | -------- | ||
95 | 1. Limit a group to 1 CPU worth of runtime. | ||
96 | |||
97 | If period is 250ms and quota is also 250ms, the group will get | ||
98 | 1 CPU worth of runtime every 250ms. | ||
99 | |||
100 | # echo 250000 > cpu.cfs_quota_us /* quota = 250ms */ | ||
101 | # echo 250000 > cpu.cfs_period_us /* period = 250ms */ | ||
102 | |||
2. Limit a group to 2 CPUs worth of runtime on a multi-CPU machine.

	With 500ms period and 1000ms quota, the group can get 2 CPUs worth of
	runtime every 500ms.

	# echo 1000000 > cpu.cfs_quota_us /* quota = 1000ms */
	# echo 500000 > cpu.cfs_period_us /* period = 500ms */

	The larger period here allows for increased burst capacity.

3. Limit a group to 20% of 1 CPU.

	With 50ms period, 10ms quota will be equivalent to 20% of 1 CPU.

	# echo 10000 > cpu.cfs_quota_us /* quota = 10ms */
	# echo 50000 > cpu.cfs_period_us /* period = 50ms */

	By using a small period here we are ensuring a consistent latency
	response at the expense of burst capacity.
