author	Xuewei Zhang <xueweiz@google.com>	2019-10-03 20:12:43 -0400
committer	Ingo Molnar <mingo@kernel.org>	2019-10-09 06:38:02 -0400
commit	4929a4e6faa0f13289a67cae98139e727f0d4a97 (patch)
tree	21ede612566dcb62c8fa74ca41e6e91b157630bc /kernel
parent	73956fc07dd7b25d4a33ab3fdd6247c60d0b237c (diff)
sched/fair: Scale bandwidth quota and period without losing quota/period ratio precision
The quota/period ratio is used to ensure a child task group won't get
more bandwidth than the parent task group, and is calculated as:

	normalized_cfs_quota() = [(quota_us << 20) / period_us]

If the quota/period ratio was changed during this scaling due to
precision loss, it will cause inconsistency between parent and child
task groups.

See below example:

A userspace container manager (kubelet) does three operations:

 1) Create a parent cgroup, set quota to 1,000us and period to 10,000us.
 2) Create a few children cgroups.
 3) Set quota to 1,000us and period to 10,000us on a child cgroup.

These operations are expected to succeed. However, if the scaling of
147/128 happens before step 3, quota and period of the parent cgroup
will be changed:

	new_quota:  1148437ns, 1148us
	new_period: 11484375ns, 11484us

And when step 3 comes in, the ratio of the child cgroup will be 104857,
which will be larger than the parent cgroup ratio (104821), and will
fail.

Scaling them by a factor of 2 will fix the problem.

Tested-by: Phil Auld <pauld@redhat.com>
Signed-off-by: Xuewei Zhang <xueweiz@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Phil Auld <pauld@redhat.com>
Cc: Anton Blanchard <anton@ozlabs.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Fixes: 2e8e19226398 ("sched/fair: Limit sched_cfs_period_timer() loop to avoid hard lockup")
Link: https://lkml.kernel.org/r/20191004001243.140897-1-xueweiz@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/sched/fair.c	36
1 file changed, 22 insertions, 14 deletions
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 83ab35e2374f..682a754ea3e1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4926,20 +4926,28 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
 		if (++count > 3) {
 			u64 new, old = ktime_to_ns(cfs_b->period);
 
-			new = (old * 147) / 128; /* ~115% */
-			new = min(new, max_cfs_quota_period);
-
-			cfs_b->period = ns_to_ktime(new);
-
-			/* since max is 1s, this is limited to 1e9^2, which fits in u64 */
-			cfs_b->quota *= new;
-			cfs_b->quota = div64_u64(cfs_b->quota, old);
-
-			pr_warn_ratelimited(
-	"cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us %lld, cfs_quota_us = %lld)\n",
-				smp_processor_id(),
-				div_u64(new, NSEC_PER_USEC),
-				div_u64(cfs_b->quota, NSEC_PER_USEC));
+			/*
+			 * Grow period by a factor of 2 to avoid losing precision.
+			 * Precision loss in the quota/period ratio can cause __cfs_schedulable
+			 * to fail.
+			 */
+			new = old * 2;
+			if (new < max_cfs_quota_period) {
+				cfs_b->period = ns_to_ktime(new);
+				cfs_b->quota *= 2;
+
+				pr_warn_ratelimited(
+	"cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us = %lld, cfs_quota_us = %lld)\n",
+					smp_processor_id(),
+					div_u64(new, NSEC_PER_USEC),
+					div_u64(cfs_b->quota, NSEC_PER_USEC));
+			} else {
+				pr_warn_ratelimited(
+	"cfs_period_timer[cpu%d]: period too short, but cannot scale up without losing precision (cfs_period_us = %lld, cfs_quota_us = %lld)\n",
+					smp_processor_id(),
+					div_u64(old, NSEC_PER_USEC),
+					div_u64(cfs_b->quota, NSEC_PER_USEC));
+			}
 
 			/* reset count so we don't come right back in here */
 			count = 0;