author		Juri Lelli <juri.lelli@arm.com>		2014-09-19 05:22:40 -0400
committer	Ingo Molnar <mingo@kernel.org>		2014-10-28 05:47:58 -0400
commit		7f51412a415d87ea8598d14722fb31e4f5701257 (patch)
tree		1b3f90cb539185177143a1bf37e5f4f8d86b64bb /kernel/cpuset.c
parent		d9aade7ae1d283097a3f626790e7c325a5c69007 (diff)
sched/deadline: Fix bandwidth check/update when migrating tasks between exclusive cpusets
Exclusive cpusets are the only way users can restrict SCHED_DEADLINE tasks'
affinity (performing what is commonly called clustered scheduling).
Unfortunately, this is currently broken for two reasons:
- No check is performed when the user tries to attach a task to
an exclusive cpuset (recall that exclusive cpusets have an
associated maximum allowed bandwidth).
- Bandwidths of source and destination cpusets are not correctly
updated after a task is migrated between them.
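For context on where that bandwidth comes from: each SCHED_DEADLINE task
reserves runtime/period of a CPU, and it is the sum of these reservations
that an exclusive cpuset must cap. The standalone sketch below shows how such
a task is typically created from userspace; it is illustrative only and not
part of this patch (struct sched_attr is declared by hand and the raw syscall
is used because glibc provides no sched_setattr() wrapper; SYS_sched_setattr
assumes kernel headers >= 3.14).

/*
 * Illustrative only: make the calling thread SCHED_DEADLINE with a
 * 10ms runtime every 100ms period (bandwidth = 0.1). Requires a kernel
 * with SCHED_DEADLINE support and appropriate privileges.
 */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE	6
#endif

struct sched_attr {		/* matches the kernel's uapi layout */
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;		/* ns */
	uint64_t sched_deadline;	/* ns */
	uint64_t sched_period;		/* ns */
};

int main(void)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size	    = sizeof(attr);
	attr.sched_policy   = SCHED_DEADLINE;
	attr.sched_runtime  = 10 * 1000 * 1000;		/*  10 ms */
	attr.sched_deadline = 100 * 1000 * 1000;	/* 100 ms */
	attr.sched_period   = 100 * 1000 * 1000;	/* 100 ms */

	/* glibc has no wrapper for sched_setattr(), so use the raw syscall */
	if (syscall(SYS_sched_setattr, 0, &attr, 0)) {
		perror("sched_setattr");
		return 1;
	}

	puts("now running as SCHED_DEADLINE");
	pause();
	return 0;
}

Restricting such a task's affinity is then done by moving it into an
exclusive cpuset, which is exactly the path this patch adds checks to.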
This patch fixes both things at once, as they are opposite faces
of the same coin.
The check is performed in cpuset_can_attach(), as there aren't any
points of failure after that function. The update is split in two
halves: we first reserve bandwidth in the destination cpuset, once
the check in cpuset_can_attach() passes, and we then release
bandwidth from the source cpuset when the task's affinity is
actually changed. This leaves time windows in which sched_setattr()
may erroneously fail in the source cpuset, but we accept that, as
we cannot perform an atomic update of both cpusets at once.
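To make the two halves concrete, here is a minimal userspace model of the
accounting described above. The types and helpers (dl_cluster, reserve_bw,
release_bw) are invented for illustration and do not correspond to the
kernel's actual data structures: a task's deadline bandwidth (runtime/period)
is reserved in the destination at can-attach time, where failure simply
rejects the attach, and released from the source only once the migration has
actually happened.

/*
 * Minimal model of the two-phase update: reserve in the destination
 * cluster first, release from the source only after the task has
 * actually moved. Purely illustrative; not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

struct dl_cluster {		/* models an exclusive cpuset's DL accounting */
	double total_bw;	/* maximum allowed bandwidth for the cluster   */
	double used_bw;		/* bandwidth currently reserved by DL tasks    */
};

/* Phase 1: at can-attach time. Failing here aborts the attach, and since
 * nothing has been touched yet there is nothing to roll back. */
static bool reserve_bw(struct dl_cluster *dst, double task_bw)
{
	if (dst->used_bw + task_bw > dst->total_bw)
		return false;	/* admission test fails: reject the attach */
	dst->used_bw += task_bw;
	return true;
}

/* Phase 2: once the task's affinity has actually been changed. */
static void release_bw(struct dl_cluster *src, double task_bw)
{
	src->used_bw -= task_bw;
}

int main(void)
{
	struct dl_cluster src = { .total_bw = 2.0, .used_bw = 0.5 };
	struct dl_cluster dst = { .total_bw = 2.0, .used_bw = 0.8 };
	double task_bw = 0.1;	/* e.g. runtime 10ms over a 100ms period */

	if (!reserve_bw(&dst, task_bw)) {
		puts("attach rejected: destination lacks DL bandwidth");
		return 1;
	}
	/* ... task migrates; its affinity is changed to dst's CPUs ... */
	release_bw(&src, task_bw);
	printf("src used %.1f/%.1f, dst used %.1f/%.1f\n",
	       src.used_bw, src.total_bw, dst.used_bw, dst.total_bw);
	return 0;
}

Between the two phases the task's bandwidth is counted in both clusters,
which is the window in which sched_setattr() in the source cpuset may
spuriously fail, as noted above.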
Reported-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Reported-by: Vincent Legout <vincent@legout.info>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Dario Faggioli <raistlin@linux.it>
Cc: Michael Trimarchi <michael@amarulasolutions.com>
Cc: Fabio Checconi <fchecconi@gmail.com>
Cc: michael@amarulasolutions.com
Cc: luca.abeni@unitn.it
Cc: Li Zefan <lizefan@huawei.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: cgroups@vger.kernel.org
Link: http://lkml.kernel.org/r/1411118561-26323-3-git-send-email-juri.lelli@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/cpuset.c')
-rw-r--r--	kernel/cpuset.c	13
1 file changed, 2 insertions(+), 11 deletions(-)
diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index 1f107c74087b..7af8577fc8f8 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -1429,17 +1429,8 @@ static int cpuset_can_attach(struct cgroup_subsys_state *css,
 		goto out_unlock;
 
 	cgroup_taskset_for_each(task, tset) {
-		/*
-		 * Kthreads which disallow setaffinity shouldn't be moved
-		 * to a new cpuset; we don't want to change their cpu
-		 * affinity and isolating such threads by their set of
-		 * allowed nodes is unnecessary.  Thus, cpusets are not
-		 * applicable for such threads.  This prevents checking for
-		 * success of set_cpus_allowed_ptr() on all attached tasks
-		 * before cpus_allowed may be changed.
-		 */
-		ret = -EINVAL;
-		if (task->flags & PF_NO_SETAFFINITY)
+		ret = task_can_attach(task, cs->cpus_allowed);
+		if (ret)
 			goto out_unlock;
 		ret = security_task_setscheduler(task);
 		if (ret)