author     Mike Galbraith <efault@gmx.de>  2016-11-23 05:33:37 -0500
committer  Ingo Molnar <mingo@kernel.org>  2016-11-23 23:45:02 -0500
commit     83929cce95251cc77e5659bf493bd424ae0e7a67
tree       f781b56f915b5f0e39f49496e9ebc683a0bdab7a /kernel
parent     10b9dd56860e93f11cd352e8c75a33357b80b70b
sched/autogroup: Fix 64-bit kernel nice level adjustment
Michael Kerrisk reported:
> Regarding the previous paragraph... My tests indicate
> that writing *any* value to the autogroup [nice priority level]
> file causes the task group to get a lower priority.
Because autogroup didn't call the then meaningless scale_load()...
Autogroup nice level adjustment has been broken ever since load
resolution was increased for 64-bit kernels. Use scale_load() to
scale group weight.
Michael Kerrisk tested this patch to fix the problem:
> Applied and tested against 4.9-rc6 on an Intel u7 (4 cores).
> Test setup:
>
> Terminal window 1: running 40 CPU burner jobs
> Terminal window 2: running 40 CPU burner jobs
> Terminal window 3: running  1 CPU burner job
>
> Demonstrated that:
> * Writing "0" to the autogroup file for TW1 now causes no change
>   to the rate at which the processes on that terminal consume CPU.
> * Writing -20 to the autogroup file for TW1 caused those processes
>   to get the lion's share of CPU while TW2 and TW3 got a tiny amount.
> * Writing -20 to the autogroup files for TW1 and TW3 allowed the
>   process on TW3 to get as much CPU as it was getting when
>   the autogroup nice values for both terminals were 0.
Reported-by: Michael Kerrisk <mtk.manpages@gmail.com>
Tested-by: Michael Kerrisk <mtk.manpages@gmail.com>
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-man <linux-man@vger.kernel.org>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1479897217.4306.6.camel@gmx.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/sched/auto_group.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/auto_group.c b/kernel/sched/auto_group.c
index f1c8fd566246..da39489d2d80 100644
--- a/kernel/sched/auto_group.c
+++ b/kernel/sched/auto_group.c
@@ -212,6 +212,7 @@ int proc_sched_autogroup_set_nice(struct task_struct *p, int nice)
 {
 	static unsigned long next = INITIAL_JIFFIES;
 	struct autogroup *ag;
+	unsigned long shares;
 	int err;
 
 	if (nice < MIN_NICE || nice > MAX_NICE)
@@ -230,9 +231,10 @@ int proc_sched_autogroup_set_nice(struct task_struct *p, int nice)
 
 	next = HZ / 10 + jiffies;
 	ag = autogroup_task_get(p);
+	shares = scale_load(sched_prio_to_weight[nice + 20]);
 
 	down_write(&ag->lock);
-	err = sched_group_set_shares(ag->tg, sched_prio_to_weight[nice + 20]);
+	err = sched_group_set_shares(ag->tg, shares);
 	if (!err)
 		ag->nice = nice;
 	up_write(&ag->lock);