author | Ingo Molnar <mingo@kernel.org> | 2018-03-03 08:01:12 -0500
---|---|---
committer | Ingo Molnar <mingo@kernel.org> | 2018-03-03 09:50:21 -0500
commit | 97fb7a0a8944bd6d2c5634e1e0fa689a5c40bc22 (patch) |
tree | 4993de40ba9dc0cf76d2233b8292a771d8c41941 /kernel/sched/loadavg.c |
parent | c2e513821d5df5e772287f6d0c23fd17b7c2bb1a (diff) |
sched: Clean up and harmonize the coding style of the scheduler code base
A good number of small style inconsistencies have accumulated
in the scheduler core, so do a pass over them to harmonize
all these details:
- fix spelling in comments,
- use curly braces for multi-line statements,
- remove unnecessary parentheses from integer literals,
- capitalize consistently,
- remove stray newlines,
- add comments where necessary,
- remove invalid/unnecessary comments,
- align structure definitions and other data types vertically,
- add missing newlines for increased readability,
- fix vertical tabulation where it's misaligned,
- harmonize preprocessor conditional block labeling
and vertical alignment,
- remove line-breaks where they uglify the code,
- add a newline after local variable definitions.
No change in functionality:
md5:
1191fa0a890cfa8132156d2959d7e9e2 built-in.o.before.asm
1191fa0a890cfa8132156d2959d7e9e2 built-in.o.after.asm
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched/loadavg.c')
-rw-r--r-- | kernel/sched/loadavg.c | 30
1 file changed, 15 insertions, 15 deletions
```diff
diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c
index 89a989e4d758..a398e7e28a8a 100644
--- a/kernel/sched/loadavg.c
+++ b/kernel/sched/loadavg.c
@@ -32,29 +32,29 @@
  * Due to a number of reasons the above turns in the mess below:
  *
  *  - for_each_possible_cpu() is prohibitively expensive on machines with
- *    serious number of cpus, therefore we need to take a distributed approach
+ *    serious number of CPUs, therefore we need to take a distributed approach
  *    to calculating nr_active.
  *
  *        \Sum_i x_i(t) = \Sum_i x_i(t) - x_i(t_0) | x_i(t_0) := 0
  *                      = \Sum_i { \Sum_j=1 x_i(t_j) - x_i(t_j-1) }
  *
  *    So assuming nr_active := 0 when we start out -- true per definition, we
- *    can simply take per-cpu deltas and fold those into a global accumulate
+ *    can simply take per-CPU deltas and fold those into a global accumulate
  *    to obtain the same result. See calc_load_fold_active().
  *
- *    Furthermore, in order to avoid synchronizing all per-cpu delta folding
+ *    Furthermore, in order to avoid synchronizing all per-CPU delta folding
  *    across the machine, we assume 10 ticks is sufficient time for every
- *    cpu to have completed this task.
+ *    CPU to have completed this task.
  *
  *    This places an upper-bound on the IRQ-off latency of the machine. Then
  *    again, being late doesn't loose the delta, just wrecks the sample.
  *
- *  - cpu_rq()->nr_uninterruptible isn't accurately tracked per-cpu because
- *    this would add another cross-cpu cacheline miss and atomic operation
- *    to the wakeup path. Instead we increment on whatever cpu the task ran
- *    when it went into uninterruptible state and decrement on whatever cpu
+ *  - cpu_rq()->nr_uninterruptible isn't accurately tracked per-CPU because
+ *    this would add another cross-CPU cacheline miss and atomic operation
+ *    to the wakeup path. Instead we increment on whatever CPU the task ran
+ *    when it went into uninterruptible state and decrement on whatever CPU
  *    did the wakeup. This means that only the sum of nr_uninterruptible over
- *    all cpus yields the correct result.
+ *    all CPUs yields the correct result.
  *
  * This covers the NO_HZ=n code, for extra head-aches, see the comment below.
  */
```
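The comment block above describes how per-CPU deltas get folded into one global count. As a rough, compilable sketch of that idea (calc_load_fold_active() named above is real; the struct, its field layout, and the plain non-atomic global below are simplified stand-ins invented for illustration):

```c
/* Simplified stand-in for the run-queue fields that matter here. */
struct rq_sketch {
	unsigned long nr_running;       /* runnable tasks on this CPU */
	long nr_uninterruptible;        /* may go negative locally, see above */
	long calc_load_active;          /* nr_active as of the last fold */
	unsigned long calc_load_update; /* next sample window for this CPU */
};

/* Global accumulate; the kernel uses an atomic_long_t for this. */
static long calc_load_tasks_sketch;

/* This CPU's nr_active delta since it last folded. */
static long fold_active(struct rq_sketch *rq)
{
	long nr_active, delta = 0;

	nr_active = (long)rq->nr_running + rq->nr_uninterruptible;
	if (nr_active != rq->calc_load_active) {
		delta = nr_active - rq->calc_load_active;
		rq->calc_load_active = nr_active;
	}
	return delta;
}

/* Called from each CPU's tick, within the 10-tick window mentioned above. */
static void fold_into_global(struct rq_sketch *rq)
{
	calc_load_tasks_sketch += fold_active(rq); /* atomic add in reality */
}
```

Because each CPU folds a delta rather than an absolute count, a CPU that folds late does not corrupt the running sum; it merely lands its contribution in the wrong sample window, which is the "wrecks the sample" caveat above.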
```diff
@@ -115,11 +115,11 @@ calc_load(unsigned long load, unsigned long exp, unsigned long active)
  * Handle NO_HZ for the global load-average.
  *
  * Since the above described distributed algorithm to compute the global
- * load-average relies on per-cpu sampling from the tick, it is affected by
+ * load-average relies on per-CPU sampling from the tick, it is affected by
  * NO_HZ.
  *
  * The basic idea is to fold the nr_active delta into a global NO_HZ-delta upon
- * entering NO_HZ state such that we can include this as an 'extra' cpu delta
+ * entering NO_HZ state such that we can include this as an 'extra' CPU delta
  * when we read the global state.
  *
  * Obviously reality has to ruin such a delightfully simple scheme:
```
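Continuing the sketch from above, the "basic idea" paragraph translates to something like the following. Note that the real calc_load_nohz_start()/calc_load_nohz_fold() keep two alternating buckets selected by the LOAD_FREQ window, exactly to survive the complications this comment goes on to list, so this single-bucket version is the "delightfully simple scheme" before reality intrudes:

```c
/* The global NO_HZ bucket (the kernel keeps two and flips between them). */
static long calc_load_nohz_sketch;

/* On entering NO_HZ idle: fold one last time, since this CPU will stop
 * getting ticks and can no longer fold for itself. */
static void nohz_start(struct rq_sketch *rq)
{
	long delta = fold_active(rq);

	if (delta)
		calc_load_nohz_sketch += delta;
}

/* At global-average time: drain the bucket and treat it as one 'extra'
 * CPU's worth of delta. */
static long nohz_fold(void)
{
	long delta = calc_load_nohz_sketch;

	calc_load_nohz_sketch = 0;
	return delta;
}
```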
```diff
@@ -146,9 +146,9 @@ calc_load(unsigned long load, unsigned long exp, unsigned long active)
  * busy state.
  *
  * This is solved by pushing the window forward, and thus skipping the
- * sample, for this cpu (effectively using the NO_HZ-delta for this cpu which
+ * sample, for this CPU (effectively using the NO_HZ-delta for this CPU which
  * was in effect at the time the window opened). This also solves the issue
- * of having to deal with a cpu having been in NO_HZ for multiple LOAD_FREQ
+ * of having to deal with a CPU having been in NO_HZ for multiple LOAD_FREQ
  * intervals.
  *
  * When making the ILB scale, we should try to pull this in as well.
```
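The window-pushing fix reads roughly as below, modeled loosely on calc_load_nohz_stop(). LOAD_FREQ and the global calc_load_update window exist in the kernel; the wraparound-unsafe comparison and the HZ constant here are simplifications for the sketch:

```c
#define HZ_SKETCH 1000                        /* assumed tick rate */
#define LOAD_FREQ_SKETCH (5 * HZ_SKETCH + 1)  /* ~5s sample window */

static unsigned long calc_load_update_sketch; /* jiffy the next window opens */

/* On leaving NO_HZ idle: if this CPU slept past the current window's
 * opening, aim at the next window instead, skipping the stale sample. */
static void nohz_stop(struct rq_sketch *rq, unsigned long now)
{
	rq->calc_load_update = calc_load_update_sketch;
	if (now >= rq->calc_load_update)
		rq->calc_load_update += LOAD_FREQ_SKETCH;
}
```

Skipping a whole window also means a CPU that was in NO_HZ across several LOAD_FREQ intervals needs no per-interval bookkeeping of its own; catching the global average up is handled separately, by calc_load_n() below.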
```diff
@@ -299,7 +299,7 @@ calc_load_n(unsigned long load, unsigned long exp,
 }
 
 /*
- * NO_HZ can leave us missing all per-cpu ticks calling
+ * NO_HZ can leave us missing all per-CPU ticks calling
  * calc_load_fold_active(), but since a NO_HZ CPU folds its delta into
  * calc_load_nohz per calc_load_nohz_start(), all we need to do is fold
  * in the pending NO_HZ delta if our NO_HZ period crossed a load cycle boundary.
```
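The hunk headers here reference calc_load() and calc_load_n(), the fixed-point moving-average arithmetic that the NO_HZ machinery feeds into. A sketch of that arithmetic (FSHIFT and FIXED_1 match the kernel's definitions, and fixed_power_int() is the real helper the middle function mirrors, but treat the code as an illustration rather than the verbatim implementation):

```c
#define FSHIFT  11                  /* bits of fractional precision */
#define FIXED_1 (1UL << FSHIFT)     /* 1.0 in fixed point */

/* One window step of: load = load * exp + active * (1 - exp). */
static unsigned long
calc_load_sketch(unsigned long load, unsigned long exp, unsigned long active)
{
	unsigned long newload = load * exp + active * (FIXED_1 - exp);

	if (active >= load)
		newload += FIXED_1 - 1; /* round up while load is rising */
	return newload / FIXED_1;
}

/* Fixed-point x^n by square-and-multiply. */
static unsigned long
fixed_power(unsigned long x, unsigned int frac_bits, unsigned int n)
{
	unsigned long result = 1UL << frac_bits;
	unsigned long half = 1UL << (frac_bits - 1);

	while (n) {
		if (n & 1)
			result = (result * x + half) >> frac_bits;
		n >>= 1;
		if (!n)
			break;
		x = (x * x + half) >> frac_bits;
	}
	return result;
}

/* n missed windows at once: decay with exp^n instead of looping n times. */
static unsigned long
calc_load_n_sketch(unsigned long load, unsigned long exp,
		   unsigned long active, unsigned int n)
{
	return calc_load_sketch(load, fixed_power(exp, FSHIFT, n), active);
}
```

This is what makes the "crossed a load cycle boundary" case above cheap: a CPU coming out of a long NO_HZ period can fold its pending delta and update the averages in O(log n) multiplies instead of replaying every missed window.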
```diff
@@ -363,7 +363,7 @@ void calc_global_load(unsigned long ticks)
 		return;
 
 	/*
-	 * Fold the 'old' NO_HZ-delta to include all NO_HZ cpus.
+	 * Fold the 'old' NO_HZ-delta to include all NO_HZ CPUs.
 	 */
 	delta = calc_load_nohz_fold();
 	if (delta)
```
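Putting the pieces together, the update this last hunk's calc_global_load() performs looks roughly like the following, reusing the sketches above. EXP_1, EXP_5 and EXP_15 are the kernel's fixed-point decay constants for the 1-, 5- and 15-minute averages; avenrun_sketch stands in for the kernel's avenrun[] array, and the function is again an illustration, not the kernel's implementation:

```c
#define EXP_1  1884 /* 1/exp(5sec/1min) in fixed point */
#define EXP_5  2014 /* 1/exp(5sec/5min) */
#define EXP_15 2037 /* 1/exp(5sec/15min) */

static unsigned long avenrun_sketch[3]; /* 1-, 5-, 15-minute averages */

/* Runs once per LOAD_FREQ window, after the 10-tick folding window. */
static void calc_global_load_sketch(void)
{
	unsigned long active;

	/* Fold the 'old' NO_HZ-delta to include all NO_HZ CPUs. */
	calc_load_tasks_sketch += nohz_fold();

	active = calc_load_tasks_sketch > 0 ?
		 (unsigned long)calc_load_tasks_sketch * FIXED_1 : 0;

	avenrun_sketch[0] = calc_load_sketch(avenrun_sketch[0], EXP_1, active);
	avenrun_sketch[1] = calc_load_sketch(avenrun_sketch[1], EXP_5, active);
	avenrun_sketch[2] = calc_load_sketch(avenrun_sketch[2], EXP_15, active);
}
```

These are the same three averages the kernel exports through /proc/loadavg, scaled back out of fixed point when read.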