author		Frank Mayhar <fmayhar@google.com>	2008-09-12 12:54:39 -0400
committer	Ingo Molnar <mingo@elte.hu>	2008-09-14 10:25:35 -0400
commit		f06febc96ba8e0af80bcc3eaec0a109e88275fac
tree		46dba9432ef25d2eae9434ff2df638c7a268c0f1 /include/linux/sched.h
parent		6bfb09a1005193be5c81ebac9f3ef85210142650
timers: fix itimer/many thread hang
Overview
This patch reworks the handling of POSIX CPU timers, including the
ITIMER_PROF, ITIMER_VIRT timers and rlimit handling. It was put together
with the help of Roland McGrath, the owner and original writer of this code.
The problem we ran into, and the reason for this rework, has to do with using
a profiling timer in a process with a large number of threads. It appears
that the performance of the old implementation of run_posix_cpu_timers() was
at least O(n*3), where "n" is the number of threads in a process (multiple
full walks of the thread list at every tick), or worse.
Everything is fine with an increasing number of threads until the time taken
for that routine to run becomes the same as or greater than the tick time, at
which point things degrade rather quickly.
This patch fixes bug 9906, "Weird hang with NPTL and SIGPROF."
Code Changes
This rework corrects the implementation of run_posix_cpu_timers() to make it
run in constant time for a particular machine. (Performance may vary between
one machine and another depending upon whether the kernel is built as single-
or multiprocessor and, in the latter case, depending upon the number of
running processors.) To do this, at each tick we now update fields in
signal_struct as well as task_struct. The run_posix_cpu_timers() function
uses those fields to make its decisions.
We define a new structure, "task_cputime," to contain user, system and
scheduler times and use these in appropriate places:
struct task_cputime {
cputime_t utime;
cputime_t stime;
unsigned long long sum_exec_runtime;
};
This is included in the structure "thread_group_cputime," which is a new
substructure of signal_struct and which varies for uniprocessor versus
multiprocessor kernels. For uniprocessor kernels, it uses "task_cputime" as
a simple substructure, while for multiprocessor kernels it is a pointer:
	/* uniprocessor (UP): */
	struct thread_group_cputime {
		struct task_cputime totals;
	};

	/* multiprocessor (SMP): */
	struct thread_group_cputime {
		struct task_cputime *totals;
	};
We also add a new task_cputime substructure directly to signal_struct, to
cache the earliest expiration of process-wide timers, and task_cputime also
replaces the it_*_expires fields of task_struct (used for earliest expiration
of thread timers). The "thread_group_cputime" structure contains process-wide
timers that are updated via account_user_time() and friends. In the non-SMP
case the structure is a simple aggregator; unfortunately in the SMP case that
simplicity was not achievable due to cache-line contention between CPUs (in
one measured case performance was actually _worse_ on a 16-cpu system than
the same test on a 4-cpu system, due to this contention). For SMP, the
thread_group_cputime counters are maintained as a per-cpu structure allocated
using alloc_percpu(). The timer functions update only the relevant field of
the structure corresponding to the running CPU, obtained using per_cpu_ptr().
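As a rough sketch (not the literal kernel/sched.c code, which also handles
cpustat bookkeeping), the tick path feeds these totals like so:

	/*
	 * Condensed illustration of the tick-side hookup: the existing
	 * per-task accounting is unchanged; the patch adds one call to
	 * fold the same tick into the per-cpu thread group totals.
	 */
	void account_user_time(struct task_struct *p, cputime_t cputime)
	{
		/* Per-task accounting, as before. */
		p->utime = cputime_add(p->utime, cputime);

		/* New: per-thread-group accounting (defined below). */
		account_group_user_time(p, cputime);

		/* ... cpustat bookkeeping elided ... */
	}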
We define a set of inline functions in sched.h that we use to maintain the
thread_group_cputime structure and hide the differences between UP and SMP
implementations from the rest of the kernel. The thread_group_cputime_init()
function initializes the thread_group_cputime structure for the given task.
The thread_group_cputime_alloc() function is a no-op for UP; for SMP it calls
the out-of-line function thread_group_cputime_alloc_smp() to allocate and fill
in the per-cpu structures and fields. The thread_group_cputime_free()
function, also a no-op for UP, frees the per-cpu structures in SMP. The
thread_group_cputime_clone_thread() function (again a UP no-op) in SMP calls
thread_group_cputime_alloc_smp() if the per-cpu structures haven't yet been
allocated. The thread_group_cputime() function fills the task_cputime
structure it is passed with the contents of the thread_group_cputime fields;
in UP this is a simple structure copy, but in SMP it must first safely check
that tsk->signal is non-NULL (falling back to the corresponding fields of
task_struct if it is NULL) and, if it is non-NULL, sum the per-cpu values for
each online CPU. Finally, the three
functions account_group_user_time(), account_group_system_time() and
account_group_exec_runtime() are used by timer functions to update the
respective fields of the thread_group_cputime structure.
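The out-of-line SMP routines live in kernel/sched.c and therefore do not
appear in the sched.h diff below; the summation side looks roughly like the
following (a sketch only; the real routine may differ in detail):

	/*
	 * Sketch of the SMP summation: check tsk->signal safely, fall
	 * back to the task's own counters if there is no group structure,
	 * otherwise sum the per-cpu totals over the online CPUs.
	 */
	void thread_group_cputime_smp(struct task_struct *tsk,
				      struct task_cputime *times)
	{
		struct signal_struct *sig = tsk->signal;
		int i;

		if (unlikely(!sig) || !sig->cputime.totals) {
			times->utime = tsk->utime;
			times->stime = tsk->stime;
			times->sum_exec_runtime = tsk->se.sum_exec_runtime;
			return;
		}
		times->utime = times->stime = cputime_zero;
		times->sum_exec_runtime = 0;
		for_each_online_cpu(i) {
			struct task_cputime *tot;

			tot = per_cpu_ptr(sig->cputime.totals, i);
			times->utime = cputime_add(times->utime, tot->utime);
			times->stime = cputime_add(times->stime, tot->stime);
			times->sum_exec_runtime += tot->sum_exec_runtime;
		}
	}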
Non-SMP operation is trivial and will not be mentioned further.
The per-cpu structure is always allocated when a task creates its first new
thread, via a call to thread_group_cputime_clone_thread() from copy_signal().
It is freed at process exit via a call to thread_group_cputime_free() from
cleanup_signal().
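The fork-side hookup is in kernel/fork.c, not in this diff; condensed, with
error handling and most of the other copy_signal() work elided, it looks
roughly like this:

	/* Sketch of the copy_signal() call site (kernel/fork.c). */
	static int copy_signal(unsigned long clone_flags, struct task_struct *tsk)
	{
		struct signal_struct *sig;
		int ret;

		if (clone_flags & CLONE_THREAD) {
			/* First extra thread: allocate the group totals. */
			ret = thread_group_cputime_clone_thread(current, tsk);
			if (likely(!ret))
				atomic_inc(&current->signal->count);
			return ret;
		}
		sig = kmem_cache_alloc(signal_cachep, GFP_KERNEL);
		tsk->signal = sig;
		if (!sig)
			return -ENOMEM;
		thread_group_cputime_init(sig);
		/* ... remainder of signal_struct setup elided ... */
		return 0;
	}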
All functions that formerly summed utime/stime/sum_sched_runtime values from
all threads in the thread group now use thread_group_cputime() to
snapshot the values in the thread_group_cputime structure or the values in
the task structure itself if the per-cpu structure hasn't been allocated.
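As an illustration of the conversion (the surrounding code here is
hypothetical, but the pattern matches call sites such as the getrusage and
wait paths):

	cputime_t utime, stime;
	struct task_cputime totals;
	struct task_struct *t = tsk;

	/* Before: walk every thread in the group at each call. */
	utime = stime = cputime_zero;
	do {
		utime = cputime_add(utime, t->utime);
		stime = cputime_add(stime, t->stime);
		t = next_thread(t);
	} while (t != tsk);

	/* After: one snapshot of the maintained group totals. */
	thread_group_cputime(tsk, &totals);
	utime = totals.utime;
	stime = totals.stime;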
Finally, the code in kernel/posix-cpu-timers.c has changed quite a bit.
The run_posix_cpu_timers() function has been split into a fast path and a
slow path; the former safely checks whether there are any expired thread
timers and, if not, just returns, while the slow path does the heavy lifting.
With the dedicated thread group fields, timers are no longer "rebalanced"; the
process_timer_rebalance() function and related code have gone away. All
summing loops are gone and all code that used them now uses the
thread_group_cputime() inline. When process-wide timers are set, the new
task_cputime structure in signal_struct is used to cache the earliest
expiration; this is checked in the fast path.
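The shape of the split is roughly the following (a sketch; the real fast path
in kernel/posix-cpu-timers.c compares the cached expirations against the
current counts rather than merely testing whether anything is armed):

	/* Sketch: return nonzero if no expiration in *cputime is armed. */
	static int task_cputime_zero(const struct task_cputime *cputime)
	{
		return cputime_eq(cputime->utime, cputime_zero) &&
		       cputime_eq(cputime->stime, cputime_zero) &&
		       cputime->sum_exec_runtime == 0;
	}

	void run_posix_cpu_timers(struct task_struct *tsk)
	{
		/* Fast path: nothing armed for this thread or its group. */
		if (task_cputime_zero(&tsk->cputime_expires) &&
		    task_cputime_zero(&tsk->signal->cputime_expires))
			return;

		/* Slow path: check the lists, fire expired timers, rearm. */
		/* ... heavy lifting elided ... */
	}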
Performance
The fix appears not to add significant overhead to existing operations. It
generally performs the same as the current code except in two cases, one in
which it performs slightly worse (Case 5 below) and one in which it performs
very significantly better (Case 2 below). Overall it's a wash except in those
two cases.
I've since done somewhat more involved testing on a dual-core Opteron system.
Case 1: With no itimer running, for a test with 100,000 threads, the fixed
kernel took 1428.5 seconds, 513 seconds more than the unfixed kernel, all of
which was spent in system time. There were twice as many voluntary context
switches with the fix as without it.
Case 2: With an itimer running at .01 second ticks and 4000 threads (the most
an unmodified kernel can handle), the fixed kernel ran the test in
eight percent of the time (5.8 seconds as opposed to 70 seconds) and
had better tick accuracy (.012 seconds per tick as opposed to .023
seconds per tick).
Case 3: A 4000-thread test with an initial timer tick of .01 second and an
interval of 10,000 seconds (i.e. a timer that ticks only once) had
very nearly the same performance in both cases: 6.3 seconds elapsed
for the fixed kernel versus 5.5 seconds for the unfixed kernel.
With fewer threads (eight in these tests), the Case 1 test ran in essentially
the same time on both the modified and unmodified kernels (5.2 seconds versus
5.8 seconds). The Case 2 test ran in about the same time as well, 5.9 seconds
versus 5.4 seconds but again with much better tick accuracy, .013 seconds per
tick versus .025 seconds per tick for the unmodified kernel.
Since the fix affected the rlimit code, I also tested soft and hard CPU limits.
Case 4: With a hard CPU limit of 20 seconds and eight threads (and an itimer
running), the modified kernel was very slightly favored in that while
it killed the process in 19.997 seconds of CPU time (5.002 seconds of
wall time), only .003 seconds of that was system time, the rest was
user time. The unmodified kernel killed the process in 20.001 seconds
of CPU (5.014 seconds of wall time) of which .016 seconds was system
time. Really, though, the results were too close to call. The results
were essentially the same with no itimer running.
Case 5: With a soft limit of 20 seconds and a hard limit of 2000 seconds
(where the hard limit would never be reached) and an itimer running,
the modified kernel exhibited worse tick accuracy than the unmodified
kernel: .050 seconds/tick versus .028 seconds/tick. Otherwise,
performance was almost indistinguishable. With no itimer running this
test exhibited virtually identical behavior and times in both cases.
In times past I did some limited performance testing; those results are below.
On a four-cpu Opteron system without this fix, a sixteen-thread test executed
in 3569.991 seconds, of which user was 3568.435s and system was 1.556s. On
the same system with the fix, user and elapsed time were about the same, but
system time dropped to 0.007 seconds. Performance with eight, four and one
thread was comparable. Interestingly, the timer ticks with the fix seemed
more accurate: the sixteen-thread test with the fix received 149543 ticks
for 0.024 seconds per tick, while the same test without the fix received 58720
ticks for 0.061 seconds per tick. Both cases were configured for an interval of
0.01 seconds. Again, the other tests were comparable. Each thread in this
test computed the primes up to 25,000,000.
I also did a test with a large number of threads, 100,000 threads, which is
impossible without the fix. In this case each thread computed the primes only
up to 10,000 (to make the runtime manageable). System time dominated, at
1546.968 seconds out of a total 2176.906 seconds (giving a user time of
629.938s). It received 147651 ticks for 0.015 seconds per tick, still quite
accurate. There is obviously no comparable test without the fix.
Signed-off-by: Frank Mayhar <fmayhar@google.com>
Cc: Roland McGrath <roland@redhat.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'include/linux/sched.h')
 include/linux/sched.h | 257 ++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 244 insertions(+), 13 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 3d9120c5ad15..26d7a5f2d0ba 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -425,6 +425,45 @@ struct pacct_struct {
 	unsigned long ac_minflt, ac_majflt;
 };
 
+/**
+ * struct task_cputime - collected CPU time counts
+ * @utime:		time spent in user mode, in &cputime_t units
+ * @stime:		time spent in kernel mode, in &cputime_t units
+ * @sum_exec_runtime:	total time spent on the CPU, in nanoseconds
+ *
+ * This structure groups together three kinds of CPU time that are
+ * tracked for threads and thread groups.  Most things considering
+ * CPU time want to group these counts together and treat all three
+ * of them in parallel.
+ */
+struct task_cputime {
+	cputime_t utime;
+	cputime_t stime;
+	unsigned long long sum_exec_runtime;
+};
+/* Alternate field names when used to cache expirations. */
+#define prof_exp	stime
+#define virt_exp	utime
+#define sched_exp	sum_exec_runtime
+
+/**
+ * struct thread_group_cputime - thread group interval timer counts
+ * @totals:		thread group interval timers; substructure for
+ *			uniprocessor kernel, per-cpu for SMP kernel.
+ *
+ * This structure contains the version of task_cputime, above, that is
+ * used for thread group CPU clock calculations.
+ */
+#ifdef CONFIG_SMP
+struct thread_group_cputime {
+	struct task_cputime *totals;
+};
+#else
+struct thread_group_cputime {
+	struct task_cputime totals;
+};
+#endif
+
 /*
  * NOTE! "signal_struct" does not have it's own
  * locking, because a shared signal_struct always
@@ -470,6 +509,17 @@ struct signal_struct {
 	cputime_t it_prof_expires, it_virt_expires;
 	cputime_t it_prof_incr, it_virt_incr;
 
+	/*
+	 * Thread group totals for process CPU clocks.
+	 * See thread_group_cputime(), et al, for details.
+	 */
+	struct thread_group_cputime cputime;
+
+	/* Earliest-expiration cache. */
+	struct task_cputime cputime_expires;
+
+	struct list_head cpu_timers[3];
+
 	/* job control IDs */
 
 	/*
@@ -500,7 +550,7 @@ struct signal_struct {
 	 * Live threads maintain their own counters and add to these
 	 * in __exit_signal, except for the group leader.
 	 */
-	cputime_t utime, stime, cutime, cstime;
+	cputime_t cutime, cstime;
 	cputime_t gtime;
 	cputime_t cgtime;
 	unsigned long nvcsw, nivcsw, cnvcsw, cnivcsw;
@@ -509,14 +559,6 @@ struct signal_struct {
 	struct task_io_accounting ioac;
 
 	/*
-	 * Cumulative ns of scheduled CPU time for dead threads in the
-	 * group, not including a zombie group leader.  (This only differs
-	 * from jiffies_to_ns(utime + stime) if sched_clock uses something
-	 * other than jiffies.)
-	 */
-	unsigned long long sum_sched_runtime;
-
-	/*
 	 * We don't bother to synchronize most readers of this at all,
 	 * because there is no reader checking a limit that actually needs
 	 * to get both rlim_cur and rlim_max atomically, and either one
@@ -527,8 +569,6 @@ struct signal_struct {
 	 */
 	struct rlimit rlim[RLIM_NLIMITS];
 
-	struct list_head cpu_timers[3];
-
 	/* keep the process-shared keyrings here so that they do the right
 	 * thing in threads created with CLONE_THREAD */
 #ifdef CONFIG_KEYS
@@ -1134,8 +1174,7 @@ struct task_struct {
 /* mm fault and swap info: this can arguably be seen as either mm-specific or thread-specific */
 	unsigned long min_flt, maj_flt;
 
-	cputime_t it_prof_expires, it_virt_expires;
-	unsigned long long it_sched_expires;
+	struct task_cputime cputime_expires;
 	struct list_head cpu_timers[3];
 
 /* process credentials */
@@ -1585,6 +1624,7 @@ extern unsigned long long cpu_clock(int cpu);
 
 extern unsigned long long
 task_sched_runtime(struct task_struct *task);
+extern unsigned long long thread_group_sched_runtime(struct task_struct *task);
 
 /* sched_exec is called by processes performing an exec */
 #ifdef CONFIG_SMP
@@ -2082,6 +2122,198 @@ static inline int spin_needbreak(spinlock_t *lock)
 }
 
 /*
+ * Thread group CPU time accounting.
+ */
+#ifdef CONFIG_SMP
+
+extern int thread_group_cputime_alloc_smp(struct task_struct *);
+extern void thread_group_cputime_smp(struct task_struct *, struct task_cputime *);
+
+static inline void thread_group_cputime_init(struct signal_struct *sig)
+{
+	sig->cputime.totals = NULL;
+}
+
+static inline int thread_group_cputime_clone_thread(struct task_struct *curr,
+						    struct task_struct *new)
+{
+	if (curr->signal->cputime.totals)
+		return 0;
+	return thread_group_cputime_alloc_smp(curr);
+}
+
+static inline void thread_group_cputime_free(struct signal_struct *sig)
+{
+	free_percpu(sig->cputime.totals);
+}
+
+/**
+ * thread_group_cputime - Sum the thread group time fields across all CPUs.
+ *
+ * This is a wrapper for the real routine, thread_group_cputime_smp().  See
+ * that routine for details.
+ */
+static inline void thread_group_cputime(
+	struct task_struct *tsk,
+	struct task_cputime *times)
+{
+	thread_group_cputime_smp(tsk, times);
+}
+
+/**
+ * thread_group_cputime_account_user - Maintain utime for a thread group.
+ *
+ * @tgtimes:	Pointer to thread_group_cputime structure.
+ * @cputime:	Time value by which to increment the utime field of that
+ *		structure.
+ *
+ * If thread group time is being maintained, get the structure for the
+ * running CPU and update the utime field there.
+ */
+static inline void thread_group_cputime_account_user(
+	struct thread_group_cputime *tgtimes,
+	cputime_t cputime)
+{
+	if (tgtimes->totals) {
+		struct task_cputime *times;
+
+		times = per_cpu_ptr(tgtimes->totals, get_cpu());
+		times->utime = cputime_add(times->utime, cputime);
+		put_cpu_no_resched();
+	}
+}
+
+/**
+ * thread_group_cputime_account_system - Maintain stime for a thread group.
+ *
+ * @tgtimes:	Pointer to thread_group_cputime structure.
+ * @cputime:	Time value by which to increment the stime field of that
+ *		structure.
+ *
+ * If thread group time is being maintained, get the structure for the
+ * running CPU and update the stime field there.
+ */
+static inline void thread_group_cputime_account_system(
+	struct thread_group_cputime *tgtimes,
+	cputime_t cputime)
+{
+	if (tgtimes->totals) {
+		struct task_cputime *times;
+
+		times = per_cpu_ptr(tgtimes->totals, get_cpu());
+		times->stime = cputime_add(times->stime, cputime);
+		put_cpu_no_resched();
+	}
+}
+
+/**
+ * thread_group_cputime_account_exec_runtime - Maintain exec runtime for a
+ *						thread group.
+ *
+ * @tgtimes:	Pointer to thread_group_cputime structure.
+ * @ns:		Time value by which to increment the sum_exec_runtime field
+ *		of that structure.
+ *
+ * If thread group time is being maintained, get the structure for the
+ * running CPU and update the sum_exec_runtime field there.
+ */
+static inline void thread_group_cputime_account_exec_runtime(
+	struct thread_group_cputime *tgtimes,
+	unsigned long long ns)
+{
+	if (tgtimes->totals) {
+		struct task_cputime *times;
+
+		times = per_cpu_ptr(tgtimes->totals, get_cpu());
+		times->sum_exec_runtime += ns;
+		put_cpu_no_resched();
+	}
+}
+
+#else /* CONFIG_SMP */
+
+static inline void thread_group_cputime_init(struct signal_struct *sig)
+{
+	sig->cputime.totals.utime = cputime_zero;
+	sig->cputime.totals.stime = cputime_zero;
+	sig->cputime.totals.sum_exec_runtime = 0;
+}
+
+static inline int thread_group_cputime_alloc(struct task_struct *tsk)
+{
+	return 0;
+}
+
+static inline void thread_group_cputime_free(struct signal_struct *sig)
+{
+}
+
+static inline int thread_group_cputime_clone_thread(struct task_struct *curr,
+						    struct task_struct *tsk)
+{
+	return 0;
+}
+
+static inline void thread_group_cputime(struct task_struct *tsk,
+					struct task_cputime *cputime)
+{
+	*cputime = tsk->signal->cputime.totals;
+}
+
+static inline void thread_group_cputime_account_user(
+	struct thread_group_cputime *tgtimes,
+	cputime_t cputime)
+{
+	tgtimes->totals.utime = cputime_add(tgtimes->totals.utime, cputime);
+}
+
+static inline void thread_group_cputime_account_system(
+	struct thread_group_cputime *tgtimes,
+	cputime_t cputime)
+{
+	tgtimes->totals.stime = cputime_add(tgtimes->totals.stime, cputime);
+}
+
+static inline void thread_group_cputime_account_exec_runtime(
+	struct thread_group_cputime *tgtimes,
+	unsigned long long ns)
+{
+	tgtimes->totals.sum_exec_runtime += ns;
+}
+
+#endif /* CONFIG_SMP */
+
+static inline void account_group_user_time(struct task_struct *tsk,
+					   cputime_t cputime)
+{
+	struct signal_struct *sig;
+
+	sig = tsk->signal;
+	if (likely(sig))
+		thread_group_cputime_account_user(&sig->cputime, cputime);
+}
+
+static inline void account_group_system_time(struct task_struct *tsk,
+					     cputime_t cputime)
+{
+	struct signal_struct *sig;
+
+	sig = tsk->signal;
+	if (likely(sig))
+		thread_group_cputime_account_system(&sig->cputime, cputime);
+}
+
+static inline void account_group_exec_runtime(struct task_struct *tsk,
+					      unsigned long long ns)
+{
+	struct signal_struct *sig;
+
+	sig = tsk->signal;
+	if (likely(sig))
+		thread_group_cputime_account_exec_runtime(&sig->cputime, ns);
+}
+
+/*
 * Reevaluate whether the task has signals pending delivery.
 * Wake the task if so.
 * This is required every time the blocked sigset_t changes.