author     Peter Zijlstra <peterz@infradead.org>    2017-08-23 07:23:30 -0400
committer  Ingo Molnar <mingo@kernel.org>           2017-08-25 05:06:33 -0400
commit     e6f3faa734a00c606b7b06c6b9f15e5627d3245b (patch)
tree       4c3f0047d1fa1796442512e03147c40c026f25a8 /kernel/workqueue.c
parent     a1d14934ea4b9db816a8dbfeab1c3e7204a0d871 (diff)
locking/lockdep: Fix workqueue crossrelease annotation
The new completion/crossrelease annotations interact unfavourably with
the extant flush_work()/flush_workqueue() annotations.
The problem is that when a single work class does:

	wait_for_completion(&C)

and

	complete(&C)

in different executions, we'll build dependencies like:

	lock_map_acquire(W)
	complete_acquire(C)

and

	lock_map_acquire(W)
	complete_release(C)

which results in the dependency chain W->C->W, which lockdep thinks
spells deadlock, even though there is no deadlock potential since
works are run concurrently.
One possibility would be to change the work 'lock' to recursive-read,
but that would mean hitting a lockdep limitation on recursive locks.
Also, unconditionally switching to recursive-read here would fail to
detect the actual deadlock on single-threaded workqueues, which do
have a problem with this.
For now, forcefully disregard these locks for crossrelease.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: byungchul.park@lge.com
Cc: david@fromorbit.com
Cc: johannes@sipsolutions.net
Cc: oleg@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/workqueue.c')
 kernel/workqueue.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 8ad214dc15a9..c0331891dec1 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2093,7 +2093,28 @@ __acquires(&pool->lock)
 
 	lock_map_acquire(&pwq->wq->lockdep_map);
 	lock_map_acquire(&lockdep_map);
-	crossrelease_hist_start(XHLOCK_PROC);
+	/*
+	 * Strictly speaking we should do start(PROC) without holding any
+	 * locks, that is, before these two lock_map_acquire()'s.
+	 *
+	 * However, that would result in:
+	 *
+	 *	A(W1)
+	 *	WFC(C)
+	 *		A(W1)
+	 *		C(C)
+	 *
+	 * Which would create W1->C->W1 dependencies, even though there is no
+	 * actual deadlock possible. There are two solutions: use a
+	 * read-recursive acquire on the work(queue) 'locks', which then hits
+	 * the lockdep limitation on recursive locks, or simply discard
+	 * these locks.
+	 *
+	 * AFAICT there is no possible deadlock scenario between the
+	 * flush_work() and complete() primitives (except for single-threaded
+	 * workqueues), so hiding them isn't a problem.
+	 */
+	crossrelease_hist_start(XHLOCK_PROC, true);
 	trace_workqueue_execute_start(work);
 	worker->current_func(work);
 	/*