author      Jason A. Donenfeld <Jason@zx2c4.com>       2017-03-23 07:24:43 -0400
committer   Herbert Xu <herbert@gondor.apana.org.au>   2017-03-24 09:51:33 -0400
commit      de5540d088fe97ad583cc7d396586437b32149a5 (patch)
tree        187a633220dafdca673c76a17b7e84a8a9ec6a75 /kernel/padata.c
parent      74d1cf4897f919837efc4e34d800b996936eb38e (diff)
padata: avoid race in reordering
Under extremely heavy use of padata, crashes occur, and with list
debugging turned on, this happens instead:
[87487.298728] WARNING: CPU: 1 PID: 882 at lib/list_debug.c:33
__list_add+0xae/0x130
[87487.301868] list_add corruption. prev->next should be next
(ffffb17abfc043d0), but was ffff8dba70872c80. (prev=ffff8dba70872b00).
[87487.339011] [<ffffffff9a53d075>] dump_stack+0x68/0xa3
[87487.342198] [<ffffffff99e119a1>] ? console_unlock+0x281/0x6d0
[87487.345364] [<ffffffff99d6b91f>] __warn+0xff/0x140
[87487.348513] [<ffffffff99d6b9aa>] warn_slowpath_fmt+0x4a/0x50
[87487.351659] [<ffffffff9a58b5de>] __list_add+0xae/0x130
[87487.354772] [<ffffffff9add5094>] ? _raw_spin_lock+0x64/0x70
[87487.357915] [<ffffffff99eefd66>] padata_reorder+0x1e6/0x420
[87487.361084] [<ffffffff99ef0055>] padata_do_serial+0xa5/0x120
padata_reorder calls list_add_tail with the list to which it's adding
locked, which seems correct:
spin_lock(&squeue->serial.lock);
list_add_tail(&padata->list, &squeue->serial.list);
spin_unlock(&squeue->serial.lock);
This therefore leaves only one place where such an inconsistency could
occur: if padata->list is added at the same time on two different
threads. This padata pointer comes from the function call to
padata_get_next(pd), which has in it the following block:
  next_queue = per_cpu_ptr(pd->pqueue, cpu);
  padata = NULL;
  reorder = &next_queue->reorder;
  if (!list_empty(&reorder->list)) {
          padata = list_entry(reorder->list.next,
                              struct padata_priv, list);
          spin_lock(&reorder->lock);
          list_del_init(&padata->list);
          atomic_dec(&pd->reorder_objects);
          spin_unlock(&reorder->lock);
          pd->processed++;
          goto out;
  }
out:
  return padata;
I strongly suspect that the problem here is that two threads can race
on the reorder list. Even though the deletion is locked, the call to
list_entry is not, which means it's feasible that two threads pick up
the same padata object and subsequently call list_add_tail on it at
the same time. The fix is thus to hoist that lock outside of that
block.
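
To make the race concrete, below is a minimal user-space sketch (not
kernel code; the node/rqueue types and the take_next_racy/
take_next_fixed helpers are invented for illustration) contrasting the
pre-patch pattern, where the emptiness check and list_entry run outside
the lock, with the hoisted-lock pattern the patch switches to:

#include <pthread.h>
#include <stdio.h>

/* Simplified stand-ins for list_head and the per-CPU reorder queue. */
struct node {
        struct node *prev, *next;
        int seq;
};

struct rqueue {
        struct node head;               /* circular list head */
        pthread_mutex_t lock;
};

/* Pre-patch pattern: the head is inspected before the lock is taken,
 * so two threads can both observe the same first node, both delete it,
 * and later both re-add it elsewhere -- the corruption __list_add()
 * warns about. */
static struct node *take_next_racy(struct rqueue *q)
{
        struct node *n = NULL;

        if (q->head.next != &q->head) {         /* !list_empty(), unlocked */
                n = q->head.next;               /* list_entry(), unlocked */
                pthread_mutex_lock(&q->lock);
                n->prev->next = n->next;        /* list_del_init() */
                n->next->prev = n->prev;
                n->next = n->prev = n;
                pthread_mutex_unlock(&q->lock);
        }
        return n;
}

/* Post-patch pattern: the emptiness check, the head read, and the
 * deletion all happen under the same lock, so each node can be claimed
 * by exactly one thread. */
static struct node *take_next_fixed(struct rqueue *q)
{
        struct node *n = NULL;

        pthread_mutex_lock(&q->lock);
        if (q->head.next != &q->head) {
                n = q->head.next;
                n->prev->next = n->next;
                n->next->prev = n->prev;
                n->next = n->prev = n;
        }
        pthread_mutex_unlock(&q->lock);
        return n;
}

int main(void)
{
        struct rqueue q = { .head = { &q.head, &q.head, 0 },
                            .lock = PTHREAD_MUTEX_INITIALIZER };
        struct node a = { .prev = q.head.prev, .next = &q.head, .seq = 1 };

        /* equivalent of list_add_tail(&a, &q.head) */
        q.head.prev->next = &a;
        q.head.prev = &a;

        struct node *got = take_next_fixed(&q);
        printf("dequeued seq=%d\n", got ? got->seq : -1);
        (void)take_next_racy;
        return 0;
}

With take_next_racy(), two threads that both pass the unlocked check
before either takes the lock return the same node and would then both
list_add_tail() it, which is exactly the double add the list debugging
splat reports; take_next_fixed() mirrors the patched padata_get_next().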
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Diffstat (limited to 'kernel/padata.c')
 kernel/padata.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/padata.c b/kernel/padata.c
index 05316c9f32da..3202aa17492c 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -186,19 +186,20 @@ static struct padata_priv *padata_get_next(struct parallel_data *pd)
 
         reorder = &next_queue->reorder;
 
+        spin_lock(&reorder->lock);
         if (!list_empty(&reorder->list)) {
                 padata = list_entry(reorder->list.next,
                                     struct padata_priv, list);
 
-                spin_lock(&reorder->lock);
                 list_del_init(&padata->list);
                 atomic_dec(&pd->reorder_objects);
-                spin_unlock(&reorder->lock);
 
                 pd->processed++;
 
+                spin_unlock(&reorder->lock);
                 goto out;
         }
+        spin_unlock(&reorder->lock);
 
         if (__this_cpu_read(pd->pqueue->cpu_index) == next_queue->cpu_index) {
                 padata = ERR_PTR(-ENODATA);