author    Martin KaFai Lau <kafai@fb.com>            2019-01-30 21:12:45 -0500
committer Daniel Borkmann <daniel@iogearbox.net>     2019-01-31 17:18:21 -0500
commit    7c4cd051add3d00bbff008a133c936c515eaa8fe (patch)
tree      c3cc465c280e671192beb24504d09d2046567ecc /kernel/bpf/syscall.c
parent    e16ec34039c701594d55d08a5aa49ee3e1abc821 (diff)
bpf: Fix syscall's stackmap lookup potential deadlock
The syscall's map_lookup_elem() used to avoid acquiring any spinlock
in order to optimize the reader path.
That held true until commit 557c0c6e7df8 ("bpf: convert stackmap to pre-allocation").
The syscall's map_lookup_elem(stackmap) calls bpf_stackmap_copy().
bpf_stackmap_copy() may find that the elem is no longer needed after the copy is done.
If that is the case, pcpu_freelist_push() saves the elem for later reuse.
This push requires acquiring a spinlock.
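For reference, the relevant path in kernel/bpf/stackmap.c looks roughly
as follows (a paraphrased sketch, not the verbatim kernel code; the
derivation of smap, id and trace_len from map and key is elided):

  int bpf_stackmap_copy(struct bpf_map *map, void *key, void *value)
  {
          struct stack_map_bucket *bucket, *old_bucket;

          /* take the bucket out of the slot to get a stable copy */
          bucket = xchg(&smap->buckets[id], NULL);
          if (!bucket)
                  return -ENOENT;

          memcpy(value, bucket->data, trace_len);

          /* put it back; whatever occupied the slot in the meantime is
           * no longer needed and is recycled via pcpu_freelist_push(),
           * which takes the pcpu_freelist's spinlock
           */
          old_bucket = xchg(&smap->buckets[id], bucket);
          if (old_bucket)
                  pcpu_freelist_push(&smap->freelist, &old_bucket->fnode);
          return 0;
  }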
If a tracing bpf_prog runs in the middle of the syscall's
map_lookup_elem(stackmap), and that tracing bpf_prog calls
bpf_get_stackid(stackmap), which needs the same pcpu_freelist
spinlock, it may end up in a deadlock, as reported by
Eric Dumazet in https://patchwork.ozlabs.org/patch/1030266/
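In call-chain form, the reported situation on a single CPU is roughly:

  map_lookup_elem(stackmap)                    [syscall]
    bpf_stackmap_copy()
      pcpu_freelist_push()
        raw_spin_lock(&head->lock)             <-- lock acquired
          <tracing bpf_prog fires on this CPU, e.g. from a perf
           event in NMI context>
            bpf_get_stackid(stackmap)
              pcpu_freelist_pop()
                raw_spin_lock(&head->lock)     <-- same lock: deadlock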
The situation is the same as with the syscall's map_update_elem(),
which also needs to acquire the pcpu_freelist's spinlock and could
race with a tracing bpf_prog. Hence, this patch fixes it by protecting
bpf_stackmap_copy() with this_cpu_inc(bpf_prog_active)
to prevent a tracing bpf_prog from running in the meantime.
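In outline, the fix brackets the copy with the same guard that
map_update_elem() already uses (simplified; the actual hunk is below):

  preempt_disable();
  this_cpu_inc(bpf_prog_active);   /* trace_call_bpf() sees the elevated
                                    * count and skips the bpf_prog */

  err = bpf_stackmap_copy(map, key, value);  /* may take the freelist lock */

  this_cpu_dec(bpf_prog_active);
  preempt_enable();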
Since the later commit f1a2e44a3aec ("bpf: add queue and stack maps"),
the syscall's map_lookup_elem() also acquires a spinlock for queue and
stack maps and races with a tracing bpf_prog in the same way.
Hence, this patch is forward-looking and protects the majority
of the map lookups. bpf_map_offload_lookup_elem() is the exception,
since it is for network bpf_progs only (i.e. never called by a tracing
bpf_prog).
Fixes: 557c0c6e7df8 ("bpf: convert stackmap to pre-allocation")
Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Diffstat (limited to 'kernel/bpf/syscall.c')
-rw-r--r--   kernel/bpf/syscall.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index b155cd17c1bd..8577bb7f8be6 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -713,8 +713,13 @@ static int map_lookup_elem(union bpf_attr *attr)
 
         if (bpf_map_is_dev_bound(map)) {
                 err = bpf_map_offload_lookup_elem(map, key, value);
-        } else if (map->map_type == BPF_MAP_TYPE_PERCPU_HASH ||
-                   map->map_type == BPF_MAP_TYPE_LRU_PERCPU_HASH) {
+                goto done;
+        }
+
+        preempt_disable();
+        this_cpu_inc(bpf_prog_active);
+        if (map->map_type == BPF_MAP_TYPE_PERCPU_HASH ||
+            map->map_type == BPF_MAP_TYPE_LRU_PERCPU_HASH) {
                 err = bpf_percpu_hash_copy(map, key, value);
         } else if (map->map_type == BPF_MAP_TYPE_PERCPU_ARRAY) {
                 err = bpf_percpu_array_copy(map, key, value);
@@ -744,7 +749,10 @@ static int map_lookup_elem(union bpf_attr *attr)
                 }
                 rcu_read_unlock();
         }
+        this_cpu_dec(bpf_prog_active);
+        preempt_enable();
 
+done:
         if (err)
                 goto free_value;
 