author		Yabin Cui <yabinc@google.com>	2019-05-17 07:52:31 -0400
committer	Ingo Molnar <mingo@kernel.org>	2019-05-24 03:00:10 -0400
commit		1b038c6e05ff70a1e66e3e571c2e6106bdb75f53 (patch)
tree		b9ca314ab80b10593d6392ea7946b5db8c648769
parent		23e3983a466cd540ffdd2bbc6e0c51e31934f941 (diff)
perf/ring_buffer: Fix exposing a temporarily decreased data_head
In perf_output_put_handle(), an IRQ/NMI can happen at the location below
and write records to the same ring buffer:
...
local_dec_and_test(&rb->nest)
... <-- an IRQ/NMI can happen here
rb->user_page->data_head = head;
...
In this case, a value A is written to data_head in the IRQ, then a value
B is written to data_head after the IRQ returns, with A > B. As a result,
data_head temporarily decreases from A to B, and a reader polling the
buffer frequently enough may observe data_head < data_tail, which leads
to unexpected behavior.
This can be fixed by moving dec(&rb->nest) to after updating data_head,
which prevents the IRQ/NMI above from updating data_head.
[ Split up by peterz. ]
Signed-off-by: Yabin Cui <yabinc@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: mark.rutland@arm.com
Fixes: ef60777c9abd ("perf: Optimize the perf_output() path by removing IRQ-disables")
Link: http://lkml.kernel.org/r/20190517115418.224478157@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
 kernel/events/ring_buffer.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 674b35383491..009467a60578 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -51,11 +51,18 @@ again:
 	head = local_read(&rb->head);
 
 	/*
-	 * IRQ/NMI can happen here, which means we can miss a head update.
+	 * IRQ/NMI can happen here and advance @rb->head, causing our
+	 * load above to be stale.
 	 */
 
-	if (!local_dec_and_test(&rb->nest))
+	/*
+	 * If this isn't the outermost nesting, we don't have to update
+	 * @rb->user_page->data_head.
+	 */
+	if (local_read(&rb->nest) > 1) {
+		local_dec(&rb->nest);
 		goto out;
+	}
 
 	/*
 	 * Since the mmap() consumer (userspace) can run on a different CPU:
@@ -87,9 +94,18 @@ again:
 	rb->user_page->data_head = head;
 
 	/*
-	 * Now check if we missed an update -- rely on previous implied
-	 * compiler barriers to force a re-read.
+	 * We must publish the head before decrementing the nest count,
+	 * otherwise an IRQ/NMI can publish a more recent head value and our
+	 * write will (temporarily) publish a stale value.
+	 */
+	barrier();
+	local_set(&rb->nest, 0);
+
+	/*
+	 * Ensure we decrement @rb->nest before we validate the @rb->head.
+	 * Otherwise we cannot be sure we caught the 'last' nested update.
 	 */
+	barrier();
 	if (unlikely(head != local_read(&rb->head))) {
 		local_inc(&rb->nest);
 		goto again;