author	Paul E. McKenney <paulmck@linux.vnet.ibm.com>	2013-11-02 13:17:52 -0400
committer	Paul E. McKenney <paulmck@linux.vnet.ibm.com>	2013-12-03 13:08:56 -0500
commit	9873552fc1b01ef9bddc9fec4c492d9fa8b27f51 (patch)
tree	e4ea4d9a9f2ce20f686c0aa1900623b0471936a6
parent	97e63f0caf0e977a1648b50943909c87a3f683d3 (diff)
documentation: Fix circular-buffer example.
The code sample in Documentation/circular-buffers.txt appears to have a
few ordering bugs.  This patch therefore applies the needed fixes.

Reported-by: Lech Fomicki <lfomicki@poczta.fm>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-rw-r--r--	Documentation/circular-buffers.txt	22
1 file changed, 15 insertions, 7 deletions
diff --git a/Documentation/circular-buffers.txt b/Documentation/circular-buffers.txt
index 8117e5bf6065..a36bed3db4ee 100644
--- a/Documentation/circular-buffers.txt
+++ b/Documentation/circular-buffers.txt
@@ -170,7 +170,7 @@ The producer will look something like this:
 
 	smp_wmb(); /* commit the item before incrementing the head */
 
-	buffer->head = (head + 1) & (buffer->size - 1);
+	ACCESS_ONCE(buffer->head) = (head + 1) & (buffer->size - 1);
 
 	/* wake_up() will make sure that the head is committed before
 	 * waking anyone up */
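[For reference, a sketch of the producer as it reads with this hunk applied, following the CIRC_SPACE()-based example earlier in circular-buffers.txt; produce_item(), the lock name, and the consumer wait-queue are that example's placeholders:]

	spin_lock(&producer_lock);

	unsigned long head = buffer->head;
	unsigned long tail = ACCESS_ONCE(buffer->tail);

	if (CIRC_SPACE(head, tail, buffer->size) >= 1) {
		/* insert one item into the buffer */
		struct item *item = buffer[head];

		produce_item(item);

		smp_wmb(); /* commit the item before incrementing the head */

		ACCESS_ONCE(buffer->head) = (head + 1) & (buffer->size - 1);

		/* wake_up() will make sure that the head is committed before
		 * waking anyone up */
		wake_up(consumer);
	}

	spin_unlock(&producer_lock);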
@@ -183,9 +183,14 @@ This will instruct the CPU that the contents of the new item must be written
 before the head index makes it available to the consumer and then instructs the
 CPU that the revised head index must be written before the consumer is woken.
 
-Note that wake_up() doesn't have to be the exact mechanism used, but whatever
-is used must guarantee a (write) memory barrier between the update of the head
-index and the change of state of the consumer, if a change of state occurs.
+Note that wake_up() does not guarantee any sort of barrier unless something
+is actually awakened.  We therefore cannot rely on it for ordering.  However,
+there is always one element of the array left empty.  Therefore, the
+producer must produce two elements before it could possibly corrupt the
+element currently being read by the consumer.  Therefore, the unlock-lock
+pair between consecutive invocations of the consumer provides the necessary
+ordering between the read of the index indicating that the consumer has
+vacated a given element and the write by the producer to that same element.
 
 
 THE CONSUMER
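[To make the unlock-lock argument concrete, a sketch of two consecutive consumer invocations; the ACQUIRE/RELEASE annotations are a gloss on the new text, not part of the patch:]

	/* consumer invocation N */
	spin_lock(&consumer_lock);	/* ACQUIRE */
	/* ... reads the item at buffer[tail] ... */
	smp_mb();	/* finish reading descriptor before incrementing tail */
	ACCESS_ONCE(buffer->tail) = (tail + 1) & (buffer->size - 1);
	spin_unlock(&consumer_lock);	/* RELEASE: the tail update is ordered
					 * after the reads above */

	/* consumer invocation N+1 */
	spin_lock(&consumer_lock);	/* ACQUIRE: pairs with the RELEASE above,
					 * so everything invocation N did is
					 * ordered before what follows */

[Combined with the always-empty slot, a producer must observe two such tail updates before it can write to the element that invocation N was reading, which is the ordering the new paragraph relies on.]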
@@ -200,7 +205,7 @@ The consumer will look something like this:
 
 	if (CIRC_CNT(head, tail, buffer->size) >= 1) {
 		/* read index before reading contents at that index */
-		smp_read_barrier_depends();
+		smp_rmb();
 
 		/* extract one item from the buffer */
 		struct item *item = buffer[tail];
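[One way to read why the dependency barrier is insufficient here, as a gloss rather than the patch's own wording: buffer[tail] is addressed through the consumer's own tail index, not through the head value just loaded, so there is no data dependency for smp_read_barrier_depends() to order; only the control dependency of the if remains, and control dependencies do not order subsequent loads. A sketch:]

	unsigned long head = ACCESS_ONCE(buffer->head);
	unsigned long tail = buffer->tail;

	if (CIRC_CNT(head, tail, buffer->size) >= 1) {
		/* The element load below depends on tail, not on the
		 * head value just read; smp_read_barrier_depends() has
		 * no data dependency to order here, so a full read
		 * barrier is required instead. */
		smp_rmb();

		struct item *item = buffer[tail];
	}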
@@ -209,7 +214,7 @@ The consumer will look something like this:
 
 		smp_mb(); /* finish reading descriptor before incrementing tail */
 
-		buffer->tail = (tail + 1) & (buffer->size - 1);
+		ACCESS_ONCE(buffer->tail) = (tail + 1) & (buffer->size - 1);
 	}
 
 	spin_unlock(&consumer_lock);
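[Putting the consumer-side hunks together, a sketch of the consumer as it reads after this patch, again following the circular-buffers.txt example with consume_item() as its placeholder:]

	spin_lock(&consumer_lock);

	unsigned long head = ACCESS_ONCE(buffer->head);
	unsigned long tail = buffer->tail;

	if (CIRC_CNT(head, tail, buffer->size) >= 1) {
		/* read index before reading contents at that index */
		smp_rmb();

		/* extract one item from the buffer */
		struct item *item = buffer[tail];

		consume_item(item);

		smp_mb(); /* finish reading descriptor before incrementing tail */

		ACCESS_ONCE(buffer->tail) = (tail + 1) & (buffer->size - 1);
	}

	spin_unlock(&consumer_lock);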
@@ -223,7 +228,10 @@ Note the use of ACCESS_ONCE() in both algorithms to read the opposition index.
 This prevents the compiler from discarding and reloading its cached value -
 which some compilers will do across smp_read_barrier_depends().  This isn't
 strictly needed if you can be sure that the opposition index will _only_ be
-used the once.
+used the once.  Similarly, ACCESS_ONCE() is used in both algorithms to
+write the thread's index.  This documents the fact that we are writing
+to something that can be read concurrently and also prevents the compiler
+from tearing the store.
 
 
 ===============
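[On the store-tearing point: ACCESS_ONCE() was defined in include/linux/compiler.h of this era as a volatile cast:]

	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

[The volatile access obliges the compiler to perform the store whole and exactly once. Without it, a conforming compiler may split a plain store to buffer->tail into several narrower stores, letting a concurrent producer load a half-written index, which is the tearing the added text above warns about.]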