Diffstat (limited to 'Documentation/RCU/checklist.txt')
 Documentation/RCU/checklist.txt | 47 ++++++++++++++++++++++++++++++++++-------------
 1 file changed, 34 insertions(+), 13 deletions(-)
diff --git a/Documentation/RCU/checklist.txt b/Documentation/RCU/checklist.txt
index b3a568abe6b1..8f3fb77c9cd3 100644
--- a/Documentation/RCU/checklist.txt
+++ b/Documentation/RCU/checklist.txt
@@ -32,7 +32,10 @@ over a rather long period of time, but improvements are always welcome!
 	them -- even x86 allows reads to be reordered), and be prepared
 	to explain why this added complexity is worthwhile.  If you
 	choose #c, be prepared to explain how this single task does not
-	become a major bottleneck on big multiprocessor machines.
+	become a major bottleneck on big multiprocessor machines (for
+	example, if the task is updating information relating to itself
+	that other tasks can read, there by definition can be no
+	bottleneck).
 
 2.	Do the RCU read-side critical sections make proper use of
 	rcu_read_lock() and friends?  These primitives are needed
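
As an illustration of item 2 above (not itself part of this patch), here is a minimal read-side sketch using the list_for_each_entry_rcu() traversal discussed later in the patch.  The names struct foo, foo_list, and foo_find_data() are hypothetical; the usual <linux/list.h>, <linux/rcupdate.h>, <linux/slab.h>, and <linux/spinlock.h> declarations are assumed.

	struct foo {
		struct list_head list;
		struct rcu_head rcu;	/* for call_rcu() in the update-side sketch below */
		int key;
		int data;
	};

	static LIST_HEAD(foo_list);

	/* Reader: the entire traversal sits inside rcu_read_lock()/rcu_read_unlock(). */
	static int foo_find_data(int key)
	{
		struct foo *p;
		int ret = -1;

		rcu_read_lock();
		list_for_each_entry_rcu(p, &foo_list, list) {
			if (p->key == key) {
				ret = p->data;
				break;
			}
		}
		rcu_read_unlock();
		return ret;
	}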
@@ -89,27 +92,34 @@ over a rather long period of time, but improvements are always welcome!
89 "_rcu()" list-traversal primitives, such as the 92 "_rcu()" list-traversal primitives, such as the
90 list_for_each_entry_rcu(). 93 list_for_each_entry_rcu().
91 94
92 b. If the list macros are being used, the list_del_rcu(), 95 b. If the list macros are being used, the list_add_tail_rcu()
93 list_add_tail_rcu(), and list_del_rcu() primitives must 96 and list_add_rcu() primitives must be used in order
94 be used in order to prevent weakly ordered machines from 97 to prevent weakly ordered machines from misordering
95 misordering structure initialization and pointer planting. 98 structure initialization and pointer planting.
96 Similarly, if the hlist macros are being used, the 99 Similarly, if the hlist macros are being used, the
97 hlist_del_rcu() and hlist_add_head_rcu() primitives 100 hlist_add_head_rcu() primitive is required.
98 are required.
99 101
100 c. Updates must ensure that initialization of a given 102 c. If the list macros are being used, the list_del_rcu()
103 primitive must be used to keep list_del()'s pointer
104 poisoning from inflicting toxic effects on concurrent
105 readers. Similarly, if the hlist macros are being used,
106 the hlist_del_rcu() primitive is required.
107
108 The list_replace_rcu() primitive may be used to
109 replace an old structure with a new one in an
110 RCU-protected list.
111
112 d. Updates must ensure that initialization of a given
101 structure happens before pointers to that structure are 113 structure happens before pointers to that structure are
102 publicized. Use the rcu_assign_pointer() primitive 114 publicized. Use the rcu_assign_pointer() primitive
103 when publicizing a pointer to a structure that can 115 when publicizing a pointer to a structure that can
104 be traversed by an RCU read-side critical section. 116 be traversed by an RCU read-side critical section.
105 117
106 [The rcu_assign_pointer() primitive is in process.]
107
1085. If call_rcu(), or a related primitive such as call_rcu_bh(), 1185. If call_rcu(), or a related primitive such as call_rcu_bh(),
109 is used, the callback function must be written to be called 119 is used, the callback function must be written to be called
110 from softirq context. In particular, it cannot block. 120 from softirq context. In particular, it cannot block.
111 121
1126. Since synchronize_kernel() blocks, it cannot be called from 1226. Since synchronize_rcu() can block, it cannot be called from
113 any sort of irq context. 123 any sort of irq context.
114 124
1157. If the updater uses call_rcu(), then the corresponding readers 1257. If the updater uses call_rcu(), then the corresponding readers
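
Items b, c, and 5 above fit together roughly as in the following update-side sketch (again hypothetical and not part of the patch: foo_lock is an assumed update-side spinlock serializing writers, and foo_free_rcu() runs from softirq context, so it must not block):

	static DEFINE_SPINLOCK(foo_lock);	/* serializes updaters only */

	static void foo_free_rcu(struct rcu_head *head)
	{
		/* Softirq context (item 5): kfree() is fine, sleeping is not. */
		kfree(container_of(head, struct foo, rcu));
	}

	static void foo_add(struct foo *new)
	{
		/* Caller has fully initialized *new. */
		spin_lock(&foo_lock);
		list_add_rcu(&new->list, &foo_list);	/* item b: RCU-aware pointer planting */
		spin_unlock(&foo_lock);
	}

	static void foo_replace(struct foo *old, struct foo *new)
	{
		spin_lock(&foo_lock);
		list_replace_rcu(&old->list, &new->list);
		spin_unlock(&foo_lock);
		call_rcu(&old->rcu, foo_free_rcu);	/* item c: defer freeing past readers */
	}

The _rcu() insertion primitives contain the memory barrier that item b calls for, so no explicit smp_wmb() is needed in foo_add() or foo_replace().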
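
Item d covers bare pointers that are not managed through the list macros.  A sketch, assuming a hypothetical global pointer foo_global and a caller that has just allocated a struct foo:

	struct foo *foo_global;			/* hypothetical RCU-protected pointer */

	static void foo_publish(struct foo *p)
	{
		p->key = 1;			/* fully initialize first... */
		p->data = 42;
		rcu_assign_pointer(foo_global, p);	/* ...then publish (item d) */
	}

	static int foo_read_global(void)
	{
		struct foo *p;
		int data = -1;

		rcu_read_lock();
		p = rcu_dereference(foo_global);	/* ordering-safe fetch of the pointer */
		if (p)
			data = p->data;
		rcu_read_unlock();
		return data;
	}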
@@ -125,9 +135,9 @@ over a rather long period of time, but improvements are always welcome!
 	such cases is a must, of course!  And the jury is still out on
 	whether the increased speed is worth it.
 
-8.	Although synchronize_kernel() is a bit slower than is call_rcu(),
+8.	Although synchronize_rcu() is a bit slower than is call_rcu(),
 	it usually results in simpler code.  So, unless update performance
-	is important or the updaters cannot block, synchronize_kernel()
+	is important or the updaters cannot block, synchronize_rcu()
 	should be used in preference to call_rcu().
 
 9.	All RCU list-traversal primitives, which include
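
The simpler, blocking style recommended by item 8 turns the call_rcu()-based foo_replace() sketch above into something like the following (hypothetical, and only usable where the updater may sleep):

	static void foo_del(struct foo *p)
	{
		spin_lock(&foo_lock);
		list_del_rcu(&p->list);
		spin_unlock(&foo_lock);
		synchronize_rcu();	/* wait for all pre-existing readers to finish */
		kfree(p);		/* no reader can still hold a reference */
	}

Compared with foo_replace(), no rcu_head and no callback function are needed, at the cost of blocking the updater for a grace period.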
@@ -155,3 +165,14 @@ over a rather long period of time, but improvements are always welcome!
 	you -must- use the "_rcu()" variants of the list macros.
 	Failing to do so will break Alpha and confuse people reading
 	your code.
+
+11.	Note that synchronize_rcu() -only- guarantees to wait until
+	all currently executing rcu_read_lock()-protected RCU read-side
+	critical sections complete.  It does -not- necessarily guarantee
+	that all currently running interrupts, NMIs, preempt_disable()
+	code, or idle loops will complete.  Therefore, if you do not have
+	rcu_read_lock()-protected read-side critical sections, do -not-
+	use synchronize_rcu().
+
+	If you want to wait for some of these other things, you might
+	instead need to use synchronize_irq() or synchronize_sched().
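
As a sketch of the situation item 11 warns about (hypothetical, and deliberately not using rcu_read_lock()): a reader that relies on preempt_disable() needs its updater to wait with synchronize_sched() rather than synchronize_rcu():

	static int foo_read_no_rcu_lock(void)
	{
		struct foo *p;
		int data = -1;

		preempt_disable();		/* no rcu_read_lock() here */
		p = rcu_dereference(foo_global);
		if (p)
			data = p->data;
		preempt_enable();
		return data;
	}

	static void foo_retire_global(void)
	{
		struct foo *old;

		spin_lock(&foo_lock);
		old = foo_global;
		foo_global = NULL;
		spin_unlock(&foo_lock);
		synchronize_sched();	/* item 11: do not rely on synchronize_rcu() here */
		kfree(old);
	}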