author     Jonathan Corbet <corbet@lwn.net>   2015-03-27 12:16:35 -0400
committer  Jonathan Corbet <corbet@lwn.net>   2015-04-04 09:20:26 -0400
commit     7085f6c354e1d0b1cc6efafc1389dc63f8b0699a (patch)
tree       db5ccc72b950e796588bed7fd9ded3245be11c85
parent     4988aaa6e508614e5d4c4f08723635fc8191188b (diff)
docs/completion.txt: Various tweaks and corrections
Mostly language improvements to the new completions.txt document, but there
is also a semantic correction in the description of completion_done() at
the very end.
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-rw-r--r--   Documentation/scheduler/completion.txt   59
1 file changed, 30 insertions, 29 deletions
diff --git a/Documentation/scheduler/completion.txt b/Documentation/scheduler/completion.txt
index 083d9c931b8d..2622bc7a188b 100644
--- a/Documentation/scheduler/completion.txt
+++ b/Documentation/scheduler/completion.txt
@@ -7,21 +7,21 @@ Introduction:
7 | ------------- | 7 | ------------- |
8 | 8 | ||
9 | If you have one or more threads of execution that must wait for some process | 9 | If you have one or more threads of execution that must wait for some process |
10 | to have reached a point or a specific state, completions can provide a race | 10 | to have reached a point or a specific state, completions can provide a |
11 | free solution to this problem. Semantically they are somewhat like a | 11 | race-free solution to this problem. Semantically they are somewhat like a |
12 | pthread_barriers and have similar use-cases. | 12 | pthread_barrier and have similar use-cases. |
13 | 13 | ||
14 | Completions are a code synchronization mechanism which are preferable to any | 14 | Completions are a code synchronization mechanism which is preferable to any |
15 | misuse of locks. Any time you think of using yield() or some quirky | 15 | misuse of locks. Any time you think of using yield() or some quirky |
16 | msleep(1); loop to allow something else to proceed, you probably want to | 16 | msleep(1) loop to allow something else to proceed, you probably want to |
17 | look into using one of the wait_for_completion*() calls instead. The | 17 | look into using one of the wait_for_completion*() calls instead. The |
18 | advantage of using completions is clear intent of the code, but also more | 18 | advantage of using completions is clear intent of the code, but also more |
19 | efficient code as both threads can continue until the result is actually | 19 | efficient code as both threads can continue until the result is actually |
20 | needed. | 20 | needed. |
21 | 21 | ||
22 | Completions are built on top of the generic event infrastructure in Linux, | 22 | Completions are built on top of the generic event infrastructure in Linux, |
23 | with the event reduced to a simple flag appropriately called "done" in | 23 | with the event reduced to a simple flag (appropriately called "done") in |
24 | struct completion, that tells the waiting threads of execution if they | 24 | struct completion that tells the waiting threads of execution if they |
25 | can continue safely. | 25 | can continue safely. |
26 | 26 | ||
27 | As completions are scheduling related, the code is found in | 27 | As completions are scheduling related, the code is found in |
@@ -73,7 +73,7 @@ the default state to "not available", that is, "done" is set to 0.
73 | 73 | ||
74 | The re-initialization function, reinit_completion(), simply resets the | 74 | The re-initialization function, reinit_completion(), simply resets the |
75 | done element to "not available", thus again to 0, without touching the | 75 | done element to "not available", thus again to 0, without touching the |
76 | wait queue. Calling init_completion() on the same completion object is | 76 | wait queue. Calling init_completion() twice on the same completion object is |
77 | most likely a bug as it re-initializes the queue to an empty queue and | 77 | most likely a bug as it re-initializes the queue to an empty queue and |
78 | enqueued tasks could get "lost" - use reinit_completion() in that case. | 78 | enqueued tasks could get "lost" - use reinit_completion() in that case. |
79 | 79 | ||
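
For illustration, a minimal sketch of reusing one completion object across several rounds of work; more_batches() and submit_batch() are hypothetical helpers, not part of the completion API:

	struct completion batch_done;

	init_completion(&batch_done);		/* sets done to 0 and initializes the wait queue */

	while (more_batches()) {
		reinit_completion(&batch_done);	/* reset done to 0; the wait queue is left alone */
		submit_batch(&batch_done);	/* the worker calls complete(&batch_done) when finished */
		wait_for_completion(&batch_done);
	}

Calling init_completion() inside the loop instead would re-create the wait queue on every pass, which is exactly the bug the paragraph above warns about.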
@@ -106,7 +106,7 @@ For a thread of execution to wait for some concurrent work to finish, it
106 | calls wait_for_completion() on the initialized completion structure. | 106 | calls wait_for_completion() on the initialized completion structure. |
107 | A typical usage scenario is: | 107 | A typical usage scenario is: |
108 | 108 | ||
109 | structure completion setup_done; | 109 | struct completion setup_done; |
110 | init_completion(&setup_done); | 110 | init_completion(&setup_done); |
111 | initialize_work(...,&setup_done,...) | 111 | initialize_work(...,&setup_done,...) |
112 | 112 | ||
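
Filling in the scenario above, a rough end-to-end sketch might look like the following; how the completion is handed to the worker is left as a placeholder, just as in the document:

	struct completion setup_done;

	init_completion(&setup_done);
	/* hand &setup_done to whatever performs the setup, e.g. via its argument struct */

	/* waiting side: sleeps (uninterruptibly) until setup has been signaled */
	wait_for_completion(&setup_done);

	/* completing side, once everything the waiter depends on is ready: */
	complete(&setup_done);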
@@ -120,16 +120,16 @@ to wait_for_completion() then the waiting side simply will continue
120 | immediately as all dependencies are satisfied if not it will block until | 120 | immediately as all dependencies are satisfied if not it will block until |
121 | completion is signaled by complete(). | 121 | completion is signaled by complete(). |
122 | 122 | ||
123 | Note that wait_for_completion() is calling spin_lock_irq/spin_unlock_irq | 123 | Note that wait_for_completion() is calling spin_lock_irq()/spin_unlock_irq(), |
124 | so it can only be called safely when you know that interrupts are enabled. | 124 | so it can only be called safely when you know that interrupts are enabled. |
125 | Calling it from hard-irq or irqs-off atomic contexts will result in hard | 125 | Calling it from hard-irq or irqs-off atomic contexts will result in |
126 | to detect spurious enabling of interrupts. | 126 | hard-to-detect spurious enabling of interrupts. |
127 | 127 | ||
128 | wait_for_completion(): | 128 | wait_for_completion(): |
129 | 129 | ||
130 | void wait_for_completion(struct completion *done): | 130 | void wait_for_completion(struct completion *done): |
131 | 131 | ||
132 | The default behavior is to wait without a timeout and mark the task as | 132 | The default behavior is to wait without a timeout and to mark the task as |
133 | uninterruptible. wait_for_completion() and its variants are only safe | 133 | uninterruptible. wait_for_completion() and its variants are only safe |
134 | in process context (as they can sleep) but not in atomic context, | 134 | in process context (as they can sleep) but not in atomic context, |
135 | interrupt context, with disabled irqs. or preemption is disabled - see also | 135 | interrupt context, with disabled irqs. or preemption is disabled - see also |
@@ -159,28 +159,29 @@ probably not what you want.
159 | int wait_for_completion_interruptible(struct completion *done) | 159 | int wait_for_completion_interruptible(struct completion *done) |
160 | 160 | ||
161 | This function marks the task TASK_INTERRUPTIBLE. If a signal was received | 161 | This function marks the task TASK_INTERRUPTIBLE. If a signal was received |
162 | while waiting it will return -ERESTARTSYS and 0 otherwise. | 162 | while waiting it will return -ERESTARTSYS; 0 otherwise. |
163 | 163 | ||
164 | unsigned long wait_for_completion_timeout(struct completion *done, | 164 | unsigned long wait_for_completion_timeout(struct completion *done, |
165 | unsigned long timeout) | 165 | unsigned long timeout) |
166 | 166 | ||
167 | The task is marked as TASK_UNINTERRUPTIBLE and will wait at most 'timeout' | 167 | The task is marked as TASK_UNINTERRUPTIBLE and will wait at most 'timeout' |
168 | (in jiffies). If timeout occurs it returns 0 else the remaining time in | 168 | (in jiffies). If timeout occurs it returns 0 else the remaining time in |
169 | jiffies (but at least 1). Timeouts are preferably passed by msecs_to_jiffies() | 169 | jiffies (but at least 1). Timeouts are preferably calculated with |
170 | or usecs_to_jiffies(). If the returned timeout value is deliberately ignored | 170 | msecs_to_jiffies() or usecs_to_jiffies(). If the returned timeout value is |
171 | a comment should probably explain why (e.g. see drivers/mfd/wm8350-core.c | 171 | deliberately ignored a comment should probably explain why (e.g. see |
172 | wm8350_read_auxadc()) | 172 | drivers/mfd/wm8350-core.c wm8350_read_auxadc()) |
173 | 173 | ||
174 | long wait_for_completion_interruptible_timeout( | 174 | long wait_for_completion_interruptible_timeout( |
175 | struct completion *done, unsigned long timeout) | 175 | struct completion *done, unsigned long timeout) |
176 | 176 | ||
177 | This function passes a timeout in jiffies and marking the task as | 177 | This function passes a timeout in jiffies and marks the task as |
178 | TASK_INTERRUPTIBLE. If a signal was received it will return -ERESTARTSYS, 0 if | 178 | TASK_INTERRUPTIBLE. If a signal was received it will return -ERESTARTSYS; |
179 | completion timed out and the remaining time in jiffies if completion occurred. | 179 | otherwise it returns 0 if the completion timed out or the remaining time in |
180 | jiffies if completion occurred. | ||
180 | 181 | ||
181 | Further variants include _killable which passes TASK_KILLABLE as the | 182 | Further variants include _killable which uses TASK_KILLABLE as the |
182 | designated tasks state and will return -ERESTARTSYS if interrupted or | 183 | designated tasks state and will return -ERESTARTSYS if it is interrupted or |
183 | else 0 if completion was achieved as well as a _timeout variant. | 184 | else 0 if completion was achieved. There is a _timeout variant as well: |
184 | 185 | ||
185 | long wait_for_completion_killable(struct completion *done) | 186 | long wait_for_completion_killable(struct completion *done) |
186 | long wait_for_completion_killable_timeout(struct completion *done, | 187 | long wait_for_completion_killable_timeout(struct completion *done, |
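
To show how the return conventions described above are typically checked, a hedged sketch follows; the 100ms timeout and the -ETIMEDOUT mapping are arbitrary choices for illustration, not something the document prescribes:

	unsigned long left;
	int ret;

	/* timeout variant: 0 means the wait timed out, otherwise >= 1 jiffies remained */
	left = wait_for_completion_timeout(&setup_done, msecs_to_jiffies(100));
	if (!left)
		return -ETIMEDOUT;

	/* interruptible variant: -ERESTARTSYS if a signal arrived, 0 on completion */
	ret = wait_for_completion_interruptible(&setup_done);
	if (ret)
		return ret;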
@@ -232,14 +233,14 @@ try_wait_for_completion()/completion_done():
232 | 233 | ||
233 | The try_wait_for_completion() function will not put the thread on the wait | 234 | The try_wait_for_completion() function will not put the thread on the wait |
234 | queue but rather returns false if it would need to enqueue (block) the thread, | 235 | queue but rather returns false if it would need to enqueue (block) the thread, |
235 | else it consumes any posted completions and returns true. | 236 | else it consumes one posted completion and returns true. |
236 | 237 | ||
237 | bool try_wait_for_completion(struct completion *done) | 238 | bool try_wait_for_completion(struct completion *done) |
238 | 239 | ||
239 | Finally to check state of a completion without changing it in any way is | 240 | Finally, to check the state of a completion without changing it in any way, |
240 | provided by completion_done() returning false if there is any posted | 241 | call completion_done(), which returns false if there are no posted |
241 | completion that was not yet consumed by waiters implying that there are | 242 | completions that were not yet consumed by waiters (implying that there are |
242 | waiters and true otherwise; | 243 | waiters) and true otherwise; |
243 | 244 | ||
244 | bool completion_done(struct completion *done) | 245 | bool completion_done(struct completion *done) |
245 | 246 | ||
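
A short sketch of the two non-blocking helpers discussed above; the surrounding context and finish_setup() are invented for illustration:

	/* consume one posted completion if available; never blocks */
	if (try_wait_for_completion(&setup_done))
		finish_setup();			/* hypothetical follow-up work */

	/* only inspect the state: false means nothing has been posted (yet) */
	if (!completion_done(&setup_done))
		pr_debug("setup still pending\n");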