commit 9ee52a1633a77961cb7b7fb5bd40be682f8412c7
tree 2b45df88a77cca6eaeac414653a852c3905dd514
parent 96d4e231d25e3d7d8b7a2a9267043eac5d4560a8
parent 546d30c4a2e61a53d408e5f40d01278f144bb0f5
author Linus Torvalds <torvalds@linux-foundation.org> 2013-09-03 21:19:21 -0400
committer Linus Torvalds <torvalds@linux-foundation.org> 2013-09-03 21:19:21 -0400
Merge branch 'for-3.12' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq
Pull workqueue updates from Tejun Heo:
"Nothing interesting. All are doc / comment updates"
* 'for-3.12' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
workqueue: Correct/Drop references to gcwq in Documentation
workqueue: Fix manage_workers() RETURNS description
workqueue: Comment correction in file header
workqueue: mark WQ_NON_REENTRANT deprecated
 Documentation/workqueue.txt | 90
 include/linux/workqueue.h   |  7
 kernel/workqueue.c          | 14
 3 files changed, 57 insertions(+), 54 deletions(-)
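The WQ_NON_REENTRANT deprecation in this pull is purely a flag removal for users: every workqueue now provides the system-wide non-reentrancy the flag used to request. A minimal conversion sketch, with a hypothetical driver name not taken from this commit (kernel-context code, not buildable standalone):

```c
#include <linux/workqueue.h>

static struct workqueue_struct *foo_wq;

static int __init foo_init(void)
{
	/*
	 * Previously: alloc_workqueue("foo", WQ_NON_REENTRANT, 0);
	 * The flag is now a no-op - all workqueues are non-reentrant -
	 * so callers can simply drop it.
	 */
	foo_wq = alloc_workqueue("foo", 0, 0);
	if (!foo_wq)
		return -ENOMEM;
	return 0;
}
```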
diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt
index a6ab4b62d926..f81a65b54c29 100644
--- a/Documentation/workqueue.txt
+++ b/Documentation/workqueue.txt
@@ -85,32 +85,31 @@ workqueue.
 Special purpose threads, called worker threads, execute the functions
 off of the queue, one after the other. If no work is queued, the
 worker threads become idle. These worker threads are managed in so
-called thread-pools.
+called worker-pools.
 
 The cmwq design differentiates between the user-facing workqueues that
 subsystems and drivers queue work items on and the backend mechanism
-which manages thread-pools and processes the queued work items.
+which manages worker-pools and processes the queued work items.
 
-The backend is called gcwq. There is one gcwq for each possible CPU
-and one gcwq to serve work items queued on unbound workqueues. Each
-gcwq has two thread-pools - one for normal work items and the other
-for high priority ones.
+There are two worker-pools, one for normal work items and the other
+for high priority ones, for each possible CPU and some extra
+worker-pools to serve work items queued on unbound workqueues - the
+number of these backing pools is dynamic.
 
 Subsystems and drivers can create and queue work items through special
 workqueue API functions as they see fit. They can influence some
 aspects of the way the work items are executed by setting flags on the
 workqueue they are putting the work item on. These flags include
-things like CPU locality, reentrancy, concurrency limits, priority and
-more. To get a detailed overview refer to the API description of
+things like CPU locality, concurrency limits, priority and more. To
+get a detailed overview refer to the API description of
 alloc_workqueue() below.
 
-When a work item is queued to a workqueue, the target gcwq and
-thread-pool is determined according to the queue parameters and
-workqueue attributes and appended on the shared worklist of the
-thread-pool. For example, unless specifically overridden, a work item
-of a bound workqueue will be queued on the worklist of either normal
-or highpri thread-pool of the gcwq that is associated to the CPU the
-issuer is running on.
+When a work item is queued to a workqueue, the target worker-pool is
+determined according to the queue parameters and workqueue attributes
+and appended on the shared worklist of the worker-pool. For example,
+unless specifically overridden, a work item of a bound workqueue will
+be queued on the worklist of either normal or highpri worker-pool that
+is associated to the CPU the issuer is running on.
 
 For any worker pool implementation, managing the concurrency level
 (how many execution contexts are active) is an important issue. cmwq
@@ -118,14 +117,14 @@ tries to keep the concurrency at a minimal but sufficient level.
 Minimal to save resources and sufficient in that the system is used at
 its full capacity.
 
-Each thread-pool bound to an actual CPU implements concurrency
-management by hooking into the scheduler. The thread-pool is notified
+Each worker-pool bound to an actual CPU implements concurrency
+management by hooking into the scheduler. The worker-pool is notified
 whenever an active worker wakes up or sleeps and keeps track of the
 number of the currently runnable workers. Generally, work items are
 not expected to hog a CPU and consume many cycles. That means
 maintaining just enough concurrency to prevent work processing from
 stalling should be optimal. As long as there are one or more runnable
-workers on the CPU, the thread-pool doesn't start execution of a new
+workers on the CPU, the worker-pool doesn't start execution of a new
 work, but, when the last running worker goes to sleep, it immediately
 schedules a new worker so that the CPU doesn't sit idle while there
 are pending work items. This allows using a minimal number of workers
@@ -135,19 +134,20 @@ Keeping idle workers around doesn't cost other than the memory space
 for kthreads, so cmwq holds onto idle ones for a while before killing
 them.
 
-For an unbound wq, the above concurrency management doesn't apply and
-the thread-pools for the pseudo unbound CPU try to start executing all
-work items as soon as possible. The responsibility of regulating
-concurrency level is on the users. There is also a flag to mark a
-bound wq to ignore the concurrency management. Please refer to the
-API section for details.
+For unbound workqueues, the number of backing pools is dynamic.
+Unbound workqueue can be assigned custom attributes using
+apply_workqueue_attrs() and workqueue will automatically create
+backing worker pools matching the attributes. The responsibility of
+regulating concurrency level is on the users. There is also a flag to
+mark a bound wq to ignore the concurrency management. Please refer to
+the API section for details.
 
 Forward progress guarantee relies on that workers can be created when
 more execution contexts are necessary, which in turn is guaranteed
 through the use of rescue workers. All work items which might be used
 on code paths that handle memory reclaim are required to be queued on
 wq's that have a rescue-worker reserved for execution under memory
-pressure. Else it is possible that the thread-pool deadlocks waiting
+pressure. Else it is possible that the worker-pool deadlocks waiting
 for execution contexts to free up.
 
 
@@ -166,25 +166,15 @@ resources, scheduled and executed.
 
 @flags:
 
-WQ_NON_REENTRANT
-
-By default, a wq guarantees non-reentrance only on the same
-CPU. A work item may not be executed concurrently on the same
-CPU by multiple workers but is allowed to be executed
-concurrently on multiple CPUs. This flag makes sure
-non-reentrance is enforced across all CPUs. Work items queued
-to a non-reentrant wq are guaranteed to be executed by at most
-one worker system-wide at any given time.
-
 WQ_UNBOUND
 
-Work items queued to an unbound wq are served by a special
-gcwq which hosts workers which are not bound to any specific
-CPU. This makes the wq behave as a simple execution context
-provider without concurrency management. The unbound gcwq
-tries to start execution of work items as soon as possible.
-Unbound wq sacrifices locality but is useful for the following
-cases.
+Work items queued to an unbound wq are served by the special
+worker-pools which host workers which are not bound to any
+specific CPU. This makes the wq behave as a simple execution
+context provider without concurrency management. The unbound
+worker-pools try to start execution of work items as soon as
+possible. Unbound wq sacrifices locality but is useful for
+the following cases.
 
 * Wide fluctuation in the concurrency level requirement is
 expected and using bound wq may end up creating large number
@@ -209,10 +199,10 @@ resources, scheduled and executed.
 WQ_HIGHPRI
 
 Work items of a highpri wq are queued to the highpri
-thread-pool of the target gcwq. Highpri thread-pools are
+worker-pool of the target cpu. Highpri worker-pools are
 served by worker threads with elevated nice level.
 
-Note that normal and highpri thread-pools don't interact with
+Note that normal and highpri worker-pools don't interact with
 each other. Each maintain its separate pool of workers and
 implements concurrency management among its workers.
 
@@ -221,7 +211,7 @@ resources, scheduled and executed.
 Work items of a CPU intensive wq do not contribute to the
 concurrency level. In other words, runnable CPU intensive
 work items will not prevent other work items in the same
-thread-pool from starting execution. This is useful for bound
+worker-pool from starting execution. This is useful for bound
 work items which are expected to hog CPU cycles so that their
 execution is regulated by the system scheduler.
 
@@ -233,6 +223,10 @@ resources, scheduled and executed.
 
 This flag is meaningless for unbound wq.
 
+Note that the flag WQ_NON_REENTRANT no longer exists as all workqueues
+are now non-reentrant - any work item is guaranteed to be executed by
+at most one worker system-wide at any given time.
+
 @max_active:
 
 @max_active determines the maximum number of execution contexts per
@@ -254,9 +248,9 @@ recommended.
 
 Some users depend on the strict execution ordering of ST wq. The
 combination of @max_active of 1 and WQ_UNBOUND is used to achieve this
-behavior. Work items on such wq are always queued to the unbound gcwq
-and only one work item can be active at any given time thus achieving
-the same ordering property as ST wq.
+behavior. Work items on such wq are always queued to the unbound
+worker-pools and only one work item can be active at any given time thus
+achieving the same ordering property as ST wq.
 
 
 5. Example Execution Scenarios
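The ordering guarantee described in the hunk above (WQ_UNBOUND plus @max_active of 1) can be sketched as follows; the workqueue name is hypothetical and error handling is trimmed (kernel-context code, not buildable standalone):

```c
#include <linux/workqueue.h>

/*
 * WQ_UNBOUND + @max_active == 1: all work items go to the unbound
 * worker-pools and at most one can be active at a time, giving the
 * same strict ordering as a single-threaded wq.
 */
static struct workqueue_struct *ordered_wq;

static int ordered_setup(void)
{
	ordered_wq = alloc_workqueue("example_ordered", WQ_UNBOUND, 1);
	return ordered_wq ? 0 : -ENOMEM;
}
```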
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index a0ed78ab54d7..594521ba0d43 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -295,7 +295,12 @@ static inline unsigned int work_static(struct work_struct *work) { return 0; }
  * Documentation/workqueue.txt.
  */
 enum {
-	WQ_NON_REENTRANT	= 1 << 0, /* guarantee non-reentrance */
+	/*
+	 * All wqs are now non-reentrant making the following flag
+	 * meaningless. Will be removed.
+	 */
+	WQ_NON_REENTRANT	= 1 << 0, /* DEPRECATED */
+
 	WQ_UNBOUND		= 1 << 1, /* not bound to any cpu */
 	WQ_FREEZABLE		= 1 << 2, /* freeze during suspend */
 	WQ_MEM_RECLAIM		= 1 << 3, /* may be used for memory reclaim */
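The documentation hunk earlier mentions apply_workqueue_attrs() creating backing pools to match custom attributes on an unbound wq. A hedged sketch of that API against the 3.12-era signatures; the function name and attribute values are illustrative only (kernel-context code, not buildable standalone):

```c
#include <linux/workqueue.h>

/*
 * Sketch: request a custom nice level for an unbound wq; the workqueue
 * core creates (or reuses) a worker-pool matching the attributes.
 * Assumes the 3.12-era alloc_workqueue_attrs(gfp_t) signature.
 */
static int example_apply_attrs(struct workqueue_struct *unbound_wq)
{
	struct workqueue_attrs *attrs;
	int ret;

	attrs = alloc_workqueue_attrs(GFP_KERNEL);
	if (!attrs)
		return -ENOMEM;
	attrs->nice = -20;	/* elevated priority, akin to WQ_HIGHPRI */
	ret = apply_workqueue_attrs(unbound_wq, attrs);
	free_workqueue_attrs(attrs);
	return ret;
}
```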
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 5f8ee91abdff..29b79852a845 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -16,9 +16,10 @@
 *
 * This is the generic async execution mechanism. Work items as are
 * executed in process context. The worker pool is shared and
-* automatically managed. There is one worker pool for each CPU and
-* one extra for works which are better served by workers which are
-* not bound to any specific CPU.
+* automatically managed. There are two worker pools for each CPU (one for
+* normal work items and the other for high priority ones) and some extra
+* pools for workqueues which are not bound to any specific CPU - the
+* number of these backing pools is dynamic.
 *
 * Please read Documentation/workqueue.txt for details.
 */
@@ -2033,8 +2034,11 @@ static bool maybe_destroy_workers(struct worker_pool *pool)
 * multiple times. Does GFP_KERNEL allocations.
 *
 * RETURNS:
-* spin_lock_irq(pool->lock) which may be released and regrabbed
-* multiple times. Does GFP_KERNEL allocations.
+* %false if the pool doesn't need management and the caller can safely start
+* processing works, %true indicates that the function released pool->lock
+* and reacquired it to perform some management function and that the
+* conditions that the caller verified while holding the lock before
+* calling the function might no longer be true.
 */
static bool manage_workers(struct worker *worker)
{
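The corrected RETURNS text describes the contract the worker loop in kernel/workqueue.c relies on. A simplified fragment of that pattern, not a standalone function, roughly following the shape of worker_thread():

```c
	/* In the worker loop, with pool->lock held: */
recheck:
	if (!need_more_worker(pool))
		goto sleep;

	/*
	 * manage_workers() returning %true means pool->lock was released
	 * and reacquired, so every condition checked above must be
	 * re-verified before processing work items.
	 */
	if (unlikely(!may_start_working(pool)) && manage_workers(worker))
		goto recheck;
```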