author    Linus Torvalds <torvalds@linux-foundation.org>  2013-09-03 21:19:21 -0400
committer Linus Torvalds <torvalds@linux-foundation.org>  2013-09-03 21:19:21 -0400
commit    9ee52a1633a77961cb7b7fb5bd40be682f8412c7 (patch)
tree      2b45df88a77cca6eaeac414653a852c3905dd514
parent    96d4e231d25e3d7d8b7a2a9267043eac5d4560a8 (diff)
parent    546d30c4a2e61a53d408e5f40d01278f144bb0f5 (diff)
Merge branch 'for-3.12' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq
Pull workqueue updates from Tejun Heo:
 "Nothing interesting.  All are doc / comment updates"

* 'for-3.12' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
  workqueue: Correct/Drop references to gcwq in Documentation
  workqueue: Fix manage_workers() RETURNS description
  workqueue: Comment correction in file header
  workqueue: mark WQ_NON_REENTRANT deprecated
 Documentation/workqueue.txt | 90
 include/linux/workqueue.h   |  7
 kernel/workqueue.c          | 14
 3 files changed, 57 insertions(+), 54 deletions(-)
diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt
index a6ab4b62d926..f81a65b54c29 100644
--- a/Documentation/workqueue.txt
+++ b/Documentation/workqueue.txt
@@ -85,32 +85,31 @@ workqueue.
 Special purpose threads, called worker threads, execute the functions
 off of the queue, one after the other.  If no work is queued, the
 worker threads become idle.  These worker threads are managed in so
-called thread-pools.
+called worker-pools.
 
 The cmwq design differentiates between the user-facing workqueues that
 subsystems and drivers queue work items on and the backend mechanism
-which manages thread-pools and processes the queued work items.
+which manages worker-pools and processes the queued work items.
 
-The backend is called gcwq.  There is one gcwq for each possible CPU
-and one gcwq to serve work items queued on unbound workqueues.  Each
-gcwq has two thread-pools - one for normal work items and the other
-for high priority ones.
+There are two worker-pools, one for normal work items and the other
+for high priority ones, for each possible CPU and some extra
+worker-pools to serve work items queued on unbound workqueues - the
+number of these backing pools is dynamic.
 
 Subsystems and drivers can create and queue work items through special
 workqueue API functions as they see fit.  They can influence some
 aspects of the way the work items are executed by setting flags on the
 workqueue they are putting the work item on.  These flags include
-things like CPU locality, reentrancy, concurrency limits, priority and
-more.  To get a detailed overview refer to the API description of
+things like CPU locality, concurrency limits, priority and more.  To
+get a detailed overview refer to the API description of
 alloc_workqueue() below.
 
-When a work item is queued to a workqueue, the target gcwq and
-thread-pool is determined according to the queue parameters and
-workqueue attributes and appended on the shared worklist of the
-thread-pool.  For example, unless specifically overridden, a work item
-of a bound workqueue will be queued on the worklist of either normal
-or highpri thread-pool of the gcwq that is associated to the CPU the
-issuer is running on.
+When a work item is queued to a workqueue, the target worker-pool is
+determined according to the queue parameters and workqueue attributes
+and appended on the shared worklist of the worker-pool.  For example,
+unless specifically overridden, a work item of a bound workqueue will
+be queued on the worklist of either normal or highpri worker-pool that
+is associated to the CPU the issuer is running on.
 
 For any worker pool implementation, managing the concurrency level
 (how many execution contexts are active) is an important issue.  cmwq
@@ -118,14 +117,14 @@ tries to keep the concurrency at a minimal but sufficient level.
 Minimal to save resources and sufficient in that the system is used at
 its full capacity.
 
-Each thread-pool bound to an actual CPU implements concurrency
-management by hooking into the scheduler.  The thread-pool is notified
+Each worker-pool bound to an actual CPU implements concurrency
+management by hooking into the scheduler.  The worker-pool is notified
 whenever an active worker wakes up or sleeps and keeps track of the
 number of the currently runnable workers.  Generally, work items are
 not expected to hog a CPU and consume many cycles.  That means
 maintaining just enough concurrency to prevent work processing from
 stalling should be optimal.  As long as there are one or more runnable
-workers on the CPU, the thread-pool doesn't start execution of a new
+workers on the CPU, the worker-pool doesn't start execution of a new
 work, but, when the last running worker goes to sleep, it immediately
 schedules a new worker so that the CPU doesn't sit idle while there
 are pending work items.  This allows using a minimal number of workers
@@ -135,19 +134,20 @@ Keeping idle workers around doesn't cost other than the memory space
 for kthreads, so cmwq holds onto idle ones for a while before killing
 them.
 
-For an unbound wq, the above concurrency management doesn't apply and
-the thread-pools for the pseudo unbound CPU try to start executing all
-work items as soon as possible.  The responsibility of regulating
-concurrency level is on the users.  There is also a flag to mark a
-bound wq to ignore the concurrency management.  Please refer to the
-API section for details.
+For unbound workqueues, the number of backing pools is dynamic.
+Unbound workqueue can be assigned custom attributes using
+apply_workqueue_attrs() and workqueue will automatically create
+backing worker pools matching the attributes.  The responsibility of
+regulating concurrency level is on the users.  There is also a flag to
+mark a bound wq to ignore the concurrency management.  Please refer to
+the API section for details.
 
 Forward progress guarantee relies on that workers can be created when
 more execution contexts are necessary, which in turn is guaranteed
 through the use of rescue workers.  All work items which might be used
 on code paths that handle memory reclaim are required to be queued on
 wq's that have a rescue-worker reserved for execution under memory
-pressure.  Else it is possible that the thread-pool deadlocks waiting
+pressure.  Else it is possible that the worker-pool deadlocks waiting
 for execution contexts to free up.
@@ -166,25 +166,15 @@ resources, scheduled and executed.
 
 @flags:
 
-	WQ_NON_REENTRANT
-
-	By default, a wq guarantees non-reentrance only on the same
-	CPU.  A work item may not be executed concurrently on the same
-	CPU by multiple workers but is allowed to be executed
-	concurrently on multiple CPUs.  This flag makes sure
-	non-reentrance is enforced across all CPUs.  Work items queued
-	to a non-reentrant wq are guaranteed to be executed by at most
-	one worker system-wide at any given time.
-
 	WQ_UNBOUND
 
-	Work items queued to an unbound wq are served by a special
-	gcwq which hosts workers which are not bound to any specific
-	CPU.  This makes the wq behave as a simple execution context
-	provider without concurrency management.  The unbound gcwq
-	tries to start execution of work items as soon as possible.
-	Unbound wq sacrifices locality but is useful for the following
-	cases.
+	Work items queued to an unbound wq are served by the special
+	worker-pools which host workers which are not bound to any
+	specific CPU.  This makes the wq behave as a simple execution
+	context provider without concurrency management.  The unbound
+	worker-pools try to start execution of work items as soon as
+	possible.  Unbound wq sacrifices locality but is useful for
+	the following cases.
 
 	* Wide fluctuation in the concurrency level requirement is
 	  expected and using bound wq may end up creating large number
@@ -209,10 +199,10 @@ resources, scheduled and executed.
 	WQ_HIGHPRI
 
 	Work items of a highpri wq are queued to the highpri
-	thread-pool of the target gcwq.  Highpri thread-pools are
+	worker-pool of the target cpu.  Highpri worker-pools are
 	served by worker threads with elevated nice level.
 
-	Note that normal and highpri thread-pools don't interact with
+	Note that normal and highpri worker-pools don't interact with
 	each other.  Each maintain its separate pool of workers and
 	implements concurrency management among its workers.
 
@@ -221,7 +211,7 @@ resources, scheduled and executed.
 	Work items of a CPU intensive wq do not contribute to the
 	concurrency level.  In other words, runnable CPU intensive
 	work items will not prevent other work items in the same
-	thread-pool from starting execution.  This is useful for bound
+	worker-pool from starting execution.  This is useful for bound
 	work items which are expected to hog CPU cycles so that their
 	execution is regulated by the system scheduler.
 
@@ -233,6 +223,10 @@ resources, scheduled and executed.
 
 	This flag is meaningless for unbound wq.
 
+Note that the flag WQ_NON_REENTRANT no longer exists as all workqueues
+are now non-reentrant - any work item is guaranteed to be executed by
+at most one worker system-wide at any given time.
+
 @max_active:
 
 @max_active determines the maximum number of execution contexts per
@@ -254,9 +248,9 @@ recommended.
 
 Some users depend on the strict execution ordering of ST wq.  The
 combination of @max_active of 1 and WQ_UNBOUND is used to achieve this
-behavior.  Work items on such wq are always queued to the unbound gcwq
-and only one work item can be active at any given time thus achieving
-the same ordering property as ST wq.
+behavior.  Work items on such wq are always queued to the unbound
+worker-pools and only one work item can be active at any given time thus
+achieving the same ordering property as ST wq.
 
 
 5. Example Execution Scenarios
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index a0ed78ab54d7..594521ba0d43 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -295,7 +295,12 @@ static inline unsigned int work_static(struct work_struct *work) { return 0; }
  * Documentation/workqueue.txt.
  */
 enum {
-	WQ_NON_REENTRANT	= 1 << 0, /* guarantee non-reentrance */
+	/*
+	 * All wqs are now non-reentrant making the following flag
+	 * meaningless.  Will be removed.
+	 */
+	WQ_NON_REENTRANT	= 1 << 0, /* DEPRECATED */
+
 	WQ_UNBOUND		= 1 << 1, /* not bound to any cpu */
 	WQ_FREEZABLE		= 1 << 2, /* freeze during suspend */
 	WQ_MEM_RECLAIM		= 1 << 3, /* may be used for memory reclaim */
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 5f8ee91abdff..29b79852a845 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -16,9 +16,10 @@
  *
  * This is the generic async execution mechanism.  Work items as are
  * executed in process context.  The worker pool is shared and
- * automatically managed.  There is one worker pool for each CPU and
- * one extra for works which are better served by workers which are
- * not bound to any specific CPU.
+ * automatically managed.  There are two worker pools for each CPU (one for
+ * normal work items and the other for high priority ones) and some extra
+ * pools for workqueues which are not bound to any specific CPU - the
+ * number of these backing pools is dynamic.
  *
  * Please read Documentation/workqueue.txt for details.
  */
@@ -2033,8 +2034,11 @@ static bool maybe_destroy_workers(struct worker_pool *pool)
  * multiple times.  Does GFP_KERNEL allocations.
  *
  * RETURNS:
- * spin_lock_irq(pool->lock) which may be released and regrabbed
- * multiple times.  Does GFP_KERNEL allocations.
+ * %false if the pool don't need management and the caller can safely start
+ * processing works, %true indicates that the function released pool->lock
+ * and reacquired it to perform some management function and that the
+ * conditions that the caller verified while holding the lock before
+ * calling the function might no longer be true.
  */
 static bool manage_workers(struct worker *worker)
 {