author    Oleg Nesterov <oleg@tv-sign.ru>    2007-05-09 05:33:52 -0400
committer Linus Torvalds <torvalds@woody.linux-foundation.org>    2007-05-09 15:30:50 -0400
commit    b89deed32ccc96098bd6bc953c64bba6b847774f (patch)
tree      7a5963bbc5203cfdb39bf2fb1204764df39c71db /include/linux
parent    fc2e4d70410546307344821eed6fd23803a45286 (diff)
implement flush_work()
A basic problem with flush_scheduled_work() is that it blocks behind _all_
presently-queued works, rather than just the work which the caller wants to
flush.  If the caller holds some lock, and if one of the queued works happens
to want that lock as well, then accidental deadlocks can occur.

One example of this is the phy layer: it wants to flush work while holding
rtnl_lock().  But if a linkwatch event happens to be queued, the phy code will
deadlock because the linkwatch callback function takes rtnl_lock.

So we implement a new function which will flush a *single* work - just the one
which the caller wants to free up.  Thus we avoid the accidental deadlocks
which can arise from unrelated subsystems' callbacks taking shared locks.

flush_work() non-blockingly dequeues the work_struct which we want to kill,
then it waits for its handler to complete on all CPUs.

Add ->current_work to the "struct cpu_workqueue_struct", it points to the
currently running "struct work_struct".  When flush_work(work) detects
->current_work == work, it inserts a barrier at the _head_ of ->worklist (and
thus right _after_ that work) and waits for completion.  This means that the
next work fired on that CPU will be this barrier, or another barrier queued by
a concurrent flush_work(), so the caller of flush_work() will be woken before
any "regular" work has a chance to run.

When wait_on_work() unlocks workqueue_mutex (or whatever we choose to protect
against CPU hotplug), the CPU may go away.  But in that case take_over_work()
will move the barrier we queued to another CPU, it will be fired sometime, and
wait_on_work() will be woken.

Actually, we are doing cleanup_workqueue_thread()->kthread_stop() before
take_over_work(), so cwq->thread should complete its ->worklist (and thus the
barrier), because currently we don't check kthread_should_stop() in
run_workqueue().  But even if we did, everything should be ok.

[akpm@osdl.org: cleanup]
[akpm@osdl.org: add flush_work_keventd() wrapper]
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
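To make the barrier trick above concrete, here is a minimal C sketch of the
idea.  It is not the patch's kernel/workqueue.c code: the struct layout, the
wait_for_running_work() helper and the locking shown are simplified
assumptions; only ->current_work, ->worklist and the head-insertion behaviour
come from the description above.

    #include <linux/workqueue.h>
    #include <linux/completion.h>
    #include <linux/spinlock.h>
    #include <linux/list.h>
    #include <linux/kernel.h>

    /* Simplified per-CPU queue state (the real cpu_workqueue_struct has more). */
    struct cpu_workqueue_struct {
    	spinlock_t		lock;
    	struct list_head	worklist;
    	struct work_struct	*current_work;	/* what the thread runs now */
    };

    /* A dummy work item whose only job is to wake the flusher. */
    struct wq_barrier {
    	struct work_struct	work;
    	struct completion	done;
    };

    static void wq_barrier_func(struct work_struct *work)
    {
    	struct wq_barrier *barr = container_of(work, struct wq_barrier, work);

    	complete(&barr->done);
    }

    /*
     * If @work is currently running on this CPU, queue a barrier at the
     * _head_ of ->worklist (i.e. right after @work) and wait for it, so we
     * return as soon as @work's handler finishes -- without waiting for any
     * unrelated works queued behind it.
     */
    static void wait_for_running_work(struct cpu_workqueue_struct *cwq,
    				  struct work_struct *work)
    {
    	struct wq_barrier barr;
    	int running = 0;

    	INIT_WORK(&barr.work, wq_barrier_func);
    	init_completion(&barr.done);

    	spin_lock_irq(&cwq->lock);
    	if (cwq->current_work == work) {
    		list_add(&barr.work.entry, &cwq->worklist);	/* head, not tail */
    		running = 1;
    	}
    	spin_unlock_irq(&cwq->lock);

    	if (running)
    		wait_for_completion(&barr.done);
    }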
Diffstat (limited to 'include/linux')
-rw-r--r--  include/linux/workqueue.h  |  4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index f16ba1e0687d..26a70992dec8 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -178,6 +178,8 @@ extern int FASTCALL(queue_delayed_work(struct workqueue_struct *wq, struct delay
 extern int queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
 		struct delayed_work *work, unsigned long delay);
 extern void FASTCALL(flush_workqueue(struct workqueue_struct *wq));
+extern void flush_work(struct workqueue_struct *wq, struct work_struct *work);
+extern void flush_work_keventd(struct work_struct *work);
 
 extern int FASTCALL(schedule_work(struct work_struct *work));
 extern int FASTCALL(run_scheduled_work(struct work_struct *work));
@@ -199,7 +201,7 @@ int execute_in_process_context(work_func_t fn, struct execute_work *);
  * Kill off a pending schedule_delayed_work(). Note that the work callback
  * function may still be running on return from cancel_delayed_work(), unless
  * it returns 1 and the work doesn't re-arm itself. Run flush_workqueue() or
- * cancel_work_sync() to wait on it.
+ * flush_work() or cancel_work_sync() to wait on it.
  */
 static inline int cancel_delayed_work(struct delayed_work *work)
 {
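As a usage note, a caller on the shared keventd queue can now wait for just
its own work item via the flush_work_keventd() wrapper declared above, instead
of flush_scheduled_work().  A hedged sketch follows; the foo_dev driver, its
fields and foo_teardown() are hypothetical, only flush_work_keventd() comes
from this patch.

    #include <linux/workqueue.h>
    #include <linux/mutex.h>

    /* Hypothetical driver state; only flush_work_keventd() is from this patch. */
    struct foo_dev {
    	struct mutex		lock;
    	struct work_struct	update_work;	/* queued via schedule_work() */
    };

    static void foo_teardown(struct foo_dev *dev)
    {
    	mutex_lock(&dev->lock);
    	/*
    	 * update_work's handler never takes dev->lock, so waiting for it
    	 * here is safe.  Unlike flush_scheduled_work(), this does not block
    	 * behind unrelated works on keventd that might want dev->lock
    	 * (cf. the linkwatch/rtnl_lock deadlock described in the changelog).
    	 */
    	flush_work_keventd(&dev->update_work);
    	mutex_unlock(&dev->lock);
    }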