author	Artem Bityutskiy <Artem.Bityutskiy@nokia.com>	2010-07-25 07:29:13 -0400
committer	Jens Axboe <jaxboe@fusionio.com>	2010-08-07 12:53:55 -0400
commit	c5f7ad233b8805dae06e694538d8095b19f3c560 (patch)
tree	c47a1991fe1de055cfd06f5cecb8cfaa9f497ed7 /mm
parent	94eac5e62364df4e605e451218ee6024a7ba664f (diff)
writeback: do not lose wake-ups in the forker thread - 1
Currently the forker thread can lose wake-ups, which may lead to unnecessary delays in processing bdi works. E.g., consider the following scenario:

1. 'bdi_forker_thread()' walks the 'bdi_list', finds out there is nothing to do, and is about to finish the loop.
2. A bdi thread decides to exit because it has been inactive for a long time.
3. 'bdi_queue_work()' adds a work to the bdi which just exited, so it wakes up the forker thread.
4. But 'bdi_forker_thread()' executes 'set_current_state(TASK_INTERRUPTIBLE)' and goes to sleep. We lose a wake-up.

Losing the wake-up is not fatal, but it means that bdi work processing can be delayed by up to 5 seconds. This race is theoretical; I have never hit it, but it is worth fixing.

The fix is to execute 'set_current_state(TASK_INTERRUPTIBLE)' _before_ walking 'bdi_list', not after.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
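To make the ordering concrete, here is a minimal sketch of the sleep/wake pattern the fix relies on. It is not the actual mm/backing-dev.c code: the thread, the 'example_work_pending' flag and the 'example_lock' spinlock are hypothetical stand-ins for the bdi work list and 'bdi_lock'.

#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

/* Hypothetical stand-ins for the bdi work list and bdi_lock. */
static bool example_work_pending;
static DEFINE_SPINLOCK(example_lock);

static int example_forker_thread(void *unused)
{
	while (!kthread_should_stop()) {
		spin_lock_bh(&example_lock);

		/*
		 * Declare the intention to sleep before checking for work.
		 * A wake_up_process() issued by a waker after this point
		 * puts the task back to TASK_RUNNING, so the
		 * schedule_timeout() below returns promptly instead of
		 * sleeping for the full timeout.
		 */
		set_current_state(TASK_INTERRUPTIBLE);

		if (example_work_pending) {
			example_work_pending = false;
			spin_unlock_bh(&example_lock);
			__set_current_state(TASK_RUNNING);
			/* ... process the queued work here ... */
			continue;
		}
		spin_unlock_bh(&example_lock);

		schedule_timeout(5 * HZ);
	}

	__set_current_state(TASK_RUNNING);
	return 0;
}

With the old ordering, where the state is set only after the check, a wake-up that arrives between the check and 'set_current_state()' finds the task still in TASK_RUNNING and has no effect, and the subsequent 'schedule_timeout()' sleeps for the full interval: exactly the delayed-processing scenario described above.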
Diffstat (limited to 'mm')
-rw-r--r--	mm/backing-dev.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 327e36d1d623..b1dc2d4b9cdd 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -342,6 +342,7 @@ static int bdi_forker_thread(void *ptr)
 		wb_do_writeback(me, 0);
 
 		spin_lock_bh(&bdi_lock);
+		set_current_state(TASK_INTERRUPTIBLE);
 
 		/*
 		 * Check if any existing bdi's have dirty data without
@@ -357,8 +358,6 @@ static int bdi_forker_thread(void *ptr)
 			bdi_add_default_flusher_thread(bdi);
 		}
 
-		set_current_state(TASK_INTERRUPTIBLE);
-
 		if (list_empty(&bdi_pending_list)) {
 			unsigned long wait;
 
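For completeness, the waker side that pairs with the sketch above might look like the following (again hypothetical; in the commit's scenario this role is played by 'bdi_queue_work()' waking the forker thread):

/* Queue work and wake the sleeper; pairs with example_forker_thread(). */
static void example_queue_work(struct task_struct *forker_task)
{
	spin_lock_bh(&example_lock);
	example_work_pending = true;
	spin_unlock_bh(&example_lock);

	/*
	 * If the forker has already executed
	 * set_current_state(TASK_INTERRUPTIBLE), this wake-up moves it back
	 * to TASK_RUNNING and its schedule_timeout() returns right away.
	 * If it has not yet checked for work, the check will see
	 * example_work_pending. Either way the wake-up is not lost, which
	 * is what setting the state before walking 'bdi_list' guarantees.
	 */
	wake_up_process(forker_task);
}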