author     Wu Fengguang <fengguang.wu@intel.com>  2011-11-23 12:44:41 -0500
committer  Wu Fengguang <fengguang.wu@intel.com>  2011-12-07 21:49:20 -0500
commit     aed21ad28b1323b2807faea019e5ac388a7bc837 (patch)
tree       64d6bf0e86b7d256621420d2266d5c7c29bb5d50 /mm
parent     a50527b19c62c808a7fca022816fff88a50b948d (diff)
writeback: comment on the bdi dirty threshold
We do "floating proportions" to let active devices to grow its target share of dirty pages and stalled/inactive devices to decrease its target share over time. It works well except in the case of "an inactive disk suddenly goes busy", where the initial target share may be too small. To mitigate this, bdi_position_ratio() has the below line to raise a small bdi_thresh when it's safe to do so, so that the disk be feed with enough dirty pages for efficient IO and in turn fast rampup of bdi_thresh: bdi_thresh = max(bdi_thresh, (limit - dirty) / 8); balance_dirty_pages() normally does negative feedback control which adjusts ratelimit to balance the bdi dirty pages around the target. In some extreme cases when that is not enough, it will have to block the tasks completely until the bdi dirty pages drop below bdi_thresh. Acked-by: Jan Kara <jack@suse.cz> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Diffstat (limited to 'mm')
-rw-r--r--  mm/page-writeback.c  16
1 file changed, 14 insertions, 2 deletions
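The two-level behaviour of balance_dirty_pages() described in the commit message (gentle ratelimit adjustment normally, full blocking only in extreme cases) can also be pictured with a deliberately simplified sketch. Again, this is not the mm/page-writeback.c implementation; the toy_bdi structure, toy_balance_dirty_pages name, and the 1/8 step size are invented purely to illustrate the negative-feedback idea.

/*
 * Illustrative only -- not the real balance_dirty_pages().  It mirrors
 * the control structure from the commit message: nudge the per-task
 * ratelimit down when bdi dirty pages sit above the target, nudge it up
 * when below, and fall back to blocking only above bdi_thresh.
 */
#include <stdbool.h>

struct toy_bdi {
        unsigned long dirty;      /* current bdi dirty pages (hypothetical)  */
        unsigned long setpoint;   /* dirty-page target to balance around     */
        unsigned long thresh;     /* bdi_thresh: block outright above this   */
        unsigned long ratelimit;  /* pages a task may dirty before pausing   */
};

/* Returns true when the dirtying task must block until dirty pages drop. */
bool toy_balance_dirty_pages(struct toy_bdi *bdi)
{
        if (bdi->dirty > bdi->thresh)
                return true;                            /* extreme case      */

        if (bdi->dirty > bdi->setpoint)
                bdi->ratelimit -= bdi->ratelimit / 8;   /* throttle harder   */
        else
                bdi->ratelimit += bdi->ratelimit / 8;   /* ease off          */

        return false;
}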
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 71252486bc6f..155efca4c123 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -411,8 +411,13 @@ void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty)
  *
  * Returns @bdi's dirty limit in pages. The term "dirty" in the context of
  * dirty balancing includes all PG_dirty, PG_writeback and NFS unstable pages.
- * And the "limit" in the name is not seriously taken as hard limit in
- * balance_dirty_pages().
+ *
+ * Note that balance_dirty_pages() will only seriously take it as a hard limit
+ * when sleeping max_pause per page is not enough to keep the dirty pages under
+ * control. For example, when the device is completely stalled due to some error
+ * conditions, or when there are 1000 dd tasks writing to a slow 10MB/s USB key.
+ * In the other normal situations, it acts more gently by throttling the tasks
+ * more (rather than completely block them) when the bdi dirty pages go high.
  *
  * It allocates high/low dirty limits to fast/slow devices, in order to prevent
  * - starving fast devices
@@ -594,6 +599,13 @@ static unsigned long bdi_position_ratio(struct backing_dev_info *bdi,
 	 */
 	if (unlikely(bdi_thresh > thresh))
 		bdi_thresh = thresh;
+	/*
+	 * It's very possible that bdi_thresh is close to 0 not because the
+	 * device is slow, but that it has remained inactive for long time.
+	 * Honour such devices a reasonable good (hopefully IO efficient)
+	 * threshold, so that the occasional writes won't be blocked and active
+	 * writes can rampup the threshold quickly.
+	 */
 	bdi_thresh = max(bdi_thresh, (limit - dirty) / 8);
 	/*
 	 * scale global setpoint to bdi's: