author	Wu Fengguang <fengguang.wu@intel.com>	2011-08-03 16:30:36 -0400
committer	Wu Fengguang <fengguang.wu@intel.com>	2011-12-18 01:20:33 -0500
commit	bdaac4902a8225bf247ecaeac46c4b2980cc70e5 (patch)
tree	cf6c065d9e46c53b45248f634491a6d864563382 /mm
parent	82791940545be38810dfd5e03ee701e749f04aab (diff)
writeback: balanced_rate cannot exceed write bandwidth
Add an upper limit to balanced_rate according to the below inequality.
This filters out some rare but huge singular points, which at least
enables more readable gnuplot figures.
When there are N dd dirtiers,
balanced_dirty_ratelimit = write_bw / N
So it holds that
balanced_dirty_ratelimit <= write_bw
The singular points originate from dirty_rate in the below formula:
balanced_dirty_ratelimit = task_ratelimit * write_bw / dirty_rate
where
dirty_rate = (number of page dirties in the past 200ms) / 200ms
In the extreme case, if all dd tasks suddenly get blocked on something
else and hence no pages are dirtied at all, dirty_rate will be 0 and
balanced_dirty_ratelimit will be inf. This could happen in reality.
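For a concrete illustration (hypothetical numbers, not measured data):
suppose

	write_bw       = 25600 KB/s
	task_ratelimit =  6400 KB/s

and only 256 KB were dirtied in the last 200ms, so dirty_rate = 1280 KB/s.
Then

	balanced_dirty_ratelimit = 6400 * 25600 / 1280 = 128000 KB/s

which is five times the disk's real write bandwidth; the clamp added by
this patch caps it at write_bw = 25600 KB/s.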
Note that these huge singular points are not a real threat, since they
are _guaranteed_ to be filtered out by the
min(balanced_dirty_ratelimit, task_ratelimit)
line in bdi_update_dirty_ratelimit(). task_ratelimit is based on the
number of dirty pages, which will never _suddenly_ fly away like
balanced_dirty_ratelimit. So any weirdly large balanced_dirty_ratelimit
will be cut down to the level of task_ratelimit.
There won't be tiny singular points though, as long as the dirty pages
lie inside the dirty throttling region (above the freerun region).
Because there the dd tasks will be throttled by balance_dirty_pages()
and won't be able to suddenly dirty many more pages than average.
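To make the interplay concrete, here is a minimal userspace sketch of the
update path described above. The variable names mirror the kernel's, but
this is only a model, not bdi_update_dirty_ratelimit(); in particular the
real code uses the min() only to bound how far dirty_ratelimit is stepped,
not as a direct assignment.

#include <stdio.h>
#include <stdint.h>

static uint64_t model_balanced_ratelimit(uint64_t task_ratelimit,
					 uint64_t write_bw,
					 uint64_t dirty_rate)
{
	uint64_t balanced;

	/* "dirty_rate | 1" avoids dividing by zero when nothing was dirtied */
	balanced = task_ratelimit * write_bw / (dirty_rate | 1);

	/* this patch: balanced ~= write_bw / N, so it must not exceed write_bw */
	if (balanced > write_bw)
		balanced = write_bw;

	/* crude stand-in for the existing filter that cuts huge singular
	 * points down to the level of task_ratelimit */
	if (balanced > task_ratelimit)
		balanced = task_ratelimit;

	return balanced;
}

int main(void)
{
	/* the hypothetical KB/s numbers from the example above */
	printf("%llu KB/s\n", (unsigned long long)
	       model_balanced_ratelimit(6400, 25600, 1280));
	return 0;
}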
Acked-by: Jan Kara <jack@suse.cz>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Diffstat (limited to 'mm')
-rw-r--r--	mm/page-writeback.c	5
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 936dc7b61dc6..0ae2008eb540 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -822,6 +822,11 @@ static void bdi_update_dirty_ratelimit(struct backing_dev_info *bdi,
 	 */
 	balanced_dirty_ratelimit = div_u64((u64)task_ratelimit * write_bw,
 					   dirty_rate | 1);
+	/*
+	 * balanced_dirty_ratelimit ~= (write_bw / N) <= write_bw
+	 */
+	if (unlikely(balanced_dirty_ratelimit > write_bw))
+		balanced_dirty_ratelimit = write_bw;
 
 	/*
 	 * We could safely do this and return immediately: