author	Vivek Goyal <vgoyal@redhat.com>	2010-12-01 13:34:46 -0500
committer	Jens Axboe <jaxboe@fusionio.com>	2010-12-01 13:34:46 -0500
commit	d1ae8ffdfaa16b2ab2e9346e81cf0ab6eaaae347 (patch)
tree	d34fabaf556ec4471e076b4794fa1de8515956f0 /block
parent	5478755616ae2ef1ce144dded589b62b2a50d575 (diff)
blk-throttle: Trim/adjust slice_end once a bio has been dispatched
o During some testing I did the following and noticed that throttling stops working.
	- Put a very low limit on a cgroup, say 1 byte per second.
	- Start some reads; this will set slice_end to a very high value.
	- Change the limit to a higher value, say 1MB/s.
	- Now IO unthrottles and finishes as expected.
	- Try to do the read again, but IO is not limited to 1MB/s as expected.

o What is happening.
	- Initially the low limit sets slice_end to a very high value.
	- When the limit is updated, slice_end is not truncated.
	- The very high slice_end keeps the existing slice valid for a very long time, so a new slice does not start.
	- tg_may_dispatch() is called in blk_throtl_bio(), and throtl_trim_slice() is not called in this path. So slice_start is some old value and practically we are able to do a huge amount of IO.

o There are many ways this can be fixed. I have fixed it by adjusting/cleaning up slice_end in throtl_trim_slice(). Generally we extend slices if a bio is big and can't be dispatched in one slice. After dispatch of the bio, readjust slice_end to make sure we don't end up with huge values.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
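For illustration only, here is a minimal user-space sketch of the slice bookkeeping described above. Everything in it (model_grp, model_extend_slice, model_trim_slice, the 100-tick slice length) is a made-up stand-in that only models the jiffies arithmetic, not the real blk-throttle code.

/*
 * Hypothetical user-space model of the scenario in the commit message:
 * a huge slice_end left over from a very low limit is capped once a
 * bio has been dispatched. Names and values are illustrative only.
 */
#include <stdio.h>

#define MODEL_SLICE 100	/* pretend one throttle slice is 100 ticks */

/* Round x up to the next multiple of step, like the kernel's roundup(). */
static unsigned long model_roundup(unsigned long x, unsigned long step)
{
	return ((x + step - 1) / step) * step;
}

struct model_grp {
	unsigned long slice_start;
	unsigned long slice_end;
};

/* A big bio at a tiny limit pushes slice_end far into the future. */
static void model_extend_slice(struct model_grp *g, unsigned long jiffy_end)
{
	g->slice_end = model_roundup(jiffy_end, MODEL_SLICE);
}

/*
 * After a bio has been dispatched, cap slice_end to about one slice
 * beyond "now" so a stale, far-future value cannot keep the old slice
 * alive once the limit has been raised.
 */
static void model_trim_slice(struct model_grp *g, unsigned long now)
{
	g->slice_end = model_roundup(now + MODEL_SLICE, MODEL_SLICE);
}

int main(void)
{
	unsigned long now = 1000;
	struct model_grp g = { .slice_start = now, .slice_end = now };

	/* 1 byte/s limit: the pending read extends the slice massively. */
	model_extend_slice(&g, now + 1000000);
	printf("after extend: slice_end=%lu\n", g.slice_end);

	/* Limit raised, bio dispatched 200 ticks later: trim caps it. */
	now += 200;
	model_trim_slice(&g, now);
	printf("after trim:   slice_end=%lu (now=%lu)\n", g.slice_end, now);
	return 0;
}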
Diffstat (limited to 'block')
-rw-r--r--	block/blk-throttle.c	16
1 file changed, 16 insertions(+), 0 deletions(-)
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 004be80fd894..2d134b7c40a3 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -355,6 +355,12 @@ throtl_start_new_slice(struct throtl_data *td, struct throtl_grp *tg, bool rw)
 		tg->slice_end[rw], jiffies);
 }
 
+static inline void throtl_set_slice_end(struct throtl_data *td,
+		struct throtl_grp *tg, bool rw, unsigned long jiffy_end)
+{
+	tg->slice_end[rw] = roundup(jiffy_end, throtl_slice);
+}
+
 static inline void throtl_extend_slice(struct throtl_data *td,
 		struct throtl_grp *tg, bool rw, unsigned long jiffy_end)
 {
@@ -391,6 +397,16 @@ throtl_trim_slice(struct throtl_data *td, struct throtl_grp *tg, bool rw)
 	if (throtl_slice_used(td, tg, rw))
 		return;
 
+	/*
+	 * A bio has been dispatched. Also adjust slice_end. It might happen
+	 * that initially cgroup limit was very low resulting in high
+	 * slice_end, but later limit was bumped up and bio was dispatched
+	 * sooner, then we need to reduce slice_end. A high bogus slice_end
+	 * is bad because it does not allow new slice to start.
+	 */
+
+	throtl_set_slice_end(td, tg, rw, jiffies + throtl_slice);
+
 	time_elapsed = jiffies - tg->slice_start[rw];
 
 	nr_slices = time_elapsed / throtl_slice;
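As a quick sanity check on the hunk above, assuming throtl_slice is 100 jiffies (its default is HZ/10), the new call pins slice_end to roughly one slice past the current time no matter how far the old, low limit had pushed it out. The numbers below are only a hypothetical worked example of the call added in this patch.

	/* Hypothetical values: throtl_slice == 100, jiffies == 5230 */
	throtl_set_slice_end(td, tg, rw, jiffies + throtl_slice);
	/* jiffy_end = 5330, roundup(5330, 100) = 5400, so            */
	/* tg->slice_end[rw] ends up 170 jiffies ahead of now instead */
	/* of the stale value left over from the old 1 byte/s limit.  */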