author    Justin TerAvest <teravest@google.com>    2011-03-01 15:05:08 -0500
committer Jens Axboe <jaxboe@fusionio.com>         2011-03-01 15:05:08 -0500
commit    0bbfeb8320421989d3e12bd95fae86b9ac0712aa
tree      a23bb6686bd2586fe7c27d0773c63f9aeb254cd1
parent    6fae9c25134baffbeeb20031479e7ff6f6d8eec0
cfq-iosched: Always provide group isolation.
Effectively, make group_isolation=1 the default and remove the tunable.
The group_isolation=0 setting existed because, by default, we idle on the
sync-noidle tree, and on fast devices this can be very harmful for
throughput. However, this problem can also be addressed by tuning
slice_idle (and possibly group_idle) on faster storage devices. This
change simplifies the CFQ code by removing the feature entirely.

Signed-off-by: Justin TerAvest <teravest@google.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
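The message above points to slice_idle (and possibly group_idle) as the knobs to reach for on fast storage now that group_isolation is gone. As a rough, non-authoritative sketch of what that tuning could look like from userspace, not part of this patch: the device name sdb and the values written below are placeholder assumptions, CFQ must be the active scheduler for the iosched directory to exist, and root privileges are required.

from pathlib import Path

# Placeholder device; pick a disk that actually uses the CFQ scheduler.
IOSCHED = Path("/sys/block/sdb/queue/iosched")

def set_tunable(name: str, value: int) -> None:
    """Write a CFQ iosched tunable via sysfs (requires root)."""
    (IOSCHED / name).write_text(f"{value}\n")

if __name__ == "__main__":
    # Example values only; suitable settings depend on the storage device.
    set_tunable("slice_idle", 0)   # disable per-queue idling on fast storage
    set_tunable("group_idle", 1)   # keep a small group idle window, if desired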
Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/cgroups/blkio-controller.txt  28
1 file changed, 0 insertions, 28 deletions
diff --git a/Documentation/cgroups/blkio-controller.txt b/Documentation/cgroups/blkio-controller.txt
index 4ed7b5ceeed2..d915c16df42c 100644
--- a/Documentation/cgroups/blkio-controller.txt
+++ b/Documentation/cgroups/blkio-controller.txt
@@ -343,34 +343,6 @@ Common files among various policies
 
 CFQ sysfs tunable
 =================
-/sys/block/<disk>/queue/iosched/group_isolation
------------------------------------------------
-
-If group_isolation=1, it provides stronger isolation between groups at the
-expense of throughput. By default group_isolation is 0. In general that
-means that if group_isolation=0, expect fairness for sequential workload
-only. Set group_isolation=1 to see fairness for random IO workload also.
-
-Generally CFQ will put random seeky workload in sync-noidle category. CFQ
-will disable idling on these queues and it does a collective idling on group
-of such queues. Generally these are slow moving queues and if there is a
-sync-noidle service tree in each group, that group gets exclusive access to
-disk for certain period. That means it will bring the throughput down if
-group does not have enough IO to drive deeper queue depths and utilize disk
-capacity to the fullest in the slice allocated to it. But the flip side is
-that even a random reader should get better latencies and overall throughput
-if there are lots of sequential readers/sync-idle workload running in the
-system.
-
-If group_isolation=0, then CFQ automatically moves all the random seeky queues
-in the root group. That means there will be no service differentiation for
-that kind of workload. This leads to better throughput as we do collective
-idling on root sync-noidle tree.
-
-By default one should run with group_isolation=0. If that is not sufficient
-and one wants stronger isolation between groups, then set group_isolation=1
-but this will come at cost of reduced throughput.
-
 /sys/block/<disk>/queue/iosched/slice_idle
 ------------------------------------------
 On a faster hardware CFQ can be slow, especially with sequential workload.
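For completeness, a quick way to confirm that a kernel carrying this patch no longer exposes the tunable is to list the iosched directory. This is only an illustrative sketch: sdb is a placeholder device and CFQ is assumed to be its active scheduler.

from pathlib import Path

# Placeholder device; pick a disk that actually uses the CFQ scheduler.
iosched = Path("/sys/block/sdb/queue/iosched")

tunables = sorted(entry.name for entry in iosched.iterdir())
print(tunables)
# With this patch applied, 'group_isolation' should no longer be listed.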