author     Justin TerAvest <teravest@google.com>  2011-03-01 15:05:08 -0500
committer  Jens Axboe <jaxboe@fusionio.com>       2011-03-01 15:05:08 -0500
commit     0bbfeb8320421989d3e12bd95fae86b9ac0712aa (patch)
tree       a23bb6686bd2586fe7c27d0773c63f9aeb254cd1 /Documentation/cgroups
parent     6fae9c25134baffbeeb20031479e7ff6f6d8eec0 (diff)
cfq-iosched: Always provide group isolation.
Effectively, make group_isolation=1 the default and remove the tunable.
The group_isolation=0 setting existed because, by default, CFQ idles on
the sync-noidle tree, and on fast devices this idling can be very harmful
for throughput.
However, this problem can also be addressed by tuning slice_idle and
possibly group_idle on faster storage devices.
This change simplifies the CFQ code by removing the feature entirely.
Signed-off-by: Justin TerAvest <teravest@google.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
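
For context, the alternative tuning the commit message points to could look
like this on fast storage. This is a hedged sketch: the device name sda and
the group_idle value are purely illustrative.

    # Disable queue idling on fast storage where it hurts throughput.
    echo 0 > /sys/block/sda/queue/iosched/slice_idle
    # Optionally rely on group_idle (in ms) for per-group idling instead.
    echo 8 > /sys/block/sda/queue/iosched/group_idle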
Diffstat (limited to 'Documentation/cgroups')
-rw-r--r--  Documentation/cgroups/blkio-controller.txt | 28 ----------------------------
1 file changed, 0 insertions(+), 28 deletions(-)
diff --git a/Documentation/cgroups/blkio-controller.txt b/Documentation/cgroups/blkio-controller.txt
index 4ed7b5ceeed2..d915c16df42c 100644
--- a/Documentation/cgroups/blkio-controller.txt
+++ b/Documentation/cgroups/blkio-controller.txt
@@ -343,34 +343,6 @@ Common files among various policies
 
 CFQ sysfs tunable
 =================
-/sys/block/<disk>/queue/iosched/group_isolation
------------------------------------------------
-
-If group_isolation=1, it provides stronger isolation between groups at the
-expense of throughput. By default group_isolation is 0. In general that
-means that if group_isolation=0, expect fairness for sequential workload
-only. Set group_isolation=1 to see fairness for random IO workload also.
-
-Generally CFQ will put random seeky workload in sync-noidle category. CFQ
-will disable idling on these queues and it does a collective idling on group
-of such queues. Generally these are slow moving queues and if there is a
-sync-noidle service tree in each group, that group gets exclusive access to
-disk for certain period. That means it will bring the throughput down if
-group does not have enough IO to drive deeper queue depths and utilize disk
-capacity to the fullest in the slice allocated to it. But the flip side is
-that even a random reader should get better latencies and overall throughput
-if there are lots of sequential readers/sync-idle workload running in the
-system.
-
-If group_isolation=0, then CFQ automatically moves all the random seeky queues
-in the root group. That means there will be no service differentiation for
-that kind of workload. This leads to better throughput as we do collective
-idling on root sync-noidle tree.
-
-By default one should run with group_isolation=0. If that is not sufficient
-and one wants stronger isolation between groups, then set group_isolation=1
-but this will come at cost of reduced throughput.
-
 /sys/block/<disk>/queue/iosched/slice_idle
 ------------------------------------------
 On a faster hardware CFQ can be slow, especially with sequential workload.
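
For historical reference, on kernels predating this commit the removed
tunable was toggled through sysfs like the others documented above; a
sketch, with sda again illustrative:

    # Pre-removal kernels only: trade throughput for stronger group isolation.
    echo 1 > /sys/block/sda/queue/iosched/group_isolation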