author     Tejun Heo <tj@kernel.org>           2010-09-03 05:56:16 -0400
committer  Jens Axboe <jaxboe@fusionio.com>    2010-09-10 06:35:36 -0400
commit     4913efe456c987057e5d36a3f0a55422a9072cae
tree       295f04a7214e1933df3301dd42c12ff3f282a22c /drivers/block/virtio_blk.c
parent     6958f145459ca7ad9715024de97445addacb8510
block: deprecate barrier and replace blk_queue_ordered() with blk_queue_flush()
Barriers are deemed too heavy and will soon be replaced by FLUSH/FUA
requests, so deprecate them. All REQ_HARDBARRIER requests now fail with
-EOPNOTSUPP, and blk_queue_ordered() is replaced with the simpler
blk_queue_flush().
blk_queue_flush() takes combinations of REQ_FLUSH and REQ_FUA. If a
device has a write cache and can flush it, it should set REQ_FLUSH. If
the device also handles FUA writes, it should set REQ_FUA as well.
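
A minimal sketch (not from this patch) of how a driver would pick the flags
under these rules. The helper name and the has_wcache/has_fua parameters are
hypothetical; blk_queue_flush(), REQ_FLUSH and REQ_FUA are the interfaces
described above.

#include <linux/blkdev.h>

/* Hypothetical helper: advertise cache-flush capabilities on @q based on
 * what the hardware reports (illustration only). */
static void example_setup_flush(struct request_queue *q,
				bool has_wcache, bool has_fua)
{
	unsigned int flush = 0;

	if (has_wcache)
		flush |= REQ_FLUSH;	/* device has a write cache it can flush */
	if (has_wcache && has_fua)
		flush |= REQ_FUA;	/* device also honors FUA writes */

	blk_queue_flush(q, flush);	/* replaces blk_queue_ordered() */
}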
All blk_queue_ordered() users are converted.
* ORDERED_DRAIN is mapped to 0, which is the default value.
* ORDERED_DRAIN_FLUSH is mapped to REQ_FLUSH.
* ORDERED_DRAIN_FLUSH_FUA is mapped to REQ_FLUSH | REQ_FUA.
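
As a hedged illustration of that mapping (not from this patch; the function
name is hypothetical), a converted call site for the ORDERED_DRAIN_FLUSH_FUA
case above reduces to a single call:

#include <linux/blkdev.h>

static void example_converted_call_site(struct request_queue *q)
{
	/* Pre-patch this driver would have used blk_queue_ordered() with the
	 * ORDERED_DRAIN_FLUSH_FUA level; post-patch it sets the flags directly. */
	blk_queue_flush(q, REQ_FLUSH | REQ_FUA);

	/* ORDERED_DRAIN users pass 0 (the default) and
	 * ORDERED_DRAIN_FLUSH users pass REQ_FLUSH alone. */
}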
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Boaz Harrosh <bharrosh@panasas.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Alasdair G Kergon <agk@redhat.com>
Cc: Pierre Ossman <drzeus@drzeus.cx>
Cc: Stefan Weinhuber <wein@de.ibm.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
Diffstat (limited to 'drivers/block/virtio_blk.c')
 drivers/block/virtio_blk.c | 25 +++++++++----------------
 1 file changed, 9 insertions(+), 16 deletions(-)
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 79652809eee8..d10b635b3946 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -388,22 +388,15 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
 	vblk->disk->driverfs_dev = &vdev->dev;
 	index++;
 
-	if (virtio_has_feature(vdev, VIRTIO_BLK_F_FLUSH)) {
-		/*
-		 * If the FLUSH feature is supported we do have support for
-		 * flushing a volatile write cache on the host.  Use that
-		 * to implement write barrier support.
-		 */
-		blk_queue_ordered(q, QUEUE_ORDERED_DRAIN_FLUSH);
-	} else {
-		/*
-		 * If the FLUSH feature is not supported we must assume that
-		 * the host does not perform any kind of volatile write
-		 * caching. We still need to drain the queue to provider
-		 * proper barrier semantics.
-		 */
-		blk_queue_ordered(q, QUEUE_ORDERED_DRAIN);
-	}
+	/*
+	 * If the FLUSH feature is supported we do have support for
+	 * flushing a volatile write cache on the host.  Use that to
+	 * implement write barrier support; otherwise, we must assume
+	 * that the host does not perform any kind of volatile write
+	 * caching.
+	 */
+	if (virtio_has_feature(vdev, VIRTIO_BLK_F_FLUSH))
+		blk_queue_flush(q, REQ_FLUSH);
 
 	/* If disk is read-only in the host, the guest should obey */
 	if (virtio_has_feature(vdev, VIRTIO_BLK_F_RO))