path: root/include/linux/blkdev.h
* block: add a blk_plug_device_unlocked() that grabs the queue lock (Jens Axboe, 2008-08-01)
    blk_plug_device() must be called with the queue lock held, so callers often just grab and release the lock for that purpose. Add a helper that does just that.
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
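    A minimal sketch of the helper as described (the committed code may differ in detail):
        void blk_plug_device_unlocked(struct request_queue *q)
        {
                unsigned long flags;

                spin_lock_irqsave(q->queue_lock, flags);
                blk_plug_device(q);
                spin_unlock_irqrestore(q->queue_lock, flags);
        }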
* block: Trivial fix for blk_integrity_rq() (Martin K. Petersen, 2008-07-16)
    Fail integrity check gracefully when request does not have a bio attached (BLOCK_PC).
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
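    The graceful failure amounts to an early return when no bio is attached; roughly:
        static inline int blk_integrity_rq(struct request *rq)
        {
                /* BLOCK_PC requests may have no bio attached */
                if (rq->bio == NULL)
                        return 0;

                return bio_integrity(rq->bio);
        }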
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/bart/ide-2.6 (Linus Torvalds, 2008-07-15)
    * git://git.kernel.org/pub/scm/linux/kernel/git/bart/ide-2.6: (80 commits)
      ide-floppy: fix unfortunate function naming
      ide-tape: unify idetape_create_read/write_cmd
      ide: add ide_pc_intr() helper
      ide-{floppy,scsi}: read Status Register before stopping DMA engine
      ide-scsi: add more debugging to idescsi_pc_intr()
      ide-scsi: use pc->callback
      ide-floppy: add more debugging to idefloppy_pc_intr()
      ide-tape: always log debug info in idetape_pc_intr() if debugging is enabled
      ide-tape: add ide_tape_io_buffers() helper
      ide-tape: factor out DSC handling from idetape_pc_intr()
      ide-{floppy,tape}: move checking of ->failed_pc to ->callback
      ide: add ide_issue_pc() helper
      ide: add PC_FLAG_DRQ_INTERRUPT pc flag
      ide-scsi: move idescsi_map_sg() call out from idescsi_issue_pc()
      ide: add ide_transfer_pc() helper
      ide-scsi: set drive->scsi flag for devices handled by the driver
      ide-{cd,floppy,tape}: remove checking for drive->scsi
      ide: add PC_FLAG_ZIP_DRIVE pc flag
      ide-tape: factor out waiting for good ireason from idetape_transfer_pc()
      ide-tape: set PC_FLAG_DMA_IN_PROGRESS flag in idetape_transfer_pc()
      ...
| * block: unexport blk_end_sync_rq (FUJITA Tomonori, 2008-07-15)
    All the users of blk_end_sync_rq have gone (they were converted to use blk_execute_rq). This unexports blk_end_sync_rq.
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: Borislav Petkov <petkovbb@gmail.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
* | block: add blk_queue_update_dma_pad (FUJITA Tomonori, 2008-07-04)
    This adds blk_queue_update_dma_pad to prevent LLDs from overwriting the dma pad mask wrongly (we added blk_queue_update_dma_alignment for the same reason). This also converts libata to use blk_queue_update_dma_pad instead of blk_queue_dma_pad.
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: Tejun Heo <htejun@gmail.com>
    Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
    Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
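    A sketch of the update semantics (grow-only, so a stacked LLD cannot shrink a previously set pad mask):
        void blk_queue_update_dma_pad(struct request_queue *q, unsigned int mask)
        {
                /* only ever increase the pad mask, never overwrite it */
                if (mask > q->dma_pad_mask)
                        q->dma_pad_mask = mask;
        }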
* | block: extend queue_flag bitops (Jens Axboe, 2008-07-03)
    Add test_and_clear and test_and_set.
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
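    A sketch of one of the new helpers, following the non-atomic queue_flag convention (queue lock held, __set_bit variants):
        static inline int queue_flag_test_and_set(unsigned int flag,
                                                  struct request_queue *q)
        {
                WARN_ON_ONCE(!queue_is_locked(q));

                if (!test_bit(flag, &q->queue_flags)) {
                        __set_bit(flag, &q->queue_flags);
                        return 0;
                }

                return 1;
        }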
* | Add bvec_merge_data to handle stacked devices and ->merge_bvec() (Alasdair G Kergon, 2008-07-03)
    When devices are stacked, one device's merge_bvec_fn may need to perform the mapping and then call one or more functions for its underlying devices. The following bio fields are used:
      bio->bi_sector
      bio->bi_bdev
      bio->bi_size
      bio->bi_rw (using bio_data_dir())
    This patch creates a new struct bvec_merge_data holding a copy of those fields to avoid having to change them directly in the struct bio when going down the stack, only to have to change them back again on the way back up. (And then when the bio gets mapped for real, the whole exercise gets repeated, but that's a problem for another day...)
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
    Cc: Neil Brown <neilb@suse.de>
    Cc: Milan Broz <mbroz@redhat.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
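    The new structure carries copies of the fields listed above, and the merge_bvec_fn signature takes it instead of a struct bio (a sketch of the shape, not necessarily the exact committed layout):
        struct bvec_merge_data {
                struct block_device *bi_bdev;
                sector_t bi_sector;
                unsigned bi_size;
                unsigned long bi_rw;
        };

        typedef int (merge_bvec_fn) (struct request_queue *,
                                     struct bvec_merge_data *,
                                     struct bio_vec *);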
* | block: integrity flags can't use bit ops on unsigned short (Jens Axboe, 2008-07-03)
    Just use normal open coded bit operations instead, they need not be atomic.
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
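    The pattern in question: set_bit()/clear_bit() operate on unsigned long, so on an unsigned short flags field the open coded form is used instead (flag name illustrative):
        bi->flags |= INTEGRITY_FLAG_SOME;       /* instead of set_bit() */
        bi->flags &= ~INTEGRITY_FLAG_SOME;      /* instead of clear_bit() */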
* | allow userspace to modify scsi command filter on per device basis (Adel Gadllah, 2008-07-03)
    This patch exports the per-gendisk command filter to user space through sysfs, so it can be changed by the system administrator. All users of the old cmd filter have been converted to use the new one. Original patch from Peter Jones.
    Signed-off-by: Adel Gadllah <adel.gadllah@gmail.com>
    Signed-off-by: Peter Jones <pjones@redhat.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* | block: integrity cleanups (Jens Axboe, 2008-07-03)
    - No need to check for NULL bio, we'll get an immediate oops anyway.
    - Make bio_integrity() a proper function.
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* | block: blkdev.h cleanup, move iocontext stuff to iocontext.h (Jens Axboe, 2008-07-03)
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* | block: Block layer data integrity support (Martin K. Petersen, 2008-07-03)
    Some block devices support verifying the integrity of requests by way of checksums or other protection information that is submitted along with the I/O. This patch implements support for generating and verifying integrity metadata, as well as correctly merging, splitting and cloning bios and requests that have this extra information attached. See Documentation/block/data-integrity.txt for more information.
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* | block: kill request_queue_t (Jens Axboe, 2008-07-03)
    Everything was moved to struct request_queue a few kernel revisions ago, maintaining the deprecated typedef to avoid breaking things. Now the time has come to get rid of that typedef.
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* Improve queue_is_locked() (Jens Axboe, 2008-04-29)
    spin_is_locked() doesn't work on UP without spinlock debugging. Make it safer and just return 1 on UP, so we don't get false positives. The plan is to kill this debug function during the -rc cycle.
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
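    Combined with the locking-verification fix below, the helper ends up roughly as:
        static inline int queue_is_locked(struct request_queue *q)
        {
        #ifdef CONFIG_SMP
                spinlock_t *lock = q->queue_lock;
                /* an uninitialized (NULL) lock counts as unlocked */
                return lock && spin_is_locked(lock);
        #else
                /* spin_is_locked() is unreliable on UP, assume locked */
                return 1;
        #endif
        }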
* block: fix queue locking verification (Linus Torvalds, 2008-04-29)
    The new queue_flag_set/clear() functions verify that the queue is locked, but in doing so they will actually instead oops if the queue lock hasn't been initialized at all. So fix the lock debug test to consider the "no lock" case to be unlocked. This way you get a nice WARN_ON_ONCE() instead of a fatal oops. Bug introduced by commit 75ad23bc0fcb4f992a5d06982bf0857ab1738e9e ("block: make queue flags non-atomic").
    Cc: Jens Axboe <jens.axboe@oracle.com>
    Cc: Nick Piggin <npiggin@suse.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* block: Skip I/O merges when disabled (Alan D. Brunelle, 2008-04-29)
    The block I/O + elevator + I/O scheduler code spends a lot of time trying to merge I/Os -- rightfully so under "normal" circumstances. However, if one were to know that the incoming I/O stream was /very/ random in nature, the cycles are wasted. This patch adds a per-request_queue tunable that (when set) disables merge attempts (beyond the simple one-hit cache check), thus freeing up a non-trivial amount of CPU cycles.
    Signed-off-by: Alan D. Brunelle <alan.brunelle@hp.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
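    The tunable is a per-queue flag exposed through sysfs (/sys/block/<dev>/queue/nomerges); the merge paths then test it along the lines of:
        #define blk_queue_nomerges(q)  test_bit(QUEUE_FLAG_NOMERGES, &(q)->queue_flags)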
* block: add large command support (FUJITA Tomonori, 2008-04-29)
    This patch changes rq->cmd from the static array to a pointer to support large commands. We rarely handle large commands, so as an optimization a struct request still has a static array for a command. rq_init sets the rq->cmd pointer to the static array.
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: Jens Axboe <jens.axboe@oracle.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
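    A sketch of the resulting layout: the pointer normally aims at the inline array, and only large commands allocate separately (field names as described above; details may differ from the committed code):
        #define BLK_MAX_CDB     16

        struct request {
                /* ... */
                unsigned short cmd_len;
                unsigned char __cmd[BLK_MAX_CDB];       /* inline, common case */
                unsigned char *cmd;     /* initialized to point at __cmd;
                                         * large commands use a separate
                                         * allocation instead */
                /* ... */
        };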
* block: rename and export rq_init() (FUJITA Tomonori, 2008-04-29)
    This renames rq_init() to blk_rq_init() and exports it. Any path that hands the request to the block layer needs to call it to initialize the request. This is a preparation for large command support, which needs to initialize the request in a proper way (that is, just doing a memset() will not work).
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: Jens Axboe <jens.axboe@oracle.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* block: make queue flags non-atomic (Nick Piggin, 2008-04-29)
    We can save some atomic ops in the IO path if we clearly define the rules of how to modify the queue flags.
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
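    The resulting rule: queue flags are modified only under the queue lock, with non-atomic bitops; a sketch:
        static inline void queue_flag_set(unsigned int flag,
                                          struct request_queue *q)
        {
                WARN_ON_ONCE(!queue_is_locked(q));
                __set_bit(flag, &q->queue_flags);
        }

        static inline void queue_flag_clear(unsigned int flag,
                                            struct request_queue *q)
        {
                WARN_ON_ONCE(!queue_is_locked(q));
                __clear_bit(flag, &q->queue_flags);
        }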
* block: fix memory hotplug and bouncing in block layer (Andi Kleen, 2008-04-21)
    Only noticed this while hacking something else, no test case. blk_max_low_pfn is initialized once at bootup by the block layer from max_low_pfn. But max_low_pfn is not necessarily constant over the runtime of the system when you consider memory hotplug. What could happen is that someone adds memory later, the block layer doesn't get updated, and it then starts bouncing memory unnecessarily. Also, on 64bit blk_max_low_pfn isn't actually needed, because it essentially just disables bouncing and there is no highmem. And nobody can pass pfns > max_low_pfn to the block layer, because those wouldn't have a struct page, and I suspect the block layer wouldn't be very happy without that. So set BLK_BOUNCE_HIGH to infinity (-1ULL) on 64bit. That avoids the problem of having to update it on memory hotadd. On 32bit I kept the same behaviour because at least on i386 memory hotadd only adds HIGHMEM, never lowmem. BLK_BOUNCE_ANY is always set to infinity on both 32 and 64bit.
    Signed-off-by: Andi Kleen <ak@suse.de>
    Cc: Jens Axboe <jens.axboe@oracle.com>
    Acked-by: Yasunori Goto <y-goto@jp.fujitsu.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
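    The bounce limits then look roughly like this (a sketch of the described end state):
        #if BITS_PER_LONG == 32
        #define BLK_BOUNCE_HIGH         ((u64)blk_max_low_pfn << PAGE_SHIFT)
        #else
        #define BLK_BOUNCE_HIGH         -1ULL   /* no highmem, never bounce */
        #endif
        #define BLK_BOUNCE_ANY          (-1ULL)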
* block: move the padding adjustment to blk_rq_map_sg (FUJITA Tomonori, 2008-04-21)
    blk_rq_map_user adjusts bi_size of the last bio. It breaks the rule that req->data_len (the true data length) is equal to sum(bio). It broke the scsi command completion code. Commit e97a294ef6938512b655b1abf17656cf2b26f709 was introduced to fix the above issue. However, the partial completion code doesn't work with it. The commit is also a layer violation (the scsi mid-layer should not know about the block layer's padding). This patch moves the padding adjustment to blk_rq_map_sg (suggested by James). The padding works like the drain buffer. This patch breaks the rule that req->data_len is equal to sum(sg); however, the drain buffer already broke it. So this patch just restores the rule that req->data_len is equal to sum(bio) without breaking anything new. Now when a low level driver needs padding, blk_rq_map_user and blk_rq_map_user_iov guarantee there's enough room for padding, so blk_rq_map_sg can safely extend the last entry of a scatter list. blk_rq_map_sg must extend the last entry of a scatter list only for a request that got through bio_copy_user_iov, so this patch introduces a new REQ_COPY_USER flag to mark such requests.
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: Tejun Heo <htejun@gmail.com>
    Cc: Mike Christie <michaelc@cs.wisc.edu>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* block: separate out padding from alignment (Tejun Heo, 2008-03-04)
    Block layer alignment was used for two different purposes: memory alignment and padding. This causes problems in lower layers because drivers which only require memory alignment end up with an adjusted rq->data_len. Separate out padding such that padding occurs iff the driver explicitly requests it. Tomo: restore the code to update bio in blk_rq_map_user introduced by commit 40b01b9bbdf51ae543a04744283bf2d56c4a6afa according to padding alignment.
    Signed-off-by: Tejun Heo <htejun@gmail.com>
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* block: restore the meaning of rq->data_len to the true data length (FUJITA Tomonori, 2008-03-04)
    The meaning of rq->data_len was changed to the length of an allocated buffer from the true data length. It breaks SG_IO friends and bsg. This patch restores the meaning of rq->data_len to the true data length and adds rq->extra_len to store an extended length (due to drain buffer and padding). This patch also removes the code to update bio in blk_rq_map_user introduced by the commit 40b01b9bbdf51ae543a04744283bf2d56c4a6afa. The commit adjusts bio according to memory alignment (queue_dma_alignment). However, memory alignment is NOT padding alignment. This adjustment also breaks SG_IO friends and bsg. Padding alignment needs to be fixed in a proper way (by a separate patch).
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Signed-off-by: Jens Axboe <axboe@carl.home.kernel.dk>
* block: implement request_queue->dma_drain_needed (Tejun Heo, 2008-02-19)
    Draining shouldn't be done for commands where overflow may indicate data integrity issues. Add a dma_drain_needed callback to request_queue. The drain buffer is appended iff this function returns non-zero.
    Signed-off-by: Tejun Heo <htejun@gmail.com>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
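    A sketch of the callback plumbing as described (the drain buffer is appended only when the callback returns non-zero):
        typedef int (dma_drain_needed_fn)(struct request *);

        extern int blk_queue_dma_drain(struct request_queue *q,
                                       dma_drain_needed_fn *dma_drain_needed,
                                       void *buf, unsigned int size);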
* block: add request->raw_data_len (Tejun Heo, 2008-02-19)
    With padding and draining moved into it, the block layer now may extend requests as directed by queue parameters, so a request now has two sizes: the original request size, and the extended size which matches the size of the area pointed to by bios and later by sgs. The latter size is what lower layers are primarily interested in when allocating, filling up DMA tables and setting up the controller. Both padding and draining extend the data area to accommodate controller characteristics. As any controller which speaks SCSI can handle underflows, feeding a larger data area is safe. So, this patch makes the primary data length field, request->data_len, indicate the size of the full data area, and adds a separate length field, request->raw_data_len, for the unmodified request size. The latter is used when reporting to the higher layer (userland) and wherever the original request size should be fed to the controller or device.
    Signed-off-by: Tejun Heo <htejun@gmail.com>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* block: fixup rq_init() a bit (Jens Axboe, 2008-02-08)
    Rearrange fields in cache order and initialize some fields that we didn't previously init. Remove init of ->completion_data, it's part of a union with ->hash. Luckily clearing the rb node is the same as setting it to null!
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* block: kill swap_io_context() (Jens Axboe, 2008-02-01)
    It blindly copies everything in the io_context, including the lock. That doesn't work so well for either lock ordering or lockdep. There seems to be zero point in swapping io contexts on a request-to-request merge, so the best course of action is to just remove it.
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* block: new end request handling interface should take unsigned byte counts (Jens Axboe, 2008-02-01)
    No point in passing signed integers as the byte count; they can never be negative.
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* block: fix warning on compile with CONFIG_BLOCK (Jens Axboe, 2008-01-29)
    struct io_context was not defined, just add an empty forward decl.
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* Merge branch 'for-2.6.25' of git://git.kernel.dk/linux-2.6-block (Linus Torvalds, 2008-01-28)
    * 'for-2.6.25' of git://git.kernel.dk/linux-2.6-block:
      block: implement drain buffers
      __bio_clone: don't calculate hw/phys segment counts
      block: allow queue dma_alignment of zero
      blktrace: Add blktrace ioctls to SCSI generic devices
| * block: implement drain buffers (James Bottomley, 2008-01-28)
    These DMA drain buffer implementations in drivers are pretty horrible to do in terms of manipulating the scatterlist. Plus they're being done at least in drivers/ide and drivers/ata, so we now have code duplication. The one use case for this, as I understand it, is AHCI controllers doing PIO mode to mmc devices but translating this to DMA at the controller level. So, what about adding a callback to the block layer that permits the adding of the drain buffer for the problem devices. The idea is that you'd do this in slave_configure after you find one of these devices. The beauty of doing it in the block layer is that it quietly adds the drain buffer to the end of the sg list, so it automatically gets mapped (and unmapped) without anything unusual having to be done to the scatterlist in drivers/scsi or drivers/ata and without any alteration to the transfer length.
    Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * block: allow queue dma_alignment of zero (Pete Wyckoff, 2008-01-28)
    Let queue_dma_alignment return 0 if it was specifically set to 0. This permits devices with no particular alignment restrictions to use arbitrary user space buffers without copying.
    Signed-off-by: Pete Wyckoff <pw@osc.edu>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
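    The change boils down to trusting an explicit zero instead of treating it as "unset"; roughly:
        static inline int queue_dma_alignment(struct request_queue *q)
        {
                /* previously a dma_alignment of 0 fell back to 511 */
                return q ? q->dma_alignment : 511;
        }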
* | Merge branch 'blk-end-request' of git://git.kernel.dk/linux-2.6-block (Linus Torvalds, 2008-01-28)
    * 'blk-end-request' of git://git.kernel.dk/linux-2.6-block: (30 commits)
      blk_end_request: changing xsysace (take 4)
      blk_end_request: changing ub (take 4)
      blk_end_request: cleanup of request completion (take 4)
      blk_end_request: cleanup 'uptodate' related code (take 4)
      blk_end_request: remove/unexport end_that_request_* (take 4)
      blk_end_request: changing scsi (take 4)
      blk_end_request: add bidi completion interface (take 4)
      blk_end_request: changing ide-cd (take 4)
      blk_end_request: add callback feature (take 4)
      blk_end_request: changing ide normal caller (take 4)
      blk_end_request: changing cpqarray (take 4)
      blk_end_request: changing cciss (take 4)
      blk_end_request: changing ide-scsi (take 4)
      blk_end_request: changing s390 (take 4)
      blk_end_request: changing mmc (take 4)
      blk_end_request: changing i2o_block (take 4)
      blk_end_request: changing viocd (take 4)
      blk_end_request: changing xen-blkfront (take 4)
      blk_end_request: changing viodasd (take 4)
      blk_end_request: changing sx8 (take 4)
      ...
| * | blk_end_request: cleanup 'uptodate' related code (take 4) (Kiyoshi Ueda, 2008-01-28)
    This patch converts 'uptodate' arguments of no longer exported interfaces, end_that_request_first/last, to 'error', and removes internal conversions for it in blk_end_request interfaces. Also, this patch removes the no longer needed end_io_error().
    Cc: Boaz Harrosh <bharrosh@panasas.com>
    Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
    Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * | blk_end_request: remove/unexport end_that_request_* (take 4) (Kiyoshi Ueda, 2008-01-28)
    This patch removes the following functions:
      o end_that_request_first()
      o end_that_request_chunk()
    and stops exporting the functions below:
      o end_that_request_last()
    Cc: Boaz Harrosh <bharrosh@panasas.com>
    Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
    Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * | blk_end_request: add bidi completion interface (take 4) (Kiyoshi Ueda, 2008-01-28)
    This patch adds a variant of the interface, blk_end_bidi_request(), which completes a bidi request. A bidi request must be completed as a whole (both rq and rq->next_rq at once), so the interface has 2 arguments for completion size. As for ->end_io, only rq->end_io is called (rq->next_rq->end_io is not called), so if special completion handling is needed, the handler must be set to rq->end_io. The handler must also take care of freeing next_rq, since the interface doesn't take care of it when rq->end_io is not NULL.
    Cc: Boaz Harrosh <bharrosh@panasas.com>
    Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
    Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * | blk_end_request: add callback feature (take 4) (Kiyoshi Ueda, 2008-01-28)
    This patch adds a variant of the interface, blk_end_request_callback(), which has a driver callback feature. Drivers may need to do special work between end_that_request_first() and end_that_request_last(). For such drivers, blk_end_request_callback() allows them to pass a callback function which is called between end_that_request_first() and end_that_request_last(). This interface is only a fallback for the other blk_end_request interfaces. Drivers should avoid such tricky behaviors and use the other interfaces as much as possible. Currently, only one driver, ide-cd, needs this interface, so it should/will be removed once the driver drops the tricky behavior:
      o ide-cd (cdrom_newpc_intr())
        In PIO mode, cdrom_newpc_intr() needs to defer end_that_request_last() until the device clears DRQ_STAT and raises an interrupt after end_that_request_first(). So end_that_request_first() and end_that_request_last() are called separately in cdrom_newpc_intr(). This means blk_end_request_callback() has to return without completing the request even if there is no leftover in the request. To satisfy that requirement, the callback function has a return value so that drivers can tell blk_end_request_callback() to return without completing the request.
    Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
    Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * | blk_end_request: add/export functions to get request size (take 4) (Kiyoshi Ueda, 2008-01-28)
    This patch adds/exports functions to get the size of request in bytes. They are useful because blk_end_request interfaces take bytes as a completed I/O size instead of sectors.
    Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
    Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
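    A sketch of one of the byte-count helpers (built on the sector/data_len split of that era; details may differ from the committed code):
        unsigned int blk_rq_bytes(struct request *rq)
        {
                /* fs requests track sectors; others carry data_len directly */
                if (blk_fs_request(rq))
                        return rq->hard_nr_sectors << 9;

                return rq->data_len;
        }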
| * | blk_end_request: add new request completion interface (take 4) (Kiyoshi Ueda, 2008-01-28)
    This patch adds 2 new interfaces for request completion:
      o blk_end_request()   : called without queue lock
      o __blk_end_request() : called with queue lock held
    blk_end_request takes 'error' as an argument instead of 'uptodate', which the current end_that_request_* take. The value is used when the bio is completed:
      0   : success
      < 0 : error
    Some device drivers call the generic functions below between end_that_request_{first/chunk} and end_that_request_last():
      o add_disk_randomness()
      o blk_queue_end_tag()
      o blkdev_dequeue_request()
    These are called in the blk_end_request interfaces as part of generic request completion, so all device drivers now get them called. To decide whether to call blkdev_dequeue_request(), blk_end_request uses list_empty(&rq->queuelist) (a blk_queued_rq() macro is added for it). So if drivers use rq->queuelist for their own specific purpose, they must re-initialize it (with list_init() or similar) before calling blk_end_request. (Currently, there is no driver which completes a request without re-initializing the queuelist after using it, so rq->queuelist can be used for the purpose above.)
    "Normal" drivers can be converted to use blk_end_request() in the standard ways shown below.
      a) end_that_request_{chunk/first}
         spin_lock_irqsave()
         (add_disk_randomness(), blk_queue_end_tag(), blkdev_dequeue_request())
         end_that_request_last()
         spin_unlock_irqrestore()
         => blk_end_request()
      b) spin_lock_irqsave()
         end_that_request_{chunk/first}
         (add_disk_randomness(), blk_queue_end_tag(), blkdev_dequeue_request())
         end_that_request_last()
         spin_unlock_irqrestore()
         => spin_lock_irqsave()
            __blk_end_request()
            spin_unlock_irqrestore()
      c) spin_lock_irqsave()
         (add_disk_randomness(), blk_queue_end_tag(), blkdev_dequeue_request())
         end_that_request_last()
         spin_unlock_irqrestore()
         => blk_end_request()
            or
            spin_lock_irqsave()
            __blk_end_request()
            spin_unlock_irqrestore()
    Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
    Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* | io context sharing: preliminary support (Jens Axboe, 2008-01-28)
    Detach task state from ioc, instead keep track of how many processes are accessing the ioc.
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* | ioprio: move io priority from task_struct to io_context (Jens Axboe, 2008-01-28)
    This is where it belongs and then it doesn't take up space for a process that doesn't do IO.
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* ide: remove REQ_TYPE_ATA_CMD (Bartlomiej Zolnierkiewicz, 2008-01-26)
    Based on the earlier work by Tejun Heo. All users are gone so we can finally remove it.
    Cc: Tejun Heo <htejun@gmail.com>
    Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6 (Linus Torvalds, 2008-01-25)
    * git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6: (200 commits)
      [SCSI] usbstorage: use last_sector_bug flag universally
      [SCSI] libsas: abstract STP task status into a function
      [SCSI] ultrastor: clean up inline asm warnings
      [SCSI] aic7xxx: fix firmware build
      [SCSI] aacraid: fib context lock for management ioctls
      [SCSI] ch: remove forward declarations
      [SCSI] ch: fix device minor number management bug
      [SCSI] ch: handle class_device_create failure properly
      [SCSI] NCR5380: fix section mismatch
      [SCSI] sg: fix /proc/scsi/sg/devices when no SCSI devices
      [SCSI] IB/iSER: add logical unit reset support
      [SCSI] don't use __GFP_DMA for sense buffers if not required
      [SCSI] use dynamically allocated sense buffer
      [SCSI] scsi.h: add macro for enclosure bit of inquiry data
      [SCSI] sd: add fix for devices with last sector access problems
      [SCSI] fix pcmcia compile problem
      [SCSI] aacraid: add Voodoo Lite class of cards.
      [SCSI] aacraid: add new driver features flags
      [SCSI] qla2xxx: Update version number to 8.02.00-k7.
      [SCSI] qla2xxx: Issue correct MBC_INITIALIZE_FIRMWARE command.
      ...
| * [SCSI] block: Introduce new blk_queue_update_dma_alignment interface (James Bottomley, 2008-01-11)
    The purpose of this is to allow stacked alignment settings, with the ultimate queue alignment being set to the largest alignment requirement in the stack. The reason for this is so that the SCSI mid-layer can relax the default alignment requirements (which are basically causing a lot of superfluous copying to go on in the SG_IO interface) while allowing transports, devices or HBAs to add stricter limits if they need them.
    Acked-by: Jens Axboe <jens.axboe@oracle.com>
    Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
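    A sketch of the grow-only update, mirroring blk_queue_update_dma_pad further up the log:
        void blk_queue_update_dma_alignment(struct request_queue *q, int mask)
        {
                BUG_ON(mask > PAGE_SIZE);

                /* keep the strictest alignment seen in the stack */
                if (mask > q->dma_alignment)
                        q->dma_alignment = mask;
        }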
* | ide: remove REQ_TYPE_ATA_TASK (Bartlomiej Zolnierkiewicz, 2008-01-25)
    Based on the earlier work by Tejun Heo. All users are gone so we can finally remove it.
    Cc: Tejun Heo <htejun@gmail.com>
    Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
* Add UNPLUG traces to all appropriate places (Alan D. Brunelle, 2007-11-09)
    Added blk_unplug interface, allowing all invocations of unplugs to result in a generated blktrace UNPLUG.
    Signed-off-by: Alan D. Brunelle <Alan.Brunelle@hp.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* [BLOCK] Fix bad sharing of tag busy list on queues with shared tag maps (Jens Axboe, 2007-10-29)
    For the locking to work, only the tag map and tag bit map may be shared (incidentally, I was just explaining this to Nick yesterday, but I apparently didn't review the code well enough myself). But we also share the busy list! The busy_list must be queue private, or we need a block_queue_tag covering lock as well. So we have to move the busy_list to the queue. This'll work fine, and it'll actually also fix a problem with blk_queue_invalidate_tags() which will invalidate tags across all shared queues. This is a bit confusing: the low level driver should call it for each queue separately, since otherwise you cannot kill tags on just a single queue for e.g. a hard drive that stops responding. Since the function has no callers currently, it's not an issue.
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* block: convert blkdev_issue_flush() to use empty barriers (Jens Axboe, 2007-10-16)
    Then we can get rid of ->issue_flush_fn() and all the driver private implementations of that.
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
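    Conceptually the flush becomes an empty barrier bio that is submitted and waited on (a simplified sketch; the end_io helper name is illustrative):
        struct bio *bio = bio_alloc(GFP_KERNEL, 0);     /* zero data pages */

        bio->bi_end_io = bio_end_empty_barrier;         /* completes &wait */
        bio->bi_private = &wait;
        bio->bi_bdev = bdev;
        submit_bio(1 << BIO_RW_BARRIER, bio);

        wait_for_completion(&wait);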
* block: Initial support for data-less (or empty) barrier support (Jens Axboe, 2007-10-16)
    This implements functionality to pass down or insert a barrier in a queue, without having data attached to it. The ->prepare_flush_fn() infrastructure from data barriers is reused to provide this functionality.
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* block: add end_queued_request() and end_dequeued_request() helpers (Jens Axboe, 2007-10-16)
    We can use this helper in the elevator core for BLKPREP_KILL, and it'll also be useful for the empty barrier patch.
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>