author     Jens Axboe <axboe@kernel.dk>  2018-10-24 05:39:36 -0400
committer  Jens Axboe <axboe@kernel.dk>  2018-11-07 15:42:32 -0500
commit     7ca01926463a15f5d2681458643b2453930b873a (patch)
tree       06ea203ffd839dfb7dfe0be9a10287679b898d36 /Documentation/block
parent     2cdf2caecda6cb16c24c6bdd2484d4cec99cfbb3 (diff)
block: remove legacy rq tagging
It's now unused, kill it.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Tested-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'Documentation/block')
-rw-r--r--  Documentation/block/biodoc.txt | 88 --------
1 file changed, 88 deletions(-)
diff --git a/Documentation/block/biodoc.txt b/Documentation/block/biodoc.txt
index 207eca58efaa..ac18b488cb5e 100644
--- a/Documentation/block/biodoc.txt
+++ b/Documentation/block/biodoc.txt
@@ -65,7 +65,6 @@ Description of Contents:
 3.2.3 I/O completion
 3.2.4 Implications for drivers that do not interpret bios (don't handle
 multiple segments)
-3.2.5 Request command tagging
 3.3 I/O submission
 4. The I/O scheduler
 5. Scalability related changes
@@ -708,93 +707,6 @@ is crossed on completion of a transfer. (The end*request* functions should
 be used if only if the request has come down from block/bio path, not for
 direct access requests which only specify rq->buffer without a valid rq->bio)
 
-3.2.5 Generic request command tagging
-
-3.2.5.1 Tag helpers
-
-Block now offers some simple generic functionality to help support command
-queueing (typically known as tagged command queueing), ie manage more than
-one outstanding command on a queue at any given time.
-
-	blk_queue_init_tags(struct request_queue *q, int depth)
-
-	Initialize internal command tagging structures for a maximum
-	depth of 'depth'.
-
-	blk_queue_free_tags((struct request_queue *q)
-
-	Teardown tag info associated with the queue. This will be done
-	automatically by block if blk_queue_cleanup() is called on a queue
-	that is using tagging.
-
-The above are initialization and exit management, the main helpers during
-normal operations are:
-
-	blk_queue_start_tag(struct request_queue *q, struct request *rq)
-
-	Start tagged operation for this request. A free tag number between
-	0 and 'depth' is assigned to the request (rq->tag holds this number),
-	and 'rq' is added to the internal tag management. If the maximum depth
-	for this queue is already achieved (or if the tag wasn't started for
-	some other reason), 1 is returned. Otherwise 0 is returned.
-
-	blk_queue_end_tag(struct request_queue *q, struct request *rq)
-
-	End tagged operation on this request. 'rq' is removed from the internal
-	book keeping structures.
-
-To minimize struct request and queue overhead, the tag helpers utilize some
-of the same request members that are used for normal request queue management.
-This means that a request cannot both be an active tag and be on the queue
-list at the same time. blk_queue_start_tag() will remove the request, but
-the driver must remember to call blk_queue_end_tag() before signalling
-completion of the request to the block layer. This means ending tag
-operations before calling end_that_request_last()! For an example of a user
-of these helpers, see the IDE tagged command queueing support.
-
-3.2.5.2 Tag info
-
-Some block functions exist to query current tag status or to go from a
-tag number to the associated request. These are, in no particular order:
-
-	blk_queue_tagged(q)
-
-	Returns 1 if the queue 'q' is using tagging, 0 if not.
-
-	blk_queue_tag_request(q, tag)
-
-	Returns a pointer to the request associated with tag 'tag'.
-
-	blk_queue_tag_depth(q)
-
-	Return current queue depth.
-
-	blk_queue_tag_queue(q)
-
-	Returns 1 if the queue can accept a new queued command, 0 if we are
-	at the maximum depth already.
-
-	blk_queue_rq_tagged(rq)
-
-	Returns 1 if the request 'rq' is tagged.
-
-3.2.5.2 Internal structure
-
-Internally, block manages tags in the blk_queue_tag structure:
-
-	struct blk_queue_tag {
-		struct request **tag_index;	/* array or pointers to rq */
-		unsigned long *tag_map;		/* bitmap of free tags */
-		struct list_head busy_list;	/* fifo list of busy tags */
-		int busy;			/* queue depth */
-		int max_depth;			/* max queue depth */
-	};
-
-Most of the above is simple and straight forward, however busy_list may need
-a bit of explaining. Normally we don't care too much about request ordering,
-but in the event of any barrier requests in the tag queue we need to ensure
-that requests are restarted in the order they were queue.
-
 3.3 I/O Submission
 
 The routine submit_bio() is used to submit a single io. Higher level i/o