author     Tejun Heo <tj@kernel.org>       2011-12-13 18:33:42 -0500
committer  Jens Axboe <axboe@kernel.dk>    2011-12-13 18:33:42 -0500
commit     f1f8cc94651738b418ba54c039df536303b91704 (patch)
tree       eb8bc5a33dec104ab32a935a5bb1e1da2e7cdd34 /include/linux/iocontext.h
parent     9b84cacd013996f244d85b3d873287c2a8f88658 (diff)
block, cfq: move icq creation and rq->elv.icq association to block core
Now the block layer knows everything necessary to create and associate icq's with requests. Move ioc_create_icq() to blk-ioc.c and update get_request() such that, if elevator_type->icq_size is set, requests are automatically associated with their matching icq's before elv_set_request(). The io_context reference is also managed by block core on request alloc/free.

* Only ioprio/cgroup changed handling remains from cfq_get_cic(); it is collapsed into cfq_set_request().

* This removes queue kicking on icq allocation failure (for now). As icq allocation failure is rare and the only effect queue kicking achieved was possibly accelerating queue processing, this change shouldn't be noticeable.

  There is a larger underlying problem, though. Unlike request allocation, icq allocation is not guaranteed to succeed eventually after retries. The number of icqs is unbounded, so a mempool can't be the solution either. This effectively adds an allocation dependency on the memory free path and thus the possibility of deadlock. This usually wouldn't happen, because icq allocation is not a hot path and, even when the condition triggers, it's highly unlikely that none of the writeback workers already has an icq. However, it is still possible, especially if the elevator is being switched under high memory pressure, so we had better get it fixed. Probably the only solution is to bypass the elevator and append to the dispatch queue on any elevator allocation failure.

* A comment is added to explain how icq's are managed and synchronized.

This completes the cleanup of the io_context interface.

Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
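To make the new contract concrete, here is a minimal, hypothetical sketch of an elevator-side set_request hook under this scheme. It reuses the snail_io_cq example struct from the header comment added by this patch (see the diff below); the function name snail_set_request is invented for illustration, and the three-argument elevator_set_req_fn signature is assumed for this kernel version.

	/* Hypothetical example, not part of this patch. */
	static int snail_set_request(struct request_queue *q, struct request *rq,
				     gfp_t gfp_mask)
	{
		/*
		 * Because snail_elv_type.icq_size is non-zero, get_request()
		 * has already created or looked up the icq and set rq->elv.icq
		 * before this hook runs; no allocation or ioc handling here.
		 */
		struct snail_io_cq *sic = container_of(rq->elv.icq,
						       struct snail_io_cq, icq);

		sic->poke_snail++;	/* per ioc-q private state */
		return 0;
	}

Creation and destruction stay in block core: the elevator only observes them through elevator_init_icq_fn()/elevator_exit_icq_fn(), as the header comment below spells out.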
Diffstat (limited to 'include/linux/iocontext.h')
-rw-r--r--  include/linux/iocontext.h  59
1 file changed, 59 insertions, 0 deletions
diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
index ac390a34c0e7..7e1371c4bccf 100644
--- a/include/linux/iocontext.h
+++ b/include/linux/iocontext.h
@@ -10,6 +10,65 @@ enum {
 	ICQ_CGROUP_CHANGED,
 };
 
+/*
+ * An io_cq (icq) is association between an io_context (ioc) and a
+ * request_queue (q).  This is used by elevators which need to track
+ * information per ioc - q pair.
+ *
+ * Elevator can request use of icq by setting elevator_type->icq_size and
+ * ->icq_align.  Both size and align must be larger than that of struct
+ * io_cq and elevator can use the tail area for private information.  The
+ * recommended way to do this is defining a struct which contains io_cq as
+ * the first member followed by private members and using its size and
+ * align.  For example,
+ *
+ *	struct snail_io_cq {
+ *		struct io_cq	icq;
+ *		int		poke_snail;
+ *		int		feed_snail;
+ *	};
+ *
+ *	struct elevator_type snail_elv_type {
+ *		.ops =		{ ... },
+ *		.icq_size =	sizeof(struct snail_io_cq),
+ *		.icq_align =	__alignof__(struct snail_io_cq),
+ *		...
+ *	};
+ *
+ * If icq_size is set, block core will manage icq's.  All requests will
+ * have its ->elv.icq field set before elevator_ops->elevator_set_req_fn()
+ * is called and be holding a reference to the associated io_context.
+ *
+ * Whenever a new icq is created, elevator_ops->elevator_init_icq_fn() is
+ * called and, on destruction, ->elevator_exit_icq_fn().  Both functions
+ * are called with both the associated io_context and queue locks held.
+ *
+ * Elevator is allowed to lookup icq using ioc_lookup_icq() while holding
+ * queue lock but the returned icq is valid only until the queue lock is
+ * released.  Elevators can not and should not try to create or destroy
+ * icq's.
+ *
+ * As icq's are linked from both ioc and q, the locking rules are a bit
+ * complex.
+ *
+ * - ioc lock nests inside q lock.
+ *
+ * - ioc->icq_list and icq->ioc_node are protected by ioc lock.
+ *   q->icq_list and icq->q_node by q lock.
+ *
+ * - ioc->icq_tree and ioc->icq_hint are protected by ioc lock, while icq
+ *   itself is protected by q lock.  However, both the indexes and icq
+ *   itself are also RCU managed and lookup can be performed holding only
+ *   the q lock.
+ *
+ * - icq's are not reference counted.  They are destroyed when either the
+ *   ioc or q goes away.  Each request with icq set holds an extra
+ *   reference to ioc to ensure it stays until the request is completed.
+ *
+ * - Linking and unlinking icq's are performed while holding both ioc and q
+ *   locks.  Due to the lock ordering, q exit is simple but ioc exit
+ *   requires reverse-order double lock dance.
+ */
 struct io_cq {
 	struct request_queue *q;
 	struct io_context *ioc;