Diffstat (limited to 'drivers/block/paride/Transition-notes')
-rw-r--r--  drivers/block/paride/Transition-notes  128
1 files changed, 128 insertions, 0 deletions

diff --git a/drivers/block/paride/Transition-notes b/drivers/block/paride/Transition-notes
new file mode 100644
index 000000000000..70374907c020
--- /dev/null
+++ b/drivers/block/paride/Transition-notes
@@ -0,0 +1,128 @@
Lemma 1:
If ps_tq is scheduled, ps_tq_active is 1.  ps_tq_int() can be called
only when ps_tq_active is 1.
Proof: All assignments to ps_tq_active and all scheduling of ps_tq happen
under ps_spinlock.  There are three places where that can happen:
one in ps_set_intr() (A) and two in ps_tq_int() (B and C).
Consider the sequence of these events.  A can not be preceded by
anything except B, since it is guarded by if (!ps_tq_active) under
ps_spinlock.  C is always preceded by B, since we can't reach it
other than through B and we don't drop ps_spinlock between them.
IOW, the sequence is A?(BA|BC|B)*.  OTOH, the number of B's can not exceed
the sum of the numbers of A's and C's, since each call of ps_tq_int() is
the result of a ps_tq execution.  Therefore, the sequence starts with
A and each B is preceded by either A or C.  The moments when we enter
ps_tq_int() are sandwiched between {A,C} and B in that sequence,
since at any time the number of B's can not exceed the number of these
moments which, in turn, can not exceed the number of A's and C's.
In other words, the sequence of events is (A or C sets ps_tq_active to
1 and schedules ps_tq, ps_tq is executed, ps_tq_int() is entered,
B resets ps_tq_active)*.
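For reference, here is a minimal sketch of the pattern the proof talks
about, reconstructed from the description above rather than copied from
pseudo.h; the timeout/nice handling of the real code is omitted and the
modern workqueue API is assumed.  A, B and C mark the three events:

#include <linux/spinlock.h>
#include <linux/workqueue.h>

static void ps_tq_int(struct work_struct *work);

static DEFINE_SPINLOCK(ps_spinlock);
static DECLARE_WORK(ps_tq, ps_tq_int);
static int ps_tq_active;
static void (*ps_continuation)(void);
static int (*ps_ready)(void);

static void ps_set_intr(void (*continuation)(void), int (*ready)(void))
{
        unsigned long flags;

        spin_lock_irqsave(&ps_spinlock, flags);
        ps_continuation = continuation;
        ps_ready = ready;
        if (!ps_tq_active) {
                ps_tq_active = 1;               /* A */
                schedule_work(&ps_tq);
        }
        spin_unlock_irqrestore(&ps_spinlock, flags);
}

static void ps_tq_int(struct work_struct *work)
{
        void (*con)(void);
        unsigned long flags;

        spin_lock_irqsave(&ps_spinlock, flags);
        con = ps_continuation;
        ps_tq_active = 0;                       /* B */
        if (!con) {
                spin_unlock_irqrestore(&ps_spinlock, flags);
                return;
        }
        if (!ps_ready || ps_ready()) {
                /* we can make progress: drop the lock and run the continuation */
                spin_unlock_irqrestore(&ps_spinlock, flags);
                con();
                return;
        }
        ps_tq_active = 1;                       /* C: not ready yet, poll again */
        schedule_work(&ps_tq);
        spin_unlock_irqrestore(&ps_spinlock, flags);
}

Every transition of ps_tq_active and every scheduling of ps_tq above sits
under ps_spinlock, which is all that the proof of Lemma 1 uses.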

Consider the following area:
* in do_pd_request1(): to the calls of pi_do_claimed() and the return in
  the case when pd_req is NULL.
* in next_request(): to the call of do_pd_request1()
* in do_pd_read(): to the call of ps_set_intr()
* in do_pd_read_start(): to the calls of pi_do_claimed(), next_request()
  and ps_set_intr()
* in do_pd_read_drq(): to the calls of pi_do_claimed() and next_request()
* in do_pd_write(): to the call of ps_set_intr()
* in do_pd_write_start(): to the calls of pi_do_claimed(), next_request()
  and ps_set_intr()
* in do_pd_write_done(): to the calls of pi_do_claimed() and next_request()
* in ps_set_intr(): the check of ps_tq_active and the scheduling of ps_tq
  if ps_tq_active was 0.
* in ps_tq_int(): from the moment when we take ps_spinlock to the return,
  the call of con() or the scheduling of ps_tq.
* in pi_schedule_claimed(), when called from pi_do_claimed() called from
  pd.c: everything until returning 1 or setting ->claim_cont on the path
  that returns 0.
* in pi_do_claimed(), when called from pd.c: everything until the call of
  pi_schedule_claimed(), plus everything until the call of cont() if
  pi_schedule_claimed() has returned 1.
* in pi_wake_up(), when called for a PIA that belongs to pd.c: everything
  from the moment when pi_spinlock has been acquired (the claim/wake-up
  handoff described in these last three entries is sketched right after
  this list).
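A rough sketch of that claim/wake-up handoff, again reconstructed from the
prose above: pi_try_claim() is a hypothetical helper standing in for
whatever immediate-claim test the real code uses, and the signatures are
not guaranteed to match paride.c.

#include <linux/spinlock.h>

#include "paride.h"                     /* assumed to provide the PIA typedef */

static DEFINE_SPINLOCK(pi_spinlock);

/* Hypothetical helper: nonzero if the port could be claimed right away. */
static int pi_try_claim(PIA *pi);

static int pi_schedule_claimed(PIA *pi, void (*cont)(void))
{
        unsigned long flags;

        spin_lock_irqsave(&pi_spinlock, flags);
        if (!pi_try_claim(pi)) {
                pi->claim_cont = cont;  /* picked up later by pi_wake_up() */
                spin_unlock_irqrestore(&pi_spinlock, flags);
                return 0;
        }
        spin_unlock_irqrestore(&pi_spinlock, flags);
        return 1;
}

static void pi_do_claimed(PIA *pi, void (*cont)(void))
{
        if (pi_schedule_claimed(pi, cont))
                cont();                 /* claimed at once: continue immediately */
}

/* Called when the port becomes available again. */
static void pi_wake_up(void *p)
{
        PIA *pi = p;
        void (*cont)(void) = NULL;
        unsigned long flags;

        spin_lock_irqsave(&pi_spinlock, flags);
        if (pi->claim_cont && pi_try_claim(pi)) {
                cont = pi->claim_cont;
                pi->claim_cont = NULL;
        }
        spin_unlock_irqrestore(&pi_spinlock, flags);
        if (cont)
                cont();                 /* re-enters the pd.c state machine */
}

The point is that ->claim_cont is only written and consumed under
pi_spinlock; that is what part (4) of Lemma 2 below captures.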

Lemma 2:
1) At any time at most one thread of execution can be in that area or
   be preempted there.
2) When there is such a thread, pd_busy is set or pd_lock is held by
   that thread.
3) When there is such a thread, ps_tq_active is 0 or ps_spinlock is
   held by that thread.
4) When there is such a thread, all PIA belonging to pd.c have NULL
   ->claim_cont or pi_spinlock is held by the thread in question.

Proof: consider the first moment when the above is not true.

(1) can become false only if some thread enters the area while another
    is there.
    a) do_pd_request1() can be called from next_request() or do_pd_request().
       In the first case the thread was already in the area.  In the second,
       the thread was holding pd_lock and found pd_busy not set (see the
       sketch of that gate after this list), which would mean that (2) was
       already false.
    b) ps_set_intr() and pi_schedule_claimed() can be called only from the
       area.
    c) pi_do_claimed() is called by pd.c only from the area.
    d) ps_tq_int() can enter the area only when the thread is holding
       ps_spinlock and ps_tq_active is 1 (due to Lemma 1).  That means that
       (3) was already false.
    e) do_pd_{read,write}* can be called only from the area.  The only case
       that needs consideration is the call from pi_wake_up(), and there we
       would have to have been called for a PIA that got its ->claim_cont
       from pd.c.  That could happen only if pi_do_claimed() had been called
       from pd.c for that PIA, which happens only for PIA belonging to pd.c;
       and pi_wake_up() for such a PIA is itself in the area.
    f) pi_wake_up() can enter the area only when the thread is holding
       pi_spinlock and ->claim_cont is non-NULL for a PIA belonging to
       pd.c.  That means that (4) was already false.
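For reference, the pd_lock/pd_busy gate that case (a) relies on looks
roughly like the sketch below.  It assumes that the block layer invokes
do_pd_request() with pd_lock (the queue lock) already held; the prototype
and the placement of the pd_busy assignment are illustrative and not
claimed to match pd.c line for line.

#include <linux/blkdev.h>

static int pd_busy;                     /* nonzero while the state machine runs */

static void do_pd_request1(void);       /* the state machine proper, i.e. the area */

/* Assumed to be called by the block layer with pd_lock (the queue lock) held. */
static void do_pd_request(struct request_queue *q)
{
        if (pd_busy)                    /* someone is already in the area */
                return;
        pd_busy = 1;                    /* claimed under pd_lock, keeping (2.2) true */
        do_pd_request1();               /* enter the area */
}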

(2) can become false only when pd_lock is released by the thread in question.
    Indeed, pd_busy is reset only in the area and the thread that resets
    it is holding pd_lock.  The only place within the area where we
    release pd_lock is in pd_next_buf() (called from within the area).
    But that code does not reset pd_busy, so pd_busy would have to be
    0 when pd_next_buf() acquired pd_lock.  If it became 0 while
    we were acquiring the lock, (1) would already be false, since
    the thread that had reset it would be in the area simultaneously.
    If it was 0 before we tried to acquire pd_lock, (2) would already
    be false.

For similar reasons, (3) can become false only when ps_spinlock is released
by the thread in question.  However, all such places within the area are
right after resetting ps_tq_active to 0.

(4) is handled the same way - all places where we release pi_spinlock within
    the area are either after resetting ->claim_cont to NULL while holding
    pi_spinlock, or after not touching ->claim_cont at all since acquiring
    pi_spinlock, also in the area.  The only place where ->claim_cont is made
    non-NULL is in the area, under pi_spinlock, and we do not release
    pi_spinlock until after leaving the area.

QED.

Corollary 1: ps_tq_active can be killed.  Indeed, the only place where we
check its value is in ps_set_intr(), and if it had been non-zero at that
point, we would have violated either (2.1) (if it was set while ps_set_intr()
was acquiring ps_spinlock) or (2.3) (if it was set when we started to
acquire ps_spinlock).

Corollary 2: ps_spinlock can be killed.  Indeed, Lemma 1 and Lemma 2 show
that the only possible contention is between the scheduling of ps_tq followed
by the immediate release of the spinlock and the beginning of the execution
of ps_tq on another CPU.

Corollary 3: the assignments to pd_busy in do_pd_read_start() and
do_pd_write_start() can be killed.  Indeed, those functions run within the
area without holding pd_lock, so by (2.2) pd_busy is already 1 here.

Corollary 4: in ps_tq_int() the uses of con can be replaced with uses of
ps_continuation, since the latter is changed only from the area.
We don't need to reset it to NULL, since we are guaranteed that there
will be a call of ps_set_intr() before we look at ps_continuation again.
We can remove the check for ps_continuation being NULL for the same
reason - the value is guaranteed to be set by the last ps_set_intr() and
we never pass it NULL.  The assignments at the beginning of ps_set_intr()
can be moved to the callers, as long as they remain within the area.
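
Putting Corollaries 1, 2 and the first part of Corollary 4 together, the
pseudo-interrupt helpers could end up looking roughly like this.  The
sketch is only an illustration of the intended end state, not the actual
post-conversion pseudo.h; the timeout/nice handling is again omitted and
the modern workqueue API is assumed:

#include <linux/workqueue.h>

/* No ps_tq_active, no ps_spinlock, no NULL check on ps_continuation. */
static void ps_tq_int(struct work_struct *work);

static DECLARE_WORK(ps_tq, ps_tq_int);
static void (*ps_continuation)(void);
static int (*ps_ready)(void);

static void ps_set_intr(void (*continuation)(void), int (*ready)(void))
{
        /* The caller is inside the area, so nobody can race with these stores. */
        ps_continuation = continuation;
        ps_ready = ready;
        schedule_work(&ps_tq);
}

static void ps_tq_int(struct work_struct *work)
{
        /* Lemma 2 guarantees ps_continuation was set by the last ps_set_intr()
         * and cannot change under us, so it is used directly instead of con. */
        if (!ps_ready || ps_ready()) {
                ps_continuation();
                return;
        }
        schedule_work(&ps_tq);          /* not ready yet: poll again */
}

Moving the stores out of ps_set_intr() into its callers, as the last
sentence of Corollary 4 suggests, is safe for the same reason the stores
are safe here: the callers are in the area, so nothing else can observe
the intermediate state.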