author		Mauro Carvalho Chehab <mchehab@s-opensource.com>	2017-05-14 13:23:08 -0400
committer	Jonathan Corbet <corbet@lwn.net>	2017-07-14 15:51:37 -0400
commit		e2862b25dcce61e1d19bfb5cf1840415ef3664e6 (patch)
tree		f69aa26e14c5fb49621cecc582fce32f20d3a2a8 /Documentation/hwspinlock.txt
parent		440e4f6d293c5e599013a04a8e3a8ae375bbaf71 (diff)
hwspinlock.txt: standardize document format
Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markup so that it can be parsed by Sphinx:

- adjust title markups;
- remove explicit numeration from titles;
- mark literal blocks as such;
- replace _foo_ with **foo** for emphasis;
- adjust whitespace and add blank lines where needed.
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
Diffstat (limited to 'Documentation/hwspinlock.txt')
-rw-r--r--	Documentation/hwspinlock.txt	527
1 file changed, 307 insertions(+), 220 deletions(-)
diff --git a/Documentation/hwspinlock.txt b/Documentation/hwspinlock.txt
index 61c1ee98e59f..ed640a278185 100644
--- a/Documentation/hwspinlock.txt
+++ b/Documentation/hwspinlock.txt

@@ -1,6 +1,9 @@

===========================
Hardware Spinlock Framework
===========================

Introduction
============

Hardware spinlock modules provide hardware assistance for synchronization
and mutual exclusion between heterogeneous processors and those not operating
@@ -32,286 +35,370 @@ structure).

A common hwspinlock interface makes it possible to have generic, platform-
independent, drivers.

User API
========

::

  struct hwspinlock *hwspin_lock_request(void);

Dynamically assign an hwspinlock and return its address, or NULL
in case an unused hwspinlock isn't available. Users of this
API will usually want to communicate the lock's id to the remote core
before it can be used to achieve synchronization.

Should be called from a process context (might sleep).

::

  struct hwspinlock *hwspin_lock_request_specific(unsigned int id);

Assign a specific hwspinlock id and return its address, or NULL
if that hwspinlock is already in use. Usually board code will
be calling this function in order to reserve specific hwspinlock
ids for predefined purposes.

Should be called from a process context (might sleep).

::

  int of_hwspin_lock_get_id(struct device_node *np, int index);

Retrieve the global lock id for an OF phandle-based specific lock.
This function provides a means for DT users of a hwspinlock module
to get the global lock id of a specific hwspinlock, so that it can
be requested using the normal hwspin_lock_request_specific() API.

The function returns a lock id number on success, -EPROBE_DEFER if
the hwspinlock device is not yet registered with the core, or other
error values.

Should be called from a process context (might sleep).
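Putting of_hwspin_lock_get_id() together with the request API, a DT consumer
might proceed along these lines (a sketch only: the helper name, the phandle
index 0, and the error handling are illustrative, not part of this document)::

  /*
   * Illustrative DT consumer sketch: assumes the device node carries a
   * hwlock phandle as described by the hwspinlock DT bindings. The
   * helper name and index are hypothetical.
   */
  #include <linux/hwspinlock.h>
  #include <linux/of.h>

  static struct hwspinlock *get_my_hwlock(struct device_node *np)
  {
	int id;

	/* translate the phandle at index 0 into a global lock id */
	id = of_hwspin_lock_get_id(np, 0);
	if (id < 0)
		return NULL;	/* may be -EPROBE_DEFER: core not ready yet */

	/* reserve that specific lock; NULL if it is already in use */
	return hwspin_lock_request_specific(id);
  }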
::

  int hwspin_lock_free(struct hwspinlock *hwlock);

Free a previously-assigned hwspinlock; returns 0 on success, or an
appropriate error code on failure (e.g. -EINVAL if the hwspinlock
is already free).

Should be called from a process context (might sleep).

::

  int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled so
the caller must not sleep, and is advised to release the hwspinlock as
soon as possible, in order to minimize remote cores polling on the
hardware interconnect.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_lock_timeout_irq(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption and the local
interrupts are disabled, so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_lock_timeout_irqsave(struct hwspinlock *hwlock, unsigned int to,
                                  unsigned long *flags);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled,
local interrupts are disabled and their previous state is saved at the
given flags placeholder. The caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).

The function will never sleep.
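The irqsave variant pairs with hwspin_unlock_irqrestore() so that the saved
interrupt state is restored exactly as it was. A sketch (the function name,
timeout value, and error handling are illustrative assumptions)::

  /*
   * Illustrative sketch: MY_TIMEOUT_MS and update_shared_state() are
   * hypothetical. Shows the irqsave/irqrestore pairing; the critical
   * section must not sleep.
   */
  #include <linux/hwspinlock.h>

  #define MY_TIMEOUT_MS	100	/* illustrative timeout */

  static int update_shared_state(struct hwspinlock *hwlock)
  {
	unsigned long flags;
	int ret;

	/*
	 * Spin for up to MY_TIMEOUT_MS; on success, preemption and local
	 * interrupts are disabled and the previous IRQ state is in flags.
	 */
	ret = hwspin_lock_timeout_irqsave(hwlock, MY_TIMEOUT_MS, &flags);
	if (ret)
		return ret;	/* e.g. -ETIMEDOUT */

	/* critical section: touch the shared state, do NOT sleep */

	/* reenable preemption and restore the saved interrupt state */
	hwspin_unlock_irqrestore(hwlock, &flags);
	return 0;
  }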
::

  int hwspin_trylock(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled so
the caller must not sleep, and is advised to release the hwspinlock as soon
as possible, in order to minimize remote cores polling on the hardware
interconnect.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_irq(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption and the local
interrupts are disabled so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).

The function will never sleep.

::

  int hwspin_trylock_irqsave(struct hwspinlock *hwlock, unsigned long *flags);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled,
the local interrupts are disabled and their previous state is saved
at the given flags placeholder. The caller must not sleep, and is advised
to release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  void hwspin_unlock(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock. Always succeeds, and can be called
from any context (the function never sleeps).

.. note::

  code should **never** unlock an hwspinlock which is already unlocked
  (there is no protection against this).

::

  void hwspin_unlock_irq(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock and enable local interrupts.
The caller should **never** unlock an hwspinlock which is already unlocked.

Doing so is considered a bug (there is no protection against this).
Upon a successful return from this function, preemption and local
interrupts are enabled. This function will never sleep.

::

  void
  hwspin_unlock_irqrestore(struct hwspinlock *hwlock, unsigned long *flags);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
Upon a successful return from this function, preemption is reenabled,
and the state of the local interrupts is restored to the state saved at
the given flags. This function will never sleep.
::

  int hwspin_lock_get_id(struct hwspinlock *hwlock);

Retrieve the id number of a given hwspinlock. This is needed when an
hwspinlock is dynamically assigned: before it can be used to achieve
mutual exclusion with a remote cpu, the id number should be communicated
to the remote task with which we want to synchronize.

Returns the hwspinlock id number, or -EINVAL if hwlock is null.
Typical usage
=============

::

  #include <linux/hwspinlock.h>
  #include <linux/err.h>

  int hwspinlock_example1(void)
  {
	struct hwspinlock *hwlock;
	int id;
	int ret;

	/* dynamically assign a hwspinlock */
	hwlock = hwspin_lock_request();
	if (!hwlock)
		...

	id = hwspin_lock_get_id(hwlock);
	/* probably need to communicate id to a remote processor now */

	/* take the lock, spin for 1 sec if it's already taken */
	ret = hwspin_lock_timeout(hwlock, 1000);
	if (ret)
		...

	/*
	 * we took the lock, do our thing now, but do NOT sleep
	 */

	/* release the lock */
	hwspin_unlock(hwlock);

	/* free the lock */
	ret = hwspin_lock_free(hwlock);
	if (ret)
		...

	return ret;
  }

  int hwspinlock_example2(void)
  {
	struct hwspinlock *hwlock;
	int ret;

	/*
	 * assign a specific hwspinlock id - this should be called early
	 * by board init code.
	 */
	hwlock = hwspin_lock_request_specific(PREDEFINED_LOCK_ID);
	if (!hwlock)
		...

	/* try to take it, but don't spin on it; 0 means we got it */
	ret = hwspin_trylock(hwlock);
	if (ret) {
		pr_info("lock is already taken\n");
		return -EBUSY;
	}

	/*
	 * we took the lock, do our thing now, but do NOT sleep
	 */

	/* release the lock */
	hwspin_unlock(hwlock);

	/* free the lock */
	ret = hwspin_lock_free(hwlock);
	if (ret)
		...

	return ret;
  }
API for implementors
====================

::

  int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
		const struct hwspinlock_ops *ops, int base_id, int num_locks);

To be called from the underlying platform-specific implementation, in
order to register a new hwspinlock device (which is usually a bank of
numerous locks). Should be called from a process context (this function
might sleep).

Returns 0 on success, or an appropriate error code on failure.

::

  int hwspin_lock_unregister(struct hwspinlock_device *bank);

To be called from the underlying vendor-specific implementation, in order
to unregister an hwspinlock device (which is usually a bank of numerous
locks).

Should be called from a process context (this function might sleep).

Returns 0 on success, or an appropriate error code on failure (e.g.
if an hwspinlock in the bank is still in use).
Important structs
=================

struct hwspinlock_device is a device which usually contains a bank
of hardware locks. It is registered by the underlying hwspinlock
implementation using the hwspin_lock_register() API.

::

  /**
   * struct hwspinlock_device - a device which usually spans numerous hwspinlocks
   * @dev: underlying device, will be used to invoke runtime PM api
   * @ops: platform-specific hwspinlock handlers
   * @base_id: id index of the first lock in this device
   * @num_locks: number of locks in this device
   * @lock: dynamically allocated array of 'struct hwspinlock'
   */
  struct hwspinlock_device {
	struct device *dev;
	const struct hwspinlock_ops *ops;
	int base_id;
	int num_locks;
	struct hwspinlock lock[0];
  };
struct hwspinlock_device contains an array of hwspinlock structs, each
of which represents a single hardware lock::

  /**
   * struct hwspinlock - this struct represents a single hwspinlock instance
   * @bank: the hwspinlock_device structure which owns this lock
   * @lock: initialized and used by hwspinlock core
   * @priv: private data, owned by the underlying platform-specific hwspinlock drv
   */
  struct hwspinlock {
	struct hwspinlock_device *bank;
	spinlock_t lock;
	void *priv;
  };

When registering a bank of locks, the hwspinlock driver only needs to
set the priv members of the locks. The rest of the members are set and
initialized by the hwspinlock core itself.
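A registration path for such a bank might look like the following sketch
(the vendor_* names, the lock count, and the per-lock register lookup are
hypothetical; a real driver would also map its registers and set up any
runtime PM it needs)::

  /*
   * Hypothetical probe sketch: vendor_hwspinlock_ops, VENDOR_NUM_LOCKS
   * and vendor_lock_reg() are illustrative names. Only priv is filled
   * in by the driver; the core initializes the other members.
   */
  #include <linux/hwspinlock.h>
  #include <linux/platform_device.h>
  #include <linux/slab.h>

  #define VENDOR_NUM_LOCKS	32	/* illustrative bank size */

  static int vendor_hwspinlock_probe(struct platform_device *pdev)
  {
	struct hwspinlock_device *bank;
	int i;

	bank = devm_kzalloc(&pdev->dev, sizeof(*bank) +
			    VENDOR_NUM_LOCKS * sizeof(struct hwspinlock),
			    GFP_KERNEL);
	if (!bank)
		return -ENOMEM;

	/* the driver only fills in priv; the core sets up the rest */
	for (i = 0; i < VENDOR_NUM_LOCKS; i++)
		bank->lock[i].priv = vendor_lock_reg(pdev, i);	/* hypothetical */

	return hwspin_lock_register(bank, &pdev->dev, &vendor_hwspinlock_ops,
				    0, VENDOR_NUM_LOCKS);
  }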
Implementation callbacks
========================

There are three possible callbacks defined in 'struct hwspinlock_ops'::

  struct hwspinlock_ops {
	int (*trylock)(struct hwspinlock *lock);
	void (*unlock)(struct hwspinlock *lock);
	void (*relax)(struct hwspinlock *lock);
  };

The first two callbacks are mandatory:

The ->trylock() callback should make a single attempt to take the lock, and
return 0 on failure and 1 on success. This callback may **not** sleep.

The ->unlock() callback releases the lock. It always succeeds, and it, too,
may **not** sleep.

The ->relax() callback is optional. It is called by hwspinlock core while
spinning on a lock, and can be used by the underlying implementation to force
a delay between two successive invocations of ->trylock(). It may **not** sleep.
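As a concrete (hypothetical) illustration, an implementation for a bank where
each lock is a memory-mapped register that reads 0 when successfully taken
and is released by writing 0 might provide the callbacks like this; the
register semantics and names are assumptions, not any specific vendor's
hardware::

  /*
   * Hypothetical ops sketch: assumes each lock's priv points at a
   * memory-mapped register that reads 0 exactly when the lock attempt
   * succeeded, and is released by writing 0. Real register semantics
   * vary by vendor.
   */
  #include <linux/hwspinlock.h>
  #include <linux/io.h>
  #include <linux/delay.h>

  static int vendor_hwspinlock_trylock(struct hwspinlock *lock)
  {
	void __iomem *lock_reg = lock->priv;

	/* single attempt: reading 0 means we now own the lock */
	return readl(lock_reg) == 0;
  }

  static void vendor_hwspinlock_unlock(struct hwspinlock *lock)
  {
	void __iomem *lock_reg = lock->priv;

	writel(0, lock_reg);	/* release; always succeeds */
  }

  static void vendor_hwspinlock_relax(struct hwspinlock *lock)
  {
	ndelay(50);	/* back off between ->trylock() attempts; must not sleep */
  }

  static const struct hwspinlock_ops vendor_hwspinlock_ops = {
	.trylock	= vendor_hwspinlock_trylock,
	.unlock		= vendor_hwspinlock_unlock,
	.relax		= vendor_hwspinlock_relax,
  };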