author     Mauro Carvalho Chehab <mchehab@s-opensource.com>   2017-05-14 13:23:08 -0400
committer  Jonathan Corbet <corbet@lwn.net>                   2017-07-14 15:51:37 -0400
commit     e2862b25dcce61e1d19bfb5cf1840415ef3664e6 (patch)
tree       f69aa26e14c5fb49621cecc582fce32f20d3a2a8 /Documentation/hwspinlock.txt
parent     440e4f6d293c5e599013a04a8e3a8ae375bbaf71 (diff)

hwspinlock.txt: standardize document format

Each text file under Documentation follows a different format. Some don't
even have titles! Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- Adjust title markups;
- remove explicit numeration from titles;
- mark literal blocks as such;
- replace _foo_ by **foo** for emphasis;
- adjust whitespaces and add blank lines where needed.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>

Diffstat (limited to 'Documentation/hwspinlock.txt'):

 -rw-r--r--  Documentation/hwspinlock.txt  |  527

 1 file changed, 307 insertions(+), 220 deletions(-)
diff --git a/Documentation/hwspinlock.txt b/Documentation/hwspinlock.txt
index 61c1ee98e59f..ed640a278185 100644
--- a/Documentation/hwspinlock.txt
+++ b/Documentation/hwspinlock.txt
@@ -1,6 +1,9 @@
1===========================
1Hardware Spinlock Framework 2Hardware Spinlock Framework
3===========================
2 4
31. Introduction 5Introduction
6============
4 7
5Hardware spinlock modules provide hardware assistance for synchronization 8Hardware spinlock modules provide hardware assistance for synchronization
6and mutual exclusion between heterogeneous processors and those not operating 9and mutual exclusion between heterogeneous processors and those not operating
@@ -32,286 +35,370 @@ structure).
A common hwspinlock interface makes it possible to have generic,
platform-independent drivers.

User API
========

::

  struct hwspinlock *hwspin_lock_request(void);

Dynamically assign an hwspinlock and return its address, or NULL
in case an unused hwspinlock isn't available. Users of this
API will usually want to communicate the lock's id to the remote core
before it can be used to achieve synchronization.

Should be called from a process context (might sleep).

::

  struct hwspinlock *hwspin_lock_request_specific(unsigned int id);

Assign a specific hwspinlock id and return its address, or NULL
if that hwspinlock is already in use. Usually board code will
be calling this function in order to reserve specific hwspinlock
ids for predefined purposes.

Should be called from a process context (might sleep).

::

  int of_hwspin_lock_get_id(struct device_node *np, int index);

Retrieve the global lock id for an OF phandle-based specific lock.
This function provides a means for DT users of a hwspinlock module
to get the global lock id of a specific hwspinlock, so that it can
be requested using the normal hwspin_lock_request_specific() API.

The function returns a lock id number on success, -EPROBE_DEFER if
the hwspinlock device is not yet registered with the core, or other
error values.

Should be called from a process context (might sleep).

::

  int hwspin_lock_free(struct hwspinlock *hwlock);

Free a previously-assigned hwspinlock; returns 0 on success, or an
appropriate error code on failure (e.g. -EINVAL if the hwspinlock
is already free).

Should be called from a process context (might sleep).

::

  int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled so
the caller must not sleep, and is advised to release the hwspinlock as
soon as possible, in order to minimize remote cores polling on the
hardware interconnect.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_lock_timeout_irq(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption and the local
interrupts are disabled, so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_lock_timeout_irqsave(struct hwspinlock *hwlock, unsigned int to,
                                  unsigned long *flags);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled,
local interrupts are disabled and their previous state is saved at the
given flags placeholder. The caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_trylock(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled so
the caller must not sleep, and is advised to release the hwspinlock as
soon as possible, in order to minimize remote cores polling on the
hardware interconnect.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_irq(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption and the local
interrupts are disabled so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_irqsave(struct hwspinlock *hwlock, unsigned long *flags);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled,
the local interrupts are disabled and their previous state is saved
at the given flags placeholder. The caller must not sleep, and is advised
to release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  void hwspin_unlock(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock. Always succeeds, and can be called
from any context (the function never sleeps).

.. note::

  code should **never** unlock an hwspinlock which is already unlocked
  (there is no protection against this).

::

  void hwspin_unlock_irq(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock and enable local interrupts.
The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).

Upon a successful return from this function, preemption and local
interrupts are enabled. This function will never sleep.

::

  void
  hwspin_unlock_irqrestore(struct hwspinlock *hwlock, unsigned long *flags);

Unlock a previously-locked hwspinlock.
The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).

Upon a successful return from this function, preemption is reenabled,
and the state of the local interrupts is restored to the state saved at
the given flags. This function will never sleep.

::

  int hwspin_lock_get_id(struct hwspinlock *hwlock);

Retrieve id number of a given hwspinlock. This is needed when an
hwspinlock is dynamically assigned: before it can be used to achieve
mutual exclusion with a remote cpu, the id number should be communicated
to the remote task with which we want to synchronize.

Returns the hwspinlock id number, or -EINVAL if hwlock is null.

Typical usage
=============

::

  #include <linux/hwspinlock.h>
  #include <linux/err.h>

  int hwspinlock_example1(void)
  {
	struct hwspinlock *hwlock;
	int ret, id;

	/* dynamically assign a hwspinlock */
	hwlock = hwspin_lock_request();
	if (!hwlock)
		...

	id = hwspin_lock_get_id(hwlock);
	/* probably need to communicate id to a remote processor now */

	/* take the lock, spin for 1 sec if it's already taken */
	ret = hwspin_lock_timeout(hwlock, 1000);
	if (ret)
		...

	/*
	 * we took the lock, do our thing now, but do NOT sleep
	 */

	/* release the lock */
	hwspin_unlock(hwlock);

	/* free the lock */
	ret = hwspin_lock_free(hwlock);
	if (ret)
		...

	return ret;
  }

  int hwspinlock_example2(void)
  {
	struct hwspinlock *hwlock;
	int ret;

	/*
	 * assign a specific hwspinlock id - this should be called early
	 * by board init code.
	 */
	hwlock = hwspin_lock_request_specific(PREDEFINED_LOCK_ID);
	if (!hwlock)
		...

	/* try to take it, but don't spin on it */
	ret = hwspin_trylock(hwlock);
	if (ret) {
		pr_info("lock is already taken\n");
		return -EBUSY;
	}

	/*
	 * we took the lock, do our thing now, but do NOT sleep
	 */

	/* release the lock */
	hwspin_unlock(hwlock);

	/* free the lock */
	ret = hwspin_lock_free(hwlock);
	if (ret)
		...

	return ret;
  }

API for implementors
====================

::

  int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
			   const struct hwspinlock_ops *ops, int base_id,
			   int num_locks);

To be called from the underlying platform-specific implementation, in
order to register a new hwspinlock device (which is usually a bank of
numerous locks). Should be called from a process context (this function
might sleep).

Returns 0 on success, or appropriate error code on failure.

::

  int hwspin_lock_unregister(struct hwspinlock_device *bank);

To be called from the underlying vendor-specific implementation, in order
to unregister an hwspinlock device (which is usually a bank of numerous
locks).

Should be called from a process context (this function might sleep).

Returns 0 on success, or an appropriate error code on failure (e.g.
-EBUSY if a hwspinlock is still in use).

Important structs
=================

struct hwspinlock_device is a device which usually contains a bank
of hardware locks. It is registered by the underlying hwspinlock
implementation using the hwspin_lock_register() API.

::

  /**
   * struct hwspinlock_device - a device which usually spans numerous hwspinlocks
   * @dev: underlying device, will be used to invoke runtime PM api
   * @ops: platform-specific hwspinlock handlers
   * @base_id: id index of the first lock in this device
   * @num_locks: number of locks in this device
   * @lock: dynamically allocated array of 'struct hwspinlock'
   */
  struct hwspinlock_device {
	struct device *dev;
	const struct hwspinlock_ops *ops;
	int base_id;
	int num_locks;
	struct hwspinlock lock[0];
  };

struct hwspinlock_device contains an array of hwspinlock structs, each
of which represents a single hardware lock::

  /**
   * struct hwspinlock - this struct represents a single hwspinlock instance
   * @bank: the hwspinlock_device structure which owns this lock
   * @lock: initialized and used by hwspinlock core
   * @priv: private data, owned by the underlying platform-specific hwspinlock drv
   */
  struct hwspinlock {
	struct hwspinlock_device *bank;
	spinlock_t lock;
	void *priv;
  };

When registering a bank of locks, the hwspinlock driver only needs to
set the priv members of the locks. The rest of the members are set and
initialized by the hwspinlock core itself.

Implementation callbacks
========================

There are three possible callbacks defined in 'struct hwspinlock_ops'::

  struct hwspinlock_ops {
	int (*trylock)(struct hwspinlock *lock);
	void (*unlock)(struct hwspinlock *lock);
	void (*relax)(struct hwspinlock *lock);
  };

The first two callbacks are mandatory:

The ->trylock() callback should make a single attempt to take the lock, and
return 0 on failure and 1 on success. This callback may **not** sleep.

The ->unlock() callback releases the lock. It always succeeds, and it, too,
may **not** sleep.

The ->relax() callback is optional. It is called by hwspinlock core while
spinning on a lock, and can be used by the underlying implementation to force
a delay between two successive invocations of ->trylock(). It may **not** sleep.