-rw-r--r--  Documentation/core-api/xarray.rst  |  52
-rw-r--r--  fs/dax.c                           |  60
-rw-r--r--  fs/nilfs2/btnode.c                 |   4
-rw-r--r--  include/linux/xarray.h             | 267
-rw-r--r--  lib/test_xarray.c                  |  50
-rw-r--r--  lib/xarray.c                       | 139
6 files changed, 387 insertions, 185 deletions
diff --git a/Documentation/core-api/xarray.rst b/Documentation/core-api/xarray.rst
index a4e705108f42..dbe96cb5558e 100644
--- a/Documentation/core-api/xarray.rst
+++ b/Documentation/core-api/xarray.rst
@@ -74,7 +74,8 @@ using :c:func:`xa_load`.  xa_store will overwrite any entry with the
 new entry and return the previous entry stored at that index.  You can
 use :c:func:`xa_erase` instead of calling :c:func:`xa_store` with a
 ``NULL`` entry.  There is no difference between an entry that has never
-been stored to and one that has most recently had ``NULL`` stored to it.
+been stored to, one that has been erased and one that has most recently
+had ``NULL`` stored to it.
 
 You can conditionally replace an entry at an index by using
 :c:func:`xa_cmpxchg`.  Like :c:func:`cmpxchg`, it will only succeed if
@@ -105,23 +106,44 @@ may result in the entry being marked at some, but not all of the other
 indices.  Storing into one index may result in the entry retrieved by
 some, but not all of the other indices changing.
 
+Sometimes you need to ensure that a subsequent call to :c:func:`xa_store`
+will not need to allocate memory.  The :c:func:`xa_reserve` function
+will store a reserved entry at the indicated index.  Users of the normal
+API will see this entry as containing ``NULL``.  If you do not need to
+use the reserved entry, you can call :c:func:`xa_release` to remove the
+unused entry.  If another user has stored to the entry in the meantime,
+:c:func:`xa_release` will do nothing; if instead you want the entry to
+become ``NULL``, you should use :c:func:`xa_erase`.
+
+If all entries in the array are ``NULL``, the :c:func:`xa_empty` function
+will return ``true``.
+
 Finally, you can remove all entries from an XArray by calling
 :c:func:`xa_destroy`.  If the XArray entries are pointers, you may wish
 to free the entries first.  You can do this by iterating over all present
 entries in the XArray using the :c:func:`xa_for_each` iterator.
 
-ID assignment
--------------
+Allocating XArrays
+------------------
+
+If you use :c:func:`DEFINE_XARRAY_ALLOC` to define the XArray, or
+initialise it by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
+the XArray changes to track whether entries are in use or not.
 
 You can call :c:func:`xa_alloc` to store the entry at any unused index
 in the XArray.  If you need to modify the array from interrupt context,
 you can use :c:func:`xa_alloc_bh` or :c:func:`xa_alloc_irq` to disable
-interrupts while allocating the ID.  Unlike :c:func:`xa_store`, allocating
-a ``NULL`` pointer does not delete an entry.  Instead it reserves an
-entry like :c:func:`xa_reserve` and you can release it using either
-:c:func:`xa_erase` or :c:func:`xa_release`.  To use ID assignment, the
-XArray must be defined with :c:func:`DEFINE_XARRAY_ALLOC`, or initialised
-by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
+interrupts while allocating the ID.
+
+Using :c:func:`xa_store`, :c:func:`xa_cmpxchg` or :c:func:`xa_insert`
+will mark the entry as being allocated.  Unlike a normal XArray, storing
+``NULL`` will mark the entry as being in use, like :c:func:`xa_reserve`.
+To free an entry, use :c:func:`xa_erase` (or :c:func:`xa_release` if
+you only want to free the entry if it's ``NULL``).
+
+You cannot use ``XA_MARK_0`` with an allocating XArray as this mark
+is used to track whether an entry is free or not.  The other marks are
+available for your use.
 
 Memory allocation
 -----------------
@@ -158,6 +180,8 @@ Takes RCU read lock:
 
 Takes xa_lock internally:
  * :c:func:`xa_store`
+ * :c:func:`xa_store_bh`
+ * :c:func:`xa_store_irq`
 * :c:func:`xa_insert`
 * :c:func:`xa_erase`
 * :c:func:`xa_erase_bh`
@@ -167,6 +191,9 @@ Takes xa_lock internally:
 * :c:func:`xa_alloc`
 * :c:func:`xa_alloc_bh`
 * :c:func:`xa_alloc_irq`
+ * :c:func:`xa_reserve`
+ * :c:func:`xa_reserve_bh`
+ * :c:func:`xa_reserve_irq`
 * :c:func:`xa_destroy`
 * :c:func:`xa_set_mark`
 * :c:func:`xa_clear_mark`
@@ -177,6 +204,7 @@ Assumes xa_lock held on entry:
 * :c:func:`__xa_erase`
 * :c:func:`__xa_cmpxchg`
 * :c:func:`__xa_alloc`
+ * :c:func:`__xa_reserve`
 * :c:func:`__xa_set_mark`
 * :c:func:`__xa_clear_mark`
 
@@ -234,7 +262,8 @@ Sharing the XArray with interrupt context is also possible, either
 using :c:func:`xa_lock_irqsave` in both the interrupt handler and process
 context, or :c:func:`xa_lock_irq` in process context and :c:func:`xa_lock`
 in the interrupt handler.  Some of the more common patterns have helper
-functions such as :c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.
+functions such as :c:func:`xa_store_bh`, :c:func:`xa_store_irq`,
+:c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.
 
 Sometimes you need to protect access to the XArray with a mutex because
 that lock sits above another mutex in the locking hierarchy.  That does
@@ -322,7 +351,8 @@ to :c:func:`xas_retry`, and retry the operation if it returns ``true``.
 - :c:func:`xa_is_zero`
 - Zero entries appear as ``NULL`` through the Normal API, but occupy
   an entry in the XArray which can be used to reserve the index for
-   future use.
+   future use.  This is used by allocating XArrays for allocated entries
+   which are ``NULL``.
 
 Other internal entries may be added in the future.  As far as possible, they
 will be handled by :c:func:`xas_retry`.
diff --git a/fs/dax.c b/fs/dax.c
index 616e36ea6aaa..9bcce89ea18e 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -98,12 +98,6 @@ static void *dax_make_entry(pfn_t pfn, unsigned long flags)
 	return xa_mk_value(flags | (pfn_t_to_pfn(pfn) << DAX_SHIFT));
 }
 
-static void *dax_make_page_entry(struct page *page)
-{
-	pfn_t pfn = page_to_pfn_t(page);
-	return dax_make_entry(pfn, PageHead(page) ? DAX_PMD : 0);
-}
-
 static bool dax_is_locked(void *entry)
 {
 	return xa_to_value(entry) & DAX_LOCKED;
@@ -116,12 +110,12 @@ static unsigned int dax_entry_order(void *entry)
 	return 0;
 }
 
-static int dax_is_pmd_entry(void *entry)
+static unsigned long dax_is_pmd_entry(void *entry)
 {
 	return xa_to_value(entry) & DAX_PMD;
 }
 
-static int dax_is_pte_entry(void *entry)
+static bool dax_is_pte_entry(void *entry)
 {
 	return !(xa_to_value(entry) & DAX_PMD);
 }
@@ -222,9 +216,8 @@ static void *get_unlocked_entry(struct xa_state *xas)
 	ewait.wait.func = wake_exceptional_entry_func;
 
 	for (;;) {
-		entry = xas_load(xas);
-		if (!entry || xa_is_internal(entry) ||
-				WARN_ON_ONCE(!xa_is_value(entry)) ||
+		entry = xas_find_conflict(xas);
+		if (!entry || WARN_ON_ONCE(!xa_is_value(entry)) ||
 				!dax_is_locked(entry))
 			return entry;
 
@@ -255,6 +248,7 @@ static void dax_unlock_entry(struct xa_state *xas, void *entry)
 {
 	void *old;
 
+	BUG_ON(dax_is_locked(entry));
 	xas_reset(xas);
 	xas_lock_irq(xas);
 	old = xas_store(xas, entry);
@@ -352,16 +346,27 @@ static struct page *dax_busy_page(void *entry)
 	return NULL;
 }
 
+/*
+ * dax_lock_mapping_entry - Lock the DAX entry corresponding to a page
+ * @page: The page whose entry we want to lock
+ *
+ * Context: Process context.
+ * Return: %true if the entry was locked or does not need to be locked.
+ */
 bool dax_lock_mapping_entry(struct page *page)
 {
 	XA_STATE(xas, NULL, 0);
 	void *entry;
+	bool locked;
 
+	/* Ensure page->mapping isn't freed while we look at it */
+	rcu_read_lock();
 	for (;;) {
 		struct address_space *mapping = READ_ONCE(page->mapping);
 
+		locked = false;
 		if (!dax_mapping(mapping))
-			return false;
+			break;
 
 		/*
 		 * In the device-dax case there's no need to lock, a
@@ -370,8 +375,9 @@ bool dax_lock_mapping_entry(struct page *page)
 		 * otherwise we would not have a valid pfn_to_page()
 		 * translation.
 		 */
+		locked = true;
 		if (S_ISCHR(mapping->host->i_mode))
-			return true;
+			break;
 
 		xas.xa = &mapping->i_pages;
 		xas_lock_irq(&xas);
@@ -382,28 +388,35 @@ bool dax_lock_mapping_entry(struct page *page)
 		xas_set(&xas, page->index);
 		entry = xas_load(&xas);
 		if (dax_is_locked(entry)) {
+			rcu_read_unlock();
 			entry = get_unlocked_entry(&xas);
-			/* Did the page move while we slept? */
-			if (dax_to_pfn(entry) != page_to_pfn(page)) {
-				xas_unlock_irq(&xas);
-				continue;
-			}
+			xas_unlock_irq(&xas);
+			put_unlocked_entry(&xas, entry);
+			rcu_read_lock();
+			continue;
 		}
 		dax_lock_entry(&xas, entry);
 		xas_unlock_irq(&xas);
-		return true;
+		break;
 	}
+	rcu_read_unlock();
+	return locked;
 }
 
 void dax_unlock_mapping_entry(struct page *page)
 {
 	struct address_space *mapping = page->mapping;
 	XA_STATE(xas, &mapping->i_pages, page->index);
+	void *entry;
 
 	if (S_ISCHR(mapping->host->i_mode))
 		return;
 
-	dax_unlock_entry(&xas, dax_make_page_entry(page));
+	rcu_read_lock();
+	entry = xas_load(&xas);
+	rcu_read_unlock();
+	entry = dax_make_entry(page_to_pfn_t(page), dax_is_pmd_entry(entry));
+	dax_unlock_entry(&xas, entry);
 }
 
 /*
@@ -445,11 +458,9 @@ static void *grab_mapping_entry(struct xa_state *xas,
 retry:
 	xas_lock_irq(xas);
 	entry = get_unlocked_entry(xas);
-	if (xa_is_internal(entry))
-		goto fallback;
 
 	if (entry) {
-		if (WARN_ON_ONCE(!xa_is_value(entry))) {
+		if (!xa_is_value(entry)) {
 			xas_set_err(xas, EIO);
 			goto out_unlock;
 		}
@@ -1628,8 +1639,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
 	/* Did we race with someone splitting entry or so? */
 	if (!entry ||
 	    (order == 0 && !dax_is_pte_entry(entry)) ||
-	    (order == PMD_ORDER && (xa_is_internal(entry) ||
-				    !dax_is_pmd_entry(entry)))) {
+	    (order == PMD_ORDER && !dax_is_pmd_entry(entry))) {
 		put_unlocked_entry(&xas, entry);
 		xas_unlock_irq(&xas);
 		trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
index de99db518571..f2129a5d9f23 100644
--- a/fs/nilfs2/btnode.c
+++ b/fs/nilfs2/btnode.c
@@ -266,9 +266,7 @@ void nilfs_btnode_abort_change_key(struct address_space *btnc,
 		return;
 
 	if (nbh == NULL) {	/* blocksize == pagesize */
-		xa_lock_irq(&btnc->i_pages);
-		__xa_erase(&btnc->i_pages, newkey);
-		xa_unlock_irq(&btnc->i_pages);
+		xa_erase_irq(&btnc->i_pages, newkey);
 		unlock_page(ctxt->bh->b_page);
 	} else
 		brelse(nbh);
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index d9514928ddac..564892e19f8c 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -289,9 +289,7 @@ struct xarray {
 void xa_init_flags(struct xarray *, gfp_t flags);
 void *xa_load(struct xarray *, unsigned long index);
 void *xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
-void *xa_cmpxchg(struct xarray *, unsigned long index,
-			void *old, void *entry, gfp_t);
-int xa_reserve(struct xarray *, unsigned long index, gfp_t);
+void *xa_erase(struct xarray *, unsigned long index);
 void *xa_store_range(struct xarray *, unsigned long first, unsigned long last,
 			void *entry, gfp_t);
 bool xa_get_mark(struct xarray *, unsigned long index, xa_mark_t);
@@ -344,65 +342,6 @@ static inline bool xa_marked(const struct xarray *xa, xa_mark_t mark)
 }
 
 /**
- * xa_erase() - Erase this entry from the XArray.
- * @xa: XArray.
- * @index: Index of entry.
- *
- * This function is the equivalent of calling xa_store() with %NULL as
- * the third argument.  The XArray does not need to allocate memory, so
- * the user does not need to provide GFP flags.
- *
- * Context: Process context.  Takes and releases the xa_lock.
- * Return: The entry which used to be at this index.
- */
-static inline void *xa_erase(struct xarray *xa, unsigned long index)
-{
-	return xa_store(xa, index, NULL, 0);
-}
-
-/**
- * xa_insert() - Store this entry in the XArray unless another entry is
- * already present.
- * @xa: XArray.
- * @index: Index into array.
- * @entry: New entry.
- * @gfp: Memory allocation flags.
- *
- * If you would rather see the existing entry in the array, use xa_cmpxchg().
- * This function is for users who don't care what the entry is, only that
- * one is present.
- *
- * Context: Process context.  Takes and releases the xa_lock.
- * May sleep if the @gfp flags permit.
- * Return: 0 if the store succeeded.  -EEXIST if another entry was present.
- * -ENOMEM if memory could not be allocated.
- */
-static inline int xa_insert(struct xarray *xa, unsigned long index,
-		void *entry, gfp_t gfp)
-{
-	void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp);
-	if (!curr)
-		return 0;
-	if (xa_is_err(curr))
-		return xa_err(curr);
-	return -EEXIST;
-}
-
-/**
- * xa_release() - Release a reserved entry.
- * @xa: XArray.
- * @index: Index of entry.
- *
- * After calling xa_reserve(), you can call this function to release the
- * reservation.  If the entry at @index has been stored to, this function
- * will do nothing.
- */
-static inline void xa_release(struct xarray *xa, unsigned long index)
-{
-	xa_cmpxchg(xa, index, NULL, NULL, 0);
-}
-
-/**
  * xa_for_each() - Iterate over a portion of an XArray.
  * @xa: XArray.
  * @entry: Entry retrieved from array.
@@ -455,6 +394,7 @@ void *__xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
 void *__xa_cmpxchg(struct xarray *, unsigned long index, void *old,
 		void *entry, gfp_t);
 int __xa_alloc(struct xarray *, u32 *id, u32 max, void *entry, gfp_t);
+int __xa_reserve(struct xarray *, unsigned long index, gfp_t);
 void __xa_set_mark(struct xarray *, unsigned long index, xa_mark_t);
 void __xa_clear_mark(struct xarray *, unsigned long index, xa_mark_t);
 
@@ -487,6 +427,58 @@ static inline int __xa_insert(struct xarray *xa, unsigned long index,
 }
 
 /**
+ * xa_store_bh() - Store this entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * This function is like calling xa_store() except it disables softirqs
+ * while holding the array lock.
+ *
+ * Context: Any context.  Takes and releases the xa_lock while
+ * disabling softirqs.
+ * Return: The entry which used to be at this index.
+ */
+static inline void *xa_store_bh(struct xarray *xa, unsigned long index,
+		void *entry, gfp_t gfp)
+{
+	void *curr;
+
+	xa_lock_bh(xa);
+	curr = __xa_store(xa, index, entry, gfp);
+	xa_unlock_bh(xa);
+
+	return curr;
+}
+
+/**
+ * xa_store_irq() - Store this entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * This function is like calling xa_store() except it disables interrupts
+ * while holding the array lock.
+ *
+ * Context: Process context.  Takes and releases the xa_lock while
+ * disabling interrupts.
+ * Return: The entry which used to be at this index.
+ */
+static inline void *xa_store_irq(struct xarray *xa, unsigned long index,
+		void *entry, gfp_t gfp)
+{
+	void *curr;
+
+	xa_lock_irq(xa);
+	curr = __xa_store(xa, index, entry, gfp);
+	xa_unlock_irq(xa);
+
+	return curr;
+}
+
+/**
 * xa_erase_bh() - Erase this entry from the XArray.
 * @xa: XArray.
 * @index: Index of entry.
@@ -495,7 +487,7 @@ static inline int __xa_insert(struct xarray *xa, unsigned long index,
 * the third argument.  The XArray does not need to allocate memory, so
 * the user does not need to provide GFP flags.
 *
- * Context: Process context.  Takes and releases the xa_lock while
+ * Context: Any context.  Takes and releases the xa_lock while
 * disabling softirqs.
 * Return: The entry which used to be at this index.
 */
@@ -535,6 +527,61 @@ static inline void *xa_erase_irq(struct xarray *xa, unsigned long index)
 }
 
 /**
+ * xa_cmpxchg() - Conditionally replace an entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @old: Old value to test against.
+ * @entry: New value to place in array.
+ * @gfp: Memory allocation flags.
+ *
+ * If the entry at @index is the same as @old, replace it with @entry.
+ * If the return value is equal to @old, then the exchange was successful.
+ *
+ * Context: Any context.  Takes and releases the xa_lock.  May sleep
+ * if the @gfp flags permit.
+ * Return: The old value at this index or xa_err() if an error happened.
+ */
+static inline void *xa_cmpxchg(struct xarray *xa, unsigned long index,
+			void *old, void *entry, gfp_t gfp)
+{
+	void *curr;
+
+	xa_lock(xa);
+	curr = __xa_cmpxchg(xa, index, old, entry, gfp);
+	xa_unlock(xa);
+
+	return curr;
+}
+
+/**
+ * xa_insert() - Store this entry in the XArray unless another entry is
+ * already present.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * If you would rather see the existing entry in the array, use xa_cmpxchg().
+ * This function is for users who don't care what the entry is, only that
+ * one is present.
+ *
+ * Context: Process context.  Takes and releases the xa_lock.
+ * May sleep if the @gfp flags permit.
+ * Return: 0 if the store succeeded.  -EEXIST if another entry was present.
+ * -ENOMEM if memory could not be allocated.
+ */
+static inline int xa_insert(struct xarray *xa, unsigned long index,
+		void *entry, gfp_t gfp)
+{
+	void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp);
+	if (!curr)
+		return 0;
+	if (xa_is_err(curr))
+		return xa_err(curr);
+	return -EEXIST;
+}
+
+/**
 * xa_alloc() - Find somewhere to store this entry in the XArray.
 * @xa: XArray.
 * @id: Pointer to ID.
@@ -575,7 +622,7 @@ static inline int xa_alloc(struct xarray *xa, u32 *id, u32 max, void *entry,
 * Updates the @id pointer with the index, then stores the entry at that
 * index.  A concurrent lookup will not see an uninitialised @id.
 *
- * Context: Process context.  Takes and releases the xa_lock while
+ * Context: Any context.  Takes and releases the xa_lock while
 * disabling softirqs.  May sleep if the @gfp flags permit.
 * Return: 0 on success, -ENOMEM if memory allocation fails or -ENOSPC if
 * there is no more space in the XArray.
@@ -621,6 +668,98 @@ static inline int xa_alloc_irq(struct xarray *xa, u32 *id, u32 max, void *entry,
 	return err;
 }
 
+/**
+ * xa_reserve() - Reserve this index in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @gfp: Memory allocation flags.
+ *
+ * Ensures there is somewhere to store an entry at @index in the array.
+ * If there is already something stored at @index, this function does
+ * nothing.  If there was nothing there, the entry is marked as reserved.
+ * Loading from a reserved entry returns a %NULL pointer.
+ *
+ * If you do not use the entry that you have reserved, call xa_release()
+ * or xa_erase() to free any unnecessary memory.
+ *
+ * Context: Any context.  Takes and releases the xa_lock.
+ * May sleep if the @gfp flags permit.
+ * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
+ */
+static inline
+int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
+{
+	int ret;
+
+	xa_lock(xa);
+	ret = __xa_reserve(xa, index, gfp);
+	xa_unlock(xa);
+
+	return ret;
+}
+
+/**
+ * xa_reserve_bh() - Reserve this index in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @gfp: Memory allocation flags.
+ *
+ * A softirq-disabling version of xa_reserve().
+ *
+ * Context: Any context.  Takes and releases the xa_lock while
+ * disabling softirqs.
+ * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
+ */
+static inline
+int xa_reserve_bh(struct xarray *xa, unsigned long index, gfp_t gfp)
+{
+	int ret;
+
+	xa_lock_bh(xa);
+	ret = __xa_reserve(xa, index, gfp);
+	xa_unlock_bh(xa);
+
+	return ret;
+}
+
+/**
+ * xa_reserve_irq() - Reserve this index in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @gfp: Memory allocation flags.
+ *
+ * An interrupt-disabling version of xa_reserve().
+ *
+ * Context: Process context.  Takes and releases the xa_lock while
+ * disabling interrupts.
+ * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
+ */
+static inline
+int xa_reserve_irq(struct xarray *xa, unsigned long index, gfp_t gfp)
+{
+	int ret;
+
+	xa_lock_irq(xa);
+	ret = __xa_reserve(xa, index, gfp);
+	xa_unlock_irq(xa);
+
+	return ret;
+}
+
+/**
+ * xa_release() - Release a reserved entry.
+ * @xa: XArray.
+ * @index: Index of entry.
+ *
+ * After calling xa_reserve(), you can call this function to release the
+ * reservation.  If the entry at @index has been stored to, this function
+ * will do nothing.
+ */
+static inline void xa_release(struct xarray *xa, unsigned long index)
+{
+	xa_cmpxchg(xa, index, NULL, NULL, 0);
+}
+
 /* Everything below here is the Advanced API.  Proceed with caution. */
 
 /*
diff --git a/lib/test_xarray.c b/lib/test_xarray.c
index aa47754150ce..0598e86af8fc 100644
--- a/lib/test_xarray.c
+++ b/lib/test_xarray.c
@@ -208,15 +208,19 @@ static noinline void check_xa_mark_1(struct xarray *xa, unsigned long index)
 			XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_2));
 
 		/* We should see two elements in the array */
+		rcu_read_lock();
 		xas_for_each(&xas, entry, ULONG_MAX)
 			seen++;
+		rcu_read_unlock();
 		XA_BUG_ON(xa, seen != 2);
 
 		/* One of which is marked */
 		xas_set(&xas, 0);
 		seen = 0;
+		rcu_read_lock();
 		xas_for_each_marked(&xas, entry, ULONG_MAX, XA_MARK_0)
 			seen++;
+		rcu_read_unlock();
 		XA_BUG_ON(xa, seen != 1);
 	}
 	XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_0));
@@ -373,6 +377,12 @@ static noinline void check_reserve(struct xarray *xa)
 	xa_erase_index(xa, 12345678);
 	XA_BUG_ON(xa, !xa_empty(xa));
 
+	/* And so does xa_insert */
+	xa_reserve(xa, 12345678, GFP_KERNEL);
+	XA_BUG_ON(xa, xa_insert(xa, 12345678, xa_mk_value(12345678), 0) != 0);
+	xa_erase_index(xa, 12345678);
+	XA_BUG_ON(xa, !xa_empty(xa));
+
 	/* Can iterate through a reserved entry */
 	xa_store_index(xa, 5, GFP_KERNEL);
 	xa_reserve(xa, 6, GFP_KERNEL);
@@ -436,7 +446,9 @@ static noinline void check_multi_store_1(struct xarray *xa, unsigned long index,
 	XA_BUG_ON(xa, xa_load(xa, max) != NULL);
 	XA_BUG_ON(xa, xa_load(xa, min - 1) != NULL);
 
+	xas_lock(&xas);
 	XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(min)) != xa_mk_value(index));
+	xas_unlock(&xas);
 	XA_BUG_ON(xa, xa_load(xa, min) != xa_mk_value(min));
 	XA_BUG_ON(xa, xa_load(xa, max - 1) != xa_mk_value(min));
 	XA_BUG_ON(xa, xa_load(xa, max) != NULL);
@@ -452,9 +464,11 @@ static noinline void check_multi_store_2(struct xarray *xa, unsigned long index,
 	XA_STATE(xas, xa, index);
 	xa_store_order(xa, index, order, xa_mk_value(0), GFP_KERNEL);
 
+	xas_lock(&xas);
 	XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(1)) != xa_mk_value(0));
 	XA_BUG_ON(xa, xas.xa_index != index);
 	XA_BUG_ON(xa, xas_store(&xas, NULL) != xa_mk_value(1));
+	xas_unlock(&xas);
 	XA_BUG_ON(xa, !xa_empty(xa));
 }
 #endif
@@ -498,7 +512,7 @@ static noinline void check_multi_store(struct xarray *xa)
 	rcu_read_unlock();
 
 	/* We can erase multiple values with a single store */
-	xa_store_order(xa, 0, 63, NULL, GFP_KERNEL);
+	xa_store_order(xa, 0, BITS_PER_LONG - 1, NULL, GFP_KERNEL);
 	XA_BUG_ON(xa, !xa_empty(xa));
 
 	/* Even when the first slot is empty but the others aren't */
@@ -702,7 +716,7 @@ static noinline void check_multi_find_2(struct xarray *xa)
 	}
 }
 
-static noinline void check_find(struct xarray *xa)
+static noinline void check_find_1(struct xarray *xa)
 {
 	unsigned long i, j, k;
 
@@ -748,6 +762,34 @@ static noinline void check_find(struct xarray *xa)
 		XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_0));
 	}
 	XA_BUG_ON(xa, !xa_empty(xa));
+}
+
+static noinline void check_find_2(struct xarray *xa)
+{
+	void *entry;
+	unsigned long i, j, index = 0;
+
+	xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) {
+		XA_BUG_ON(xa, true);
+	}
+
+	for (i = 0; i < 1024; i++) {
+		xa_store_index(xa, index, GFP_KERNEL);
+		j = 0;
+		index = 0;
+		xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) {
+			XA_BUG_ON(xa, xa_mk_value(index) != entry);
+			XA_BUG_ON(xa, index != j++);
+		}
+	}
+
+	xa_destroy(xa);
+}
+
+static noinline void check_find(struct xarray *xa)
+{
+	check_find_1(xa);
+	check_find_2(xa);
 	check_multi_find(xa);
 	check_multi_find_2(xa);
 }
@@ -1067,7 +1109,7 @@ static noinline void check_store_range(struct xarray *xa)
 			__check_store_range(xa, 4095 + i, 4095 + j);
 			__check_store_range(xa, 4096 + i, 4096 + j);
 			__check_store_range(xa, 123456 + i, 123456 + j);
-			__check_store_range(xa, UINT_MAX + i, UINT_MAX + j);
+			__check_store_range(xa, (1 << 24) + i, (1 << 24) + j);
 		}
 	}
 }
@@ -1146,10 +1188,12 @@ static noinline void check_account(struct xarray *xa)
 		XA_STATE(xas, xa, 1 << order);
 
 		xa_store_order(xa, 0, order, xa, GFP_KERNEL);
+		rcu_read_lock();
 		xas_load(&xas);
 		XA_BUG_ON(xa, xas.xa_node->count == 0);
 		XA_BUG_ON(xa, xas.xa_node->count > (1 << order));
 		XA_BUG_ON(xa, xas.xa_node->nr_values != 0);
+		rcu_read_unlock();
 
 		xa_store_order(xa, 1 << order, order, xa_mk_value(1 << order),
 				GFP_KERNEL);
diff --git a/lib/xarray.c b/lib/xarray.c
index 8b176f009c08..bbacca576593 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -610,8 +610,8 @@ static int xas_expand(struct xa_state *xas, void *head)
  * (see the xa_cmpxchg() implementation for an example).
  *
  * Return: If the slot already existed, returns the contents of this slot.
- * If the slot was newly created, returns NULL. If it failed to create the
- * slot, returns NULL and indicates the error in @xas.
+ * If the slot was newly created, returns %NULL. If it failed to create the
+ * slot, returns %NULL and indicates the error in @xas.
  */
 static void *xas_create(struct xa_state *xas)
 {
@@ -1334,44 +1334,31 @@ void *__xa_erase(struct xarray *xa, unsigned long index)
 	XA_STATE(xas, xa, index);
 	return xas_result(&xas, xas_store(&xas, NULL));
 }
-EXPORT_SYMBOL_GPL(__xa_erase);
+EXPORT_SYMBOL(__xa_erase);
 
 /**
- * xa_store() - Store this entry in the XArray.
+ * xa_erase() - Erase this entry from the XArray.
  * @xa: XArray.
- * @index: Index into array.
- * @entry: New entry.
- * @gfp: Memory allocation flags.
+ * @index: Index of entry.
  *
- * After this function returns, loads from this index will return @entry.
- * Storing into an existing multislot entry updates the entry of every index.
- * The marks associated with @index are unaffected unless @entry is %NULL.
+ * This function is the equivalent of calling xa_store() with %NULL as
+ * the third argument. The XArray does not need to allocate memory, so
+ * the user does not need to provide GFP flags.
 *
- * Context: Process context. Takes and releases the xa_lock. May sleep
- * if the @gfp flags permit.
- * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry
- * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation
- * failed.
+ * Context: Any context. Takes and releases the xa_lock.
+ * Return: The entry which used to be at this index.
 */
-void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
+void *xa_erase(struct xarray *xa, unsigned long index)
 {
-	XA_STATE(xas, xa, index);
-	void *curr;
-
-	if (WARN_ON_ONCE(xa_is_internal(entry)))
-		return XA_ERROR(-EINVAL);
+	void *entry;
 
-	do {
-		xas_lock(&xas);
-		curr = xas_store(&xas, entry);
-		if (xa_track_free(xa) && entry)
-			xas_clear_mark(&xas, XA_FREE_MARK);
-		xas_unlock(&xas);
-	} while (xas_nomem(&xas, gfp));
+	xa_lock(xa);
+	entry = __xa_erase(xa, index);
+	xa_unlock(xa);
 
-	return xas_result(&xas, curr);
+	return entry;
 }
-EXPORT_SYMBOL(xa_store);
+EXPORT_SYMBOL(xa_erase);
 
 /**
  * __xa_store() - Store this entry in the XArray.
@@ -1395,10 +1382,12 @@ void *__xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
 
 	if (WARN_ON_ONCE(xa_is_internal(entry)))
 		return XA_ERROR(-EINVAL);
+	if (xa_track_free(xa) && !entry)
+		entry = XA_ZERO_ENTRY;
 
 	do {
 		curr = xas_store(&xas, entry);
-		if (xa_track_free(xa) && entry)
+		if (xa_track_free(xa))
 			xas_clear_mark(&xas, XA_FREE_MARK);
 	} while (__xas_nomem(&xas, gfp));
 
@@ -1407,45 +1396,33 @@ void *__xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
 EXPORT_SYMBOL(__xa_store);
 
 /**
- * xa_cmpxchg() - Conditionally replace an entry in the XArray.
+ * xa_store() - Store this entry in the XArray.
  * @xa: XArray.
  * @index: Index into array.
- * @old: Old value to test against.
- * @entry: New value to place in array.
+ * @entry: New entry.
  * @gfp: Memory allocation flags.
 *
- * If the entry at @index is the same as @old, replace it with @entry.
- * If the return value is equal to @old, then the exchange was successful.
+ * After this function returns, loads from this index will return @entry.
+ * Storing into an existing multislot entry updates the entry of every index.
+ * The marks associated with @index are unaffected unless @entry is %NULL.
 *
- * Context: Process context. Takes and releases the xa_lock. May sleep
- * if the @gfp flags permit.
- * Return: The old value at this index or xa_err() if an error happened.
+ * Context: Any context. Takes and releases the xa_lock.
+ * May sleep if the @gfp flags permit.
+ * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry
+ * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation
+ * failed.
 */
-void *xa_cmpxchg(struct xarray *xa, unsigned long index,
-			void *old, void *entry, gfp_t gfp)
+void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
 {
-	XA_STATE(xas, xa, index);
 	void *curr;
 
-	if (WARN_ON_ONCE(xa_is_internal(entry)))
-		return XA_ERROR(-EINVAL);
-
-	do {
-		xas_lock(&xas);
-		curr = xas_load(&xas);
-		if (curr == XA_ZERO_ENTRY)
-			curr = NULL;
-		if (curr == old) {
-			xas_store(&xas, entry);
-			if (xa_track_free(xa) && entry)
-				xas_clear_mark(&xas, XA_FREE_MARK);
-		}
-		xas_unlock(&xas);
-	} while (xas_nomem(&xas, gfp));
+	xa_lock(xa);
+	curr = __xa_store(xa, index, entry, gfp);
+	xa_unlock(xa);
 
-	return xas_result(&xas, curr);
+	return curr;
 }
-EXPORT_SYMBOL(xa_cmpxchg);
+EXPORT_SYMBOL(xa_store);
 
 /**
  * __xa_cmpxchg() - Store this entry in the XArray.
@@ -1471,6 +1448,8 @@ void *__xa_cmpxchg(struct xarray *xa, unsigned long index,
 
 	if (WARN_ON_ONCE(xa_is_internal(entry)))
 		return XA_ERROR(-EINVAL);
+	if (xa_track_free(xa) && !entry)
+		entry = XA_ZERO_ENTRY;
 
 	do {
 		curr = xas_load(&xas);
@@ -1478,7 +1457,7 @@ void *__xa_cmpxchg(struct xarray *xa, unsigned long index,
 			curr = NULL;
 		if (curr == old) {
 			xas_store(&xas, entry);
-			if (xa_track_free(xa) && entry)
+			if (xa_track_free(xa))
 				xas_clear_mark(&xas, XA_FREE_MARK);
 		}
 	} while (__xas_nomem(&xas, gfp));
@@ -1488,7 +1467,7 @@ void *__xa_cmpxchg(struct xarray *xa, unsigned long index,
 EXPORT_SYMBOL(__xa_cmpxchg);
 
 /**
- * xa_reserve() - Reserve this index in the XArray.
+ * __xa_reserve() - Reserve this index in the XArray.
  * @xa: XArray.
  * @index: Index into array.
  * @gfp: Memory allocation flags.
@@ -1496,33 +1475,32 @@ EXPORT_SYMBOL(__xa_cmpxchg);
  * Ensures there is somewhere to store an entry at @index in the array.
  * If there is already something stored at @index, this function does
  * nothing. If there was nothing there, the entry is marked as reserved.
- * Loads from @index will continue to see a %NULL pointer until a
- * subsequent store to @index.
+ * Loading from a reserved entry returns a %NULL pointer.
 *
 * If you do not use the entry that you have reserved, call xa_release()
 * or xa_erase() to free any unnecessary memory.
 *
- * Context: Process context. Takes and releases the xa_lock, IRQ or BH safe
- * if specified in XArray flags. May sleep if the @gfp flags permit.
+ * Context: Any context. Expects the xa_lock to be held on entry. May
+ * release the lock, sleep and reacquire the lock if the @gfp flags permit.
 * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
 */
-int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
+int __xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
 {
 	XA_STATE(xas, xa, index);
-	unsigned int lock_type = xa_lock_type(xa);
 	void *curr;
 
 	do {
-		xas_lock_type(&xas, lock_type);
 		curr = xas_load(&xas);
-		if (!curr)
+		if (!curr) {
 			xas_store(&xas, XA_ZERO_ENTRY);
-		xas_unlock_type(&xas, lock_type);
-	} while (xas_nomem(&xas, gfp));
+			if (xa_track_free(xa))
+				xas_clear_mark(&xas, XA_FREE_MARK);
+		}
+	} while (__xas_nomem(&xas, gfp));
 
 	return xas_error(&xas);
 }
-EXPORT_SYMBOL(xa_reserve);
+EXPORT_SYMBOL(__xa_reserve);
 
 #ifdef CONFIG_XARRAY_MULTI
 static void xas_set_range(struct xa_state *xas, unsigned long first,
@@ -1587,8 +1565,9 @@ void *xa_store_range(struct xarray *xa, unsigned long first,
 	do {
 		xas_lock(&xas);
 		if (entry) {
-			unsigned int order = (last == ~0UL) ? 64 :
-					ilog2(last + 1);
+			unsigned int order = BITS_PER_LONG;
+			if (last + 1)
+				order = __ffs(last + 1);
 			xas_set_order(&xas, last, order);
 			xas_create(&xas);
 			if (xas_error(&xas))
@@ -1662,7 +1641,7 @@ EXPORT_SYMBOL(__xa_alloc);
 * @index: Index of entry.
 * @mark: Mark number.
 *
- * Attempting to set a mark on a NULL entry does not succeed.
+ * Attempting to set a mark on a %NULL entry does not succeed.
 *
 * Context: Any context. Expects xa_lock to be held on entry.
 */
@@ -1674,7 +1653,7 @@ void __xa_set_mark(struct xarray *xa, unsigned long index, xa_mark_t mark)
 	if (entry)
 		xas_set_mark(&xas, mark);
 }
-EXPORT_SYMBOL_GPL(__xa_set_mark);
+EXPORT_SYMBOL(__xa_set_mark);
 
 /**
 * __xa_clear_mark() - Clear this mark on this entry while locked.
@@ -1692,7 +1671,7 @@ void __xa_clear_mark(struct xarray *xa, unsigned long index, xa_mark_t mark)
 	if (entry)
 		xas_clear_mark(&xas, mark);
 }
-EXPORT_SYMBOL_GPL(__xa_clear_mark);
+EXPORT_SYMBOL(__xa_clear_mark);
 
 /**
 * xa_get_mark() - Inquire whether this mark is set on this entry.
@@ -1732,7 +1711,7 @@ EXPORT_SYMBOL(xa_get_mark);
 * @index: Index of entry.
 * @mark: Mark number.
 *
- * Attempting to set a mark on a NULL entry does not succeed.
+ * Attempting to set a mark on a %NULL entry does not succeed.
 *
 * Context: Process context. Takes and releases the xa_lock.
 */
@@ -1829,6 +1808,8 @@ void *xa_find_after(struct xarray *xa, unsigned long *indexp,
 			entry = xas_find_marked(&xas, max, filter);
 		else
 			entry = xas_find(&xas, max);
+		if (xas.xa_node == XAS_BOUNDS)
+			break;
 		if (xas.xa_shift) {
 			if (xas.xa_index & ((1UL << xas.xa_shift) - 1))
 				continue;
@@ -1899,7 +1880,7 @@ static unsigned int xas_extract_marked(struct xa_state *xas, void **dst,
 *
 * The @filter may be an XArray mark value, in which case entries which are
 * marked with that mark will be copied. It may also be %XA_PRESENT, in
- * which case all entries which are not NULL will be copied.
+ * which case all entries which are not %NULL will be copied.
 *
 * The entries returned may not represent a snapshot of the XArray at a
 * moment in time. For example, if another thread stores to index 5, then