author     Jason Gunthorpe <jgg@mellanox.com>    2019-08-21 13:12:29 -0400
committer  Jason Gunthorpe <jgg@mellanox.com>    2019-08-21 19:58:18 -0400
commit     daa138a58c802e7b4c2fb73f9b85bb082616ef43
tree       be913e8e3745bb367d2ba371598f447649102cfc /fs/btrfs/locking.c
parent     6869b7b206595ae0e326f59719090351eb8f4f5d
parent     fba0e448a2c5b297a4ddc1ec4e48f4aa6600a1c9
Merge branch 'odp_fixes' into hmm.git
From rdma.git
Jason Gunthorpe says:
====================
This is a collection of general cleanups for ODP to clarify some of the
flows around umem creation and use of the interval tree.
====================
The branch is based on v5.3-rc5 due to dependencies, and is being taken
into hmm.git because the next patches depend on it.
* odp_fixes:
RDMA/mlx5: Use odp instead of mr->umem in pagefault_mr
RDMA/mlx5: Use ib_umem_start instead of umem.address
RDMA/core: Make invalidate_range a device operation
RDMA/odp: Use kvcalloc for the dma_list and page_list
RDMA/odp: Check for overflow when computing the umem_odp end
RDMA/odp: Provide ib_umem_odp_release() to undo the allocs
RDMA/odp: Split creating a umem_odp from ib_umem_get
RDMA/odp: Make the three ways to create a umem_odp clear
RMDA/odp: Consolidate umem_odp initialization
RDMA/odp: Make it clearer when a umem is an implicit ODP umem
RDMA/odp: Iterate over the whole rbtree directly
RDMA/odp: Use the common interval tree library instead of generic
RDMA/mlx5: Fix MR npages calculation for IB_ACCESS_HUGETLB
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Diffstat (limited to 'fs/btrfs/locking.c')
 fs/btrfs/locking.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/fs/btrfs/locking.c b/fs/btrfs/locking.c
index 98fccce4208c..393eceda57c8 100644
--- a/fs/btrfs/locking.c
+++ b/fs/btrfs/locking.c
@@ -346,9 +346,12 @@ void btrfs_tree_unlock(struct extent_buffer *eb)
 	if (blockers) {
 		btrfs_assert_no_spinning_writers(eb);
 		eb->blocking_writers--;
-		/* Use the lighter barrier after atomic */
-		smp_mb__after_atomic();
-		cond_wake_up_nomb(&eb->write_lock_wq);
+		/*
+		 * We need to order modifying blocking_writers above with
+		 * actually waking up the sleepers to ensure they see the
+		 * updated value of blocking_writers
+		 */
+		cond_wake_up(&eb->write_lock_wq);
 	} else {
 		btrfs_assert_spinning_writers_put(eb);
 		write_unlock(&eb->lock);
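The hunk replaces the explicit smp_mb__after_atomic() + cond_wake_up_nomb() pair with
cond_wake_up(), which provides the ordering between the blocking_writers update and the
waiter check itself. For reference, a minimal sketch of the two btrfs wakeup helpers,
assuming their definitions in fs/btrfs/ctree.h around this kernel version (approximate,
not a verbatim copy):

/*
 * Sketch of the btrfs conditional wakeup helpers (approximate, based on
 * fs/btrfs/ctree.h around v5.3). Requires <linux/wait.h>.
 */

static inline void cond_wake_up(wait_queue_head_t *wq)
{
	/*
	 * wq_has_sleeper() implies a full smp_mb() before testing for
	 * waiters, so the store to blocking_writers above is ordered
	 * against the waiter's check without an explicit barrier here.
	 */
	if (wq_has_sleeper(wq))
		wake_up(wq);
}

static inline void cond_wake_up_nomb(wait_queue_head_t *wq)
{
	/*
	 * No-barrier variant: the caller must supply the ordering, e.g.
	 * via a preceding atomic operation plus smp_mb__after_atomic(),
	 * which is what the removed code in btrfs_tree_unlock() relied on.
	 */
	if (waitqueue_active(wq))
		wake_up(wq);
}

Using the self-contained cond_wake_up() keeps the barrier next to the wakeup it protects,
which matches the reasoning spelled out in the new comment.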