author		Jason Gunthorpe <jgg@mellanox.com>	2019-08-21 13:12:29 -0400
committer	Jason Gunthorpe <jgg@mellanox.com>	2019-08-21 19:58:18 -0400
commit		daa138a58c802e7b4c2fb73f9b85bb082616ef43 (patch)
tree		be913e8e3745bb367d2ba371598f447649102cfc /mm/hugetlb.c
parent		6869b7b206595ae0e326f59719090351eb8f4f5d (diff)
parent		fba0e448a2c5b297a4ddc1ec4e48f4aa6600a1c9 (diff)
Merge branch 'odp_fixes' into hmm.git
From rdma.git
Jason Gunthorpe says:
====================
This is a collection of general cleanups for ODP to clarify some of the
flows around umem creation and use of the interval tree.
====================
The branch is based on v5.3-rc5 due to dependencies, and is being taken
into hmm.git because the next patches depend on it.
* odp_fixes:
  RDMA/mlx5: Use odp instead of mr->umem in pagefault_mr
  RDMA/mlx5: Use ib_umem_start instead of umem.address
  RDMA/core: Make invalidate_range a device operation
  RDMA/odp: Use kvcalloc for the dma_list and page_list
  RDMA/odp: Check for overflow when computing the umem_odp end
  RDMA/odp: Provide ib_umem_odp_release() to undo the allocs
  RDMA/odp: Split creating a umem_odp from ib_umem_get
  RDMA/odp: Make the three ways to create a umem_odp clear
  RDMA/odp: Consolidate umem_odp initialization
  RDMA/odp: Make it clearer when a umem is an implicit ODP umem
  RDMA/odp: Iterate over the whole rbtree directly
  RDMA/odp: Use the common interval tree library instead of generic
  RDMA/mlx5: Fix MR npages calculation for IB_ACCESS_HUGETLB
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
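
As context for "RDMA/odp: Check for overflow when computing the umem_odp
end" in the list above: the hazard is a userspace-supplied address/length
pair whose sum wraps past ULONG_MAX, leaving the computed end address below
the start. The kernel uses check_add_overflow() from <linux/overflow.h> for
this; the minimal userspace sketch below demonstrates the same pattern via
the underlying __builtin_add_overflow() compiler primitive. The names
umem_end_addr, addr and length are illustrative only, not the driver's
actual fields.

#include <stdio.h>

/* Compute *end = addr + length, failing if the addition wraps. */
static int umem_end_addr(unsigned long addr, unsigned long length,
			 unsigned long *end)
{
	if (__builtin_add_overflow(addr, length, end))
		return -1;	/* kernel code would return -EOVERFLOW */
	return 0;
}

int main(void)
{
	unsigned long end;

	/* A mapping that wraps around the top of the address space. */
	if (umem_end_addr(0xfffffffffffff000UL, 0x2000UL, &end))
		printf("overflow detected, mapping rejected\n");
	else
		printf("end = %#lx\n", end);
	return 0;
}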
Diffstat (limited to 'mm/hugetlb.c')
 mm/hugetlb.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ede7e7f5d1ab..6d7296dd11b8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3856,6 +3856,25 @@ retry:
 
 	page = alloc_huge_page(vma, haddr, 0);
 	if (IS_ERR(page)) {
+		/*
+		 * Returning error will result in faulting task being
+		 * sent SIGBUS. The hugetlb fault mutex prevents two
+		 * tasks from racing to fault in the same page which
+		 * could result in false unable to allocate errors.
+		 * Page migration does not take the fault mutex, but
+		 * does a clear then write of pte's under page table
+		 * lock. Page fault code could race with migration,
+		 * notice the clear pte and try to allocate a page
+		 * here. Before returning error, get ptl and make
+		 * sure there really is no pte entry.
+		 */
+		ptl = huge_pte_lock(h, mm, ptep);
+		if (!huge_pte_none(huge_ptep_get(ptep))) {
+			ret = 0;
+			spin_unlock(ptl);
+			goto out;
+		}
+		spin_unlock(ptl);
 		ret = vmf_error(PTR_ERR(page));
 		goto out;
 	}
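
The comment block added by this hunk describes a lock-and-recheck pattern:
an apparent allocation failure may really be a race with page migration,
which clears and then rewrites PTEs under the page table lock, so before
delivering SIGBUS the fault path takes the PTL and re-reads the PTE. Below
is a userspace analogy of that pattern, with a pthread mutex standing in
for huge_pte_lock() and a plain pointer standing in for the PTE; all names
here are illustrative, not kernel API.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER; /* the "ptl" */
static void *entry;	/* the "pte": NULL means not present */

/*
 * Called when a fault handler failed to allocate. Before treating this
 * as fatal, take the lock that writers hold and re-read the entry: a
 * concurrent "migration" thread may have cleared it only transiently
 * and already installed a new value, in which case there is nothing
 * left to do.
 */
static int fault_failure_path(void)
{
	int ret = -1;	/* assume genuine failure (the "SIGBUS" case) */

	pthread_mutex_lock(&table_lock);
	if (entry != NULL)
		ret = 0;	/* raced with a writer: not an error */
	pthread_mutex_unlock(&table_lock);
	return ret;
}

int main(void)
{
	entry = &entry;	/* simulate a writer having (re)installed the entry */
	printf("fault failure path returned %d\n", fault_failure_path());
	return 0;
}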