From 5a948ccca95bcecf9d1e81db02394134f8a18c38 Mon Sep 17 00:00:00 2001
From: Peter Daifuku
Date: Wed, 30 Sep 2020 11:25:05 -0700
Subject: gpu: nvgpu: limit PD cache to < pgsize for linux

For Linux, limit use of the PD cache to entries smaller than the page
size. This avoids running out of CMA memory when allocating the large,
contiguous slabs that non-iommuable chips would otherwise require.

Also, in nvgpu_pd_cache_do_free(), zero out entries only if an iommu is
in use and PTE entries use the cache, since it is the iommu's prefetch
of invalid PTEs that needs to be avoided.

Bug 3093183
Bug 3100907

Change-Id: I363031db32e11bc705810a7e87fc9e9ac1dc00bd
Signed-off-by: Peter Daifuku
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2422039
Reviewed-by: automaticguardword
Reviewed-by: Alex Waterman
Reviewed-by: Dinesh T
Reviewed-by: Satish Arora
Reviewed-by: mobile promotions
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions
---
 drivers/gpu/nvgpu/common/mm/pd_cache.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/nvgpu/common/mm/pd_cache.c b/drivers/gpu/nvgpu/common/mm/pd_cache.c
index a5b3d134..8f7003e5 100644
--- a/drivers/gpu/nvgpu/common/mm/pd_cache.c
+++ b/drivers/gpu/nvgpu/common/mm/pd_cache.c
@@ -423,12 +423,19 @@ static void nvgpu_pd_cache_do_free(struct gk20a *g,
 	 * this just re-adds it.
 	 *
 	 * Since the memory used for the entries is still mapped, if
-	 * igpu make sure the entries are invalidated so that the hw
-	 * doesn't accidentally try to prefetch non-existent fb memory.
+	 * iommu is being used, make sure PTE entries in particular
+	 * are invalidated so that the hw doesn't accidentally try to
+	 * prefetch non-existent fb memory.
 	 *
-	 * TBD: what about dgpu? (Not supported in Drive 5.0)
+	 * Notes:
+	 * - The check for NVGPU_PD_CACHE_SIZE > PAGE_SIZE effectively
+	 *   determines whether PTE entries use the cache.
+	 * - In the case where PTE entries use the cache, we also
+	 *   end up invalidating the PDE entries, but that's a minor
+	 *   performance hit, as there are typically far fewer of
+	 *   those than there are PTE entries.
 	 */
-	if (pd->mem->cpu_va != NULL) {
+	if (nvgpu_iommuable(g) && (NVGPU_PD_CACHE_SIZE > PAGE_SIZE)) {
 		memset((void *)((u64)pd->mem->cpu_va + pd->mem_offs),
 		       0, pentry->pd_size);
 	}
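
To make the two conditions in this patch concrete, below is a minimal,
self-contained C sketch of the logic it establishes. It is an
illustration, not the nvgpu implementation: struct gk20a is reduced to
a stub, NVGPU_PD_CACHE_SIZE is assumed to be capped at one page (the
Linux setting this patch describes), and pd_alloc_uses_cache() /
pd_free_needs_zeroing() are hypothetical helper names introduced here
only to isolate the two decisions.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Assumption for this sketch: on Linux the PD cache slab is capped at
 * one page, per this patch; on other builds it may be larger. */
#define NVGPU_PD_CACHE_SIZE PAGE_SIZE

/* Stub: the real struct gk20a is the nvgpu per-GPU context. */
struct gk20a {
	bool iommuable;
};

/* Stub for the real nvgpu_iommuable(): does this chip sit behind an
 * iommu? */
static bool nvgpu_iommuable(struct gk20a *g)
{
	return g->iommuable;
}

/* Hypothetical helper: a PD allocation is served from the cache only
 * when it is smaller than the cache slab, so on Linux the cache never
 * needs a large contiguous (CMA) slab for non-iommuable chips. */
static bool pd_alloc_uses_cache(size_t pd_size)
{
	return pd_size < NVGPU_PD_CACHE_SIZE;
}

/* Hypothetical helper: on free, zero the entry only when the iommu
 * could prefetch stale PTEs from the still-mapped cache memory, i.e.
 * an iommu is in use AND the cache is large enough that PTE entries
 * live in it. With NVGPU_PD_CACHE_SIZE == PAGE_SIZE this is always
 * false, so the memset in nvgpu_pd_cache_do_free() is skipped. */
static bool pd_free_needs_zeroing(struct gk20a *g)
{
	return nvgpu_iommuable(g) && (NVGPU_PD_CACHE_SIZE > PAGE_SIZE);
}

int main(void)
{
	struct gk20a g = { .iommuable = true };

	printf("256B PD served from cache: %d\n", pd_alloc_uses_cache(256));
	printf("4KB  PD served from cache: %d\n", pd_alloc_uses_cache(4096));
	printf("zero entry on free:        %d\n", pd_free_needs_zeroing(&g));
	return 0;
}

Note the design choice the sketch highlights: both problems are solved
by the same size cap. Keeping the cache below the page size avoids the
CMA allocation, and as a consequence PTE entries never live in the
cache, so the iommu prefetch hazard that motivated the zeroing cannot
arise on Linux either.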