path: root/drivers/gpu/nvgpu/gk20a/mm_gk20a.c
* gpu: nvgpu: Enable IO coherency on GV100 (Alex Waterman, 2018-03-07)

    This reverts commit 848af2ce6de6140323a6ffe3075bf8021e119434. This is a revert of a revert, etc, etc. It re-enables IO coherence again.

    JIRA EVLR-2333
    Change-Id: Ibf97dce2f892e48a1200a06cd38a1c5d9603be04
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1669722
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* Revert "Revert "Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working"""Timo Alho2018-03-05
| | | | | | | | | | | This reverts commit 89fbf39a05483917c0a9f3453fd94c724bc37375. Bug 2075315 Change-Id: Id34a0376be5160b164931926ec600f77edf69667 Signed-off-by: Timo Alho <talho@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1668487 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
* Revert "Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working""Alex Waterman2018-03-03
| | | | | | | | | | | | | | | This reverts commit 5a35a95654d561fce09a3b9abf6b82bb7a29d74b. JIRA EVLR-2333 Change-Id: I923c32496c343d39d34f6d406c38a9f6ce7dc6e0 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1667167 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working"Alex Waterman2018-02-28
| | | | | | | | | | | | | | Also revert other changes related to IO coherence. This may be the culprit in a recent dev-kernel lockdown. Bug 2070609 Change-Id: Ida178aef161fadbc6db9512521ea51c702c1564b Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1665914 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Srikar Srimath Tirumala <srikars@nvidia.com>
* gpu: nvgpu: Use coherent aperture flag (Alex Waterman, 2018-02-27)

    When using a coherent DMA API we must make sure to program any aperture fields with the coherent aperture setting. To do this the nvgpu_aperture_mask() function was modified to take a third aperture mask argument, a coherent setting, so that code can use this function to generate coherent aperture settings.

    The aperture choice is somewhat tricky: the default version of this function uses the state of the DMA API to determine what aperture to use for SYSMEM, either coherent or non-coherent, internally. Thus a kernel user need only specify the normal nvgpu_mem struct and the correct mask should be chosen. Because many nvgpu_mem structs are not created directly from the DMA API wrapper, it is easier to translate SYSMEM to SYSMEM_COH after creation.

    However, the GMMU mapping code will encounter buffers from userspace with different coherency attributes than the DMA API. Thus __nvgpu_aperture_mask() really respects the aperture setting passed in regardless of the DMA API state. This aperture setting is pulled from NVGPU_VM_MAP_IO_COHERENT since this is either passed in from userspace or set by the kernel when using coherent DMA. The aperture field in attrs is upgraded to coherent if this flag is set.

    This change also adds a coherent sysmem mask everywhere that it can. There are a couple of places that do not have a coherent register field defined yet. These need to eventually be defined and added.

    Lastly the aperture mask code has been moved from the Linux vm.c code to the general vm.c code since this function has no Linux dependencies.

    Note: depends on https://git-master.nvidia.com/r/1664536 for new register fields.

    JIRA EVLR-2333
    Change-Id: I4b347911ecb7c511738563fe6c34d0e6aa380d71
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1655220
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
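A minimal sketch of the three-way aperture selection this entry describes, using simplified, made-up names rather than the driver's actual nvgpu_aperture_mask()/__nvgpu_aperture_mask() signatures:

    /*
     * Simplified sketch, not the actual driver code: pick the HW aperture
     * field value for a mapping based on where the memory lives and
     * whether the mapping must be IO-coherent.
     */
    #include <stdint.h>

    enum aperture_sketch {
        APERTURE_SKETCH_INVALID,
        APERTURE_SKETCH_SYSMEM,
        APERTURE_SKETCH_SYSMEM_COH,
        APERTURE_SKETCH_VIDMEM,
    };

    static uint32_t aperture_mask_sketch(enum aperture_sketch ap,
                                         uint32_t sysmem_mask,
                                         uint32_t sysmem_coh_mask,
                                         uint32_t vidmem_mask)
    {
        switch (ap) {
        case APERTURE_SKETCH_SYSMEM_COH:
            /* Coherent sysmem: chosen when the DMA API (or userspace via
             * the IO-coherent map flag) asked for a coherent mapping. */
            return sysmem_coh_mask;
        case APERTURE_SKETCH_SYSMEM:
            return sysmem_mask;
        case APERTURE_SKETCH_VIDMEM:
            return vidmem_mask;
        default:
            return 0;
        }
    }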
* gpu: nvgpu: Cleanup map attributes debugging (Alex Waterman, 2018-02-22)

    Make the map attributes printed by the map debug code more easily readable and consistent.

    Change-Id: I9737131a2ea44c6a080dff0095929760888b83ae
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1654518
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Update gk20a pde bit coverage function (Alex Waterman, 2018-01-19)

    The mm_gk20a.c function that returns the number of bits that a PDE covers is very useful for determining PDE size for all chips. Copy this into the common VM code since this applies to all chips/platforms.

    Bug 200105199
    Change-Id: I437da4781be2fa7c540abe52b20f4c4321f6c649
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1639730
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: fix indexing in locate pte function (David Nieto, 2017-12-05)

    The current code does not properly calculate the indexes within the PDE to access the proper entry, and it has a bug in the assignment of the big page entries. This change fixes the issue by:

    (1) Passing a pointer to the level structure and dereferencing the index offset to the next level.
    (2) Changing the format of the address.
    (3) Ensuring big pages are only selected if their address is set.

    Bug 200364599
    Change-Id: I46e32560ee341d8cfc08c077282dcb5549d2a140
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1610562
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Deepak Bhosale <dbhosale@nvidia.com>
* gpu: nvgpu: Introduce include/nvgpu/sizes.h (Terje Bergstrom, 2017-12-01)

    We use SZ_* #defines in some parts of nvgpu, but we don't explicitly include a header that defines them. Add include/nvgpu/sizes.h, which on Linux #includes linux/sizes.h.

    Change-Id: I8f506d85c7eaa12e649f5874a87533e2f0fe9438
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1607575
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
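A hedged sketch of what such a wrapper header might look like; the layout and fallback values are assumptions, not the real include/nvgpu/sizes.h:

    /* Assumed layout of the wrapper header; illustrative only. */
    #ifndef NVGPU_SIZES_SKETCH_H
    #define NVGPU_SIZES_SKETCH_H

    #ifdef __KERNEL__
    #include <linux/sizes.h>        /* provides SZ_1K, SZ_4K, SZ_1M, ... */
    #else
    /* Non-Linux builds supply the SZ_* constants themselves. */
    #define SZ_1K   0x00000400
    #define SZ_4K   0x00001000
    #define SZ_64K  0x00010000
    #define SZ_1M   0x00100000
    #endif

    #endif /* NVGPU_SIZES_SKETCH_H */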
* gpu: nvgpu: Remove PTE kind logic (Sami Kiminki, 2017-11-10)

    Since NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL was made mandatory, kernel does not need to know the details about the PTE kinds anymore. Thus, we can remove the kind_gk20a.h header and the code related to kind table setup, as well as simplify buffer mapping code a bit.

    Bug 1902982
    Change-Id: Iaf798023c219a64fb0a84da09431c5ce4bc046eb
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1560933
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: move platform_gk20a.h to linux (Deepak Nibade, 2017-11-02)

    Move gk20a/platform_gk20a.h to the Linux-specific directory as common/linux/platform_gk20a.h, since this file includes Linux-specific content only.

    Fix #includes in all the files to include this file with the correct path, and remove the #include of this file where it is no longer needed.

    Fix gk20a_init_sim_support() to receive struct gk20a as a parameter instead of receiving the Linux-specific struct platform_device.

    NVGPU-316
    Change-Id: I5ec08e776b753af4d39d11c11f6f068be2ac236f
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1589938
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: fix pte location functions (David Nieto, 2017-11-01)

    Modify the recursive loop in pte_find to make sure it is targeting the proper pde page size.

    JIRA NVGPUGV100-36
    Change-Id: Ib3673d8d9f1bd3c907d532f9e2562ecdc5dda4af
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1586739
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Delete os_linux.h include in mm_gk20a.c (Alex Waterman, 2017-10-27)

    Delete this Linux include from mm_gk20a.c since it is no longer needed!

    JIRA NVGPU-30
    Change-Id: Idb25fce221dbda0936cad4bae3785f7ecf26a1ed
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1586330
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Cleanup generic MM code in gk20a/mm_gk20a.c (Alex Waterman, 2017-10-24)

    Move much of the remaining generic MM code to a new common location: common/mm/mm.c. Also add a corresponding <nvgpu/mm.h> header. This mostly consists of init and cleanup code to handle the common MM data structures like the VIDMEM code, address spaces for various engines, etc.

    A few more in-depth changes were made as well.

    1. alloc_inst_block() has been added to the MM HAL. This used to be defined directly in the gk20a code but it used a register. As a result, if this register hypothetically changes in the future, it would need to become a HAL anyway. This patch preempts that and for now just defines all HALs to use the gk20a version.

    2. Rename as much as possible: global functions are, for the most part, prepended with nvgpu (there are a few exceptions which I have yet to decide what to do with). Functions that are static are renamed to be as consistent with their functionality as possible since in some cases function effect and function name have diverged.

    JIRA NVGPU-30
    Change-Id: Ic948f1ecc2f7976eba4bb7169a44b7226bb7c0b5
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1574499
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add cache maintenance timeout override (David Nieto, 2017-10-23)

    Add functions to get per-chip cache maintenance timeout overrides.

    JIRA: NVGPUGV100-GV100
    Change-Id: Ie14efc616e7af52ede60031c789bd2ae70857a6e
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1582768
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Convert VIDMEM work_struct to thread (Alex Waterman, 2017-10-20)

    Convert the work_struct used by the vidmem background clearing to a thread to make it more cross-platform. The thread waits on a condition variable to determine when work needs to be done. The signal comes from the DMA API when it enqueues a new nvgpu_mem that needs clearing.

    Add logic for handling suspend: the CE cannot be accessed while the GPU is suspended. As such the background thread must be paused while the GPU is suspended and the CE is not available.

    Several other changes were also made (see the sketch after this entry):

      o Move the code that enqueues a nvgpu_mem from the DMA API code to a function in the VIDMEM code.
      o Move nvgpu_vidmem_get_pending_alloc() to the Linux-specific code as this function is only used there. It's a trivial function that QNX can easily implement as well.
      o Remove the was_empty logic from the enqueue. Now just always signal the condition variable when a new nvgpu_mem comes in.
      o Move CE suspend to after MM suspend.

    JIRA NVGPU-30
    JIRA NVGPU-138
    Change-Id: Ie9286ae5a127c3fced86dfb9794e7d81eab0491c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1574498
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
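A rough sketch of the producer/consumer pattern described above, written with plain Linux primitives (a kthread plus a wait queue) rather than the driver's own thread and condition-variable wrappers; the list handling and the vidmem_clear_one() helper are made up for illustration:

    /*
     * Illustrative only: raw Linux primitives stand in for the nvgpu
     * wrappers, and vidmem_clear_one() is a placeholder for the
     * CE-based clear.
     */
    #include <linux/kthread.h>
    #include <linux/list.h>
    #include <linux/spinlock.h>
    #include <linux/wait.h>

    static LIST_HEAD(clear_list);
    static DEFINE_SPINLOCK(clear_lock);
    static DECLARE_WAIT_QUEUE_HEAD(clear_wq);

    static void vidmem_clear_one(struct list_head *entry)
    {
        /* The real work (clearing the buffer via the CE) would go here. */
        (void)entry;
    }

    static int vidmem_clear_thread(void *arg)
    {
        while (!kthread_should_stop()) {
            /* Sleep until the DMA path enqueues work or we are stopped. */
            wait_event_interruptible(clear_wq,
                                     !list_empty(&clear_list) ||
                                     kthread_should_stop());

            spin_lock(&clear_lock);
            while (!list_empty(&clear_list)) {
                struct list_head *entry = clear_list.next;

                list_del(entry);
                spin_unlock(&clear_lock);
                vidmem_clear_one(entry);
                spin_lock(&clear_lock);
            }
            spin_unlock(&clear_lock);
        }
        return 0;
    }

    /* Producer side: always signal; no was_empty special case. */
    static void vidmem_enqueue_clear(struct list_head *entry)
    {
        spin_lock(&clear_lock);
        list_add_tail(entry, &clear_list);
        spin_unlock(&clear_lock);
        wake_up(&clear_wq);
    }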
* gpu: nvgpu: Refactoring nvgpu_vm functions (Alex Waterman, 2017-10-18)

    Refactor the last nvgpu_vm functions from the mm_gk20a.c code. This removes some usages of dma_buf from the mm_gk20a.c code, too, which helps make mm_gk20a.c less Linux specific.

    Also delete some header files that are no longer necessary in gk20a/mm_gk20a.c which are Linux specific. The mm_gk20a.c code is now quite close to being Linux free.

    JIRA NVGPU-30
    JIRA NVGPU-138
    Change-Id: I72b370bd85a7b029768b0fb4827d6abba42007c3
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1566629
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Move dma_buf usage from mm_gk20a.c (Alex Waterman, 2017-10-18)

    Move most of the dma_buf usage present in the mm_gk20a.c code out to Linux-specific code and some common/mm code. There are two primary groups of code:

    1. dma_buf priv field code (for holding comptag data)
    2. Comptag usage that relies on dma_buf pointers

    For (1) the dma_buf code was simply moved to common/linux/dmabuf.c since most of this code is clearly Linux specific.

    The comptag code was a bit more complicated since there are two parts to it. Firstly there's the code that manages the comptag memory. This is essentially a simple allocator. This was moved to common/mm/comptags.c since it can be shared across all chips. The second set of code is moved to common/linux/comptags.c since it is the interface between dma_bufs and the comptag memory.

    Two other fixes were done as well:

      - Add struct gk20a to the comptag allocator init so that the proper nvgpu_vzalloc() function could be used.
      - Add necessary includes to common/linux/vm_priv.h.

    JIRA NVGPU-30
    JIRA NVGPU-138
    Change-Id: I96c57f2763e5ebe18a2f2ee4b33e0e1a2597848c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1566628
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Rename vidmem APIs (Alex Waterman, 2017-10-13)

    Rename the VIDMEM APIs to be prefixed by nvgpu_ to ensure consistency and that all the non-static vidmem functions are properly namespaced.

    JIRA NVGPU-30
    JIRA NVGPU-138
    Change-Id: I9986ee8f2c8f95a4b7c5e2b9607bc1e77933ccfc
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1540707
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove SGL reference from mm_gk20a.c (Alex Waterman, 2017-10-11)

    Remove an SGL reference from the mm_gk20a.c code. This code is common code and as such all Linuxisms need to be fixed. It just so happens that this particular function is only used by the CDE code which is only present in Linux. So simply move this function over to the CDE code.

    JIRA NVGPU-30
    JIRA NVGPU-225
    JIRA NVGPU-226
    Change-Id: Ifb0cb427c742c6d9cada382ace4a52f52474c379
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1576436
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Remove phys_addr_t from common code (Alex Waterman, 2017-10-11)

    Remove phys_addr_t from common code and replace it with u64. This facilitates QNX compiling the common code since phys_addr_t is a Linux-specific type.

    JIRA NVGPU-30
    JIRA NVGPU-226
    Change-Id: I15fe2078f9cd0b07c7e90ad6e359c493afa56714
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1576432
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Split VIDMEM support from mm_gk20a.c (Alex Waterman, 2017-10-10)

    Split VIDMEM support into its own code files organized as such:

      common/mm/vidmem.c      - Base vidmem support
      common/linux/vidmem.c   - Linux specific user-space interaction
      include/nvgpu/vidmem.h  - Vidmem API definitions

    Also use the config to enable/disable VIDMEM support in the makefile and remove as many CONFIG_GK20A_VIDMEM preprocessor checks as possible from the source code. And lastly update a while-loop that iterated over an SGT to use the new for_each construct for iterating over SGTs.

    Currently this organization is not perfectly adhered to. More patches will fix that.

    JIRA NVGPU-30
    JIRA NVGPU-138
    Change-Id: Ic0f4d2cf38b65849c7dc350a69b175421477069c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1540705
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: rename ops.mm.get_physical_addr_bits (Alex Waterman, 2017-10-04)

    Rename get_physical_addr_bits and related functions to something that more clearly conveys what they are doing. The basic idea of these functions is to translate from a physical GPU address to an IOMMU GPU address. To do that a particular bit (that varies from chip to chip) is added to the physical address.

    JIRA NVGPU-68
    Change-Id: I536cc595c4397aad69a24f740bc74db03f52bc0a
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1542966
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add common vaddr translate function (Alex Waterman, 2017-10-04)

    Add a function to do address translation for IOMMU-capable GPUs. When an iGPU is behind an IOMMU it can pick whether to use that IOMMU for translation by adding a bit to physical addresses. This function takes care of that.

    However, this required an abstracted nvgpu_iommuable() API to check whether a GPU is behind an IOMMU. This patch adds that API for Linux.

    JIRA NVGPU-68
    Change-Id: I489d14475167c019c294407372395df78c8b5feb
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1542965
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
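The idea behind this and the previous entry reduces to a small helper along these lines; the names and the bit position are illustrative, not the real nvgpu API:

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Illustrative helper: if the iGPU sits behind an enabled IOMMU, set
     * the chip-specific "route through the IOMMU" bit in the address
     * handed to the GPU; otherwise pass the physical address through.
     */
    static uint64_t gpu_iommu_translate_sketch(uint64_t phys, bool iommuable,
                                               unsigned int iommu_bit)
    {
        if (!iommuable)
            return phys;

        /* The bit position varies from chip to chip. */
        return phys | (1ULL << iommu_bit);
    }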
* gpu: nvgpu: Change license for common files to MIT (Terje Bergstrom, 2017-09-26)

    Change license of OS independent source code files to MIT.

    JIRA NVGPU-218
    Change-Id: I1474065f4b552112786974a16cdf076c5179540e
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1565880
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: allow bind to be interrupted (David Nieto, 2017-09-22)

    This change solves two problems:

    (*) the possibility of a crash due to interrupting the gpu initialization following a bind
    (*) an IOVA memory leak that could prevent the GPU from binding after about 200 bind/unbind cycles

    A detailed list of fixes:

      - check that the arbiter is initialized before freeing it.
      - do not re-enable interrupts when MSI is enabled on unbind.
      - free the semaphore sea on unbind.
      - ensure we don't double-load the vbios.
      - check the return value of nvgpu_mutex_init for semaphores.
      - add corresponding nvgpu_mutex_destroy calls.

    bug 1816516
    Change-Id: Ia8af73019e0e1183998855d55bb3eea09672a8b7
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Reviewed-on: http://git-master/r/1465302
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
    Reviewed-by: David Jarrett <djarrett@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1563019
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: SGL passthrough implementation (Sunny He, 2017-09-22)

    The basic nvgpu_mem_sgl implementation provides support for OS specific scatter-gather list implementations by simply copying them node by node. This is inefficient, taking extra time and memory.

    This patch implements an nvgpu_mem_sgt struct to act as a header which is inserted at the front of any scatter-gather list implementation. This labels every struct with a set of ops which can be used to interact with the attached scatter-gather list.

    Since nvgpu common code only has to interact with these function pointers, any sgl implementation can be used. Initialization only requires the allocation of a single struct, removing the need to copy or iterate through the sgl being converted.

    Jira NVGPU-186
    Change-Id: I2994f804a4a4cc141b702e987e9081d8560ba2e8
    Signed-off-by: Sunny He <suhe@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1541426
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
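A simplified sketch of the ops-header idea; the struct and function names here are assumptions, and the real nvgpu types differ in detail:

    #include <stdint.h>

    /* Assumed shape of the ops header; illustrative only. */
    struct sgt_ops_sketch {
        void     *(*next)(void *sgl);     /* next node, NULL at the end  */
        uint64_t  (*phys)(void *sgl);     /* physical address of a node  */
        uint64_t  (*dma)(void *sgl);      /* IOMMU/DMA address of a node */
        uint64_t  (*length)(void *sgl);   /* length of a node in bytes   */
    };

    struct sgt_sketch {
        const struct sgt_ops_sketch *ops; /* filled in by the OS layer   */
        void *sgl;                        /* opaque first node           */
    };

    /* Common code walks any OS's list purely through the ops pointers. */
    static uint64_t sgt_total_length_sketch(struct sgt_sketch *sgt)
    {
        uint64_t total = 0;
        void *sgl;

        for (sgl = sgt->sgl; sgl; sgl = sgt->ops->next(sgl))
            total += sgt->ops->length(sgl);

        return total;
    }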
* gpu: nvgpu: nvgpu SGL implementation (Alex Waterman, 2017-09-22)

    The last major item preventing the core MM code in the nvgpu driver from being platform agnostic is the usage of Linux scatter-gather tables and scatter-gather lists. These data structures are used throughout the mapping code to handle discontiguous DMA allocations and also overloaded to represent VIDMEM allocs.

    The notion of a scatter-gather table is crucial to a HW device that can handle discontiguous DMA. The GPU has an MMU which allows the GPU to do page gathering and present a virtually contiguous buffer to the GPU HW. As a result it makes sense for the GPU driver to use some sort of scatter-gather concept to maximize memory usage efficiency.

    To that end this patch keeps the notion of a scatter-gather list but implements it in the nvgpu common code. It is based heavily on the Linux SGL concept. It is a singly linked list of blocks - each representing a chunk of memory. To map or use a DMA allocation SW must iterate over each block in the SGL.

    This patch implements the most basic level of support for this data structure. There are certainly easy optimizations that could be done to speed up the current implementation. However, this patch's goal is to simply divest the core MM code from any last Linux'isms. Speed and efficiency come next.

    Change-Id: Icf44641db22d87fa1d003debbd9f71b605258e42
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1530867
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Direct GMMU PTE kind control (Sami Kiminki, 2017-09-15)

    Allow userspace to control directly the PTE kind for the mappings by supplying NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL for MAP_BUFFER_EX. In particular, in this mode, the userspace will tell the kernel whether the kind is compressible, and if so, what is the incompressible fallback kind. By supplying only the compressible kind, the userspace can require that the map kind will not be demoted to the incompressible fallback kind in case of comptag allocation failure.

    Add also a GPU characteristics flag NVGPU_GPU_FLAGS_SUPPORT_MAP_DIRECT_KIND_CTRL to signal whether direct kind control is supported.

    Fix indentation of nvgpu_as_map_buffer_ex_args header comment.

    Bug 1705731
    Change-Id: I317ab474ae53b78eb8fdd31bd6bca0541fcba9a4
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1543462
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: page align DMA allocs (Alex Waterman, 2017-09-12)

    Explicitly page align DMA memory allocations. Non-page-aligned DMA allocs can lead to nvgpu_mem structs that have a size that's not page aligned. Those allocs, in some cases, can cause vGPU maps to return an error. More generally, DMA allocs in Linux are always page aligned in both size and address, so we might as well page align the alloc size to make code elsewhere in the driver simpler.

    To implement this an aligned_size field has been added to struct nvgpu_mem. This field has the real page-aligned size of the allocation. The original size is still saved in size.

    Change-Id: Ie08cfc4f39d5f97db84a54b8e314ad1fa53b72be
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1547902
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
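A small sketch of the bookkeeping this adds; the aligned_size field name comes from the entry above, everything else is illustrative:

    #include <stdint.h>

    #define SKETCH_PAGE_SIZE 4096ULL

    struct mem_sketch {
        uint64_t size;          /* size the caller asked for            */
        uint64_t aligned_size;  /* size actually handed to the DMA API  */
    };

    static void dma_alloc_record_sizes(struct mem_sketch *mem, uint64_t size)
    {
        mem->size = size;
        /* Round the request up to a whole number of pages. */
        mem->aligned_size = (size + SKETCH_PAGE_SIZE - 1) &
                            ~(SKETCH_PAGE_SIZE - 1);
    }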
* gpu: nvgpu: Remove mapping of comptags to user VA (Terje Bergstrom, 2017-09-07)

    Remove the kernel feature to map compbit backing store areas to GPU VA. The feature is not used, and the relevant user space code is getting removed, too.

    Change-Id: I94f8bb9145da872694fdc5d3eb3c1365b2f47945
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1547898
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: Replace kref for refcounting in nvgpu (Debarshi Dutta, 2017-08-24)

    - added wrapper struct nvgpu_ref over nvgpu_atomic_t
    - added nvgpu_ref_* APIs to access the above struct

    JIRA NVGPU-140
    Change-Id: Id47f897995dd4721751f7610b6d4d4fbfe4d6b9a
    Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1540899
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
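A minimal sketch of a kref-style wrapper over an atomic counter in the spirit of this entry; the names are assumptions (the real nvgpu_ref/nvgpu_atomic_t APIs differ), and C11 atomics stand in for the driver's own atomic wrapper:

    #include <stdatomic.h>

    struct ref_sketch {
        atomic_int refcount;
    };

    static void ref_sketch_init(struct ref_sketch *ref)
    {
        atomic_store(&ref->refcount, 1);
    }

    static void ref_sketch_get(struct ref_sketch *ref)
    {
        atomic_fetch_add(&ref->refcount, 1);
    }

    static void ref_sketch_put(struct ref_sketch *ref,
                               void (*release)(struct ref_sketch *ref))
    {
        /* Call the release hook when the last reference is dropped. */
        if (atomic_fetch_sub(&ref->refcount, 1) == 1 && release)
            release(ref);
    }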
* gpu: nvgpu: Remove support for old kernel version (Terje Bergstrom, 2017-08-22)

    Remove support for pre-4.4 kernels. This allows deleting the checks for kernel version, and usage of linux/version.h.

    Change-Id: I4d6cb30512ea164d27549f4f4d096e5931bb1379
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1543499
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add wrapper over atomic_t and atomic64_t (Debarshi Dutta, 2017-08-17)

    - added wrapper structs nvgpu_atomic_t and nvgpu_atomic64_t over atomic_t and atomic64_t
    - added nvgpu_atomic_* and nvgpu_atomic64_* APIs to access the above wrappers.

    JIRA NVGPU-121
    Change-Id: I61667bb0a84c2fc475365abb79bffb42b8b4786a
    Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1533044
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Remove mm.get_iova_addr (Alex Waterman, 2017-08-04)

    Remove the mm.get_iova_addr() HAL and replace it with a new HAL called mm.gpu_phys_addr(). This new HAL provides the real phys address that should be passed to the GPU from a physical address obtained from a scatter list. It also provides a mechanism by which the HAL code can add extra bits to a GPU physical address based on the attributes passed in. This is necessary during GMMU page table programming.

    Also remove the flags argument from the various address functions. This flag was used for adding an IO coherence bit to the GPU physical address which is not supported.

    JIRA NVGPU-30
    Change-Id: I69af5b1c6bd905c4077c26c098fac101c6b41a33
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1530864
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: avoid possible overflow in dmabuf check (Peter Daifuku, 2017-07-28)

    In gk20a_vm_map_buffer, when checking the dmabuf size, avoid possible overflow of buffer offset + buffer size.

    Bug 1793926
    Change-Id: Iaa85bbd2942546015a233f34388309c6ba01412c
    Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
    Reviewed-on: http://git-master/r/1488051
    (cherry picked from commit 62346ede6c0863d36dc5d91527647130a13eff53)
    Reviewed-on: http://git-master/r/1501696
    (cherry picked from commit 745c273ac80fad14f019b7c59bb797c4e22f4781)
    Reviewed-on: https://git-master.nvidia.com/r/1528182
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: check for buffer overflow when mapping (Peter Daifuku, 2017-07-27)

    In gk20a_vm_map_buffer, return an error if the buffer size is less than offset + mapping size.

    Bug 1793926
    Change-Id: I2209de6a6f2e2b3bd8830659208d6f88bbedc00d
    Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
    Reviewed-on: http://git-master/r/1484442
    (cherry picked from commit 7e6a80cb4684a3e2534bc68cba4c1612a845a8f3)
    Reviewed-on: http://git-master/r/1488138
    (cherry picked from commit 3331f6e47f1d214ad6aaf08ae3e7d241e31d6638)
    Reviewed-on: https://git-master.nvidia.com/r/1501677
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
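The checks described in this entry and the one above boil down to something like the following sketch (illustrative names, not the driver's gk20a_vm_map_buffer() code):

    #include <errno.h>
    #include <stdint.h>

    static int check_map_range_sketch(uint64_t buf_size, uint64_t offset,
                                      uint64_t map_size)
    {
        /* Reject wrap-around of offset + map_size ... */
        if (offset + map_size < offset)
            return -EINVAL;

        /* ... and mappings that run past the end of the buffer. */
        if (offset + map_size > buf_size)
            return -EINVAL;

        return 0;
    }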
* gpu: nvgpu: add ops and var for t19x mmu fault (Seema Khowala, 2017-07-08)

    Add new ops and fields required for t19x mmu fault.

    JIRA GPUT19X-7
    JIRA GPUT19X-12
    Change-Id: I29694c15ff9a4150bb1737adac6b58ccba76bea4
    Signed-off-by: Seema Khowala <seemaj@nvidia.com>
    Reviewed-on: https://git-master/r/1492640
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Move dev field from gk20a to nvgpu_os_linux (Terje Bergstrom, 2017-07-07)

    Move field "struct device *dev" from struct gk20a to struct nvgpu_os_linux. The field is valid only for Linux.

    JIRA NVGPU-38
    Change-Id: I09286aa3a9c5a2406e5a27c1fbf21b2c515b4dd4
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master/r/1514162
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Cleanup GMMU debug printing (Alex Waterman, 2017-07-07)

    Ensure that all debug prints are consistent from chip to chip and function to function. The following maps letters in the debug print to their meaning:

      C    Mapping is cacheable
      v    Mapping is volatile
      S    Mapping is sparse
      P    Mapping is private (VPR/WPR)
      c    Mapping is coherent
      V    Mapping is valid

    JIRA NVGPU-30
    Change-Id: Ia890af88677c3e6d3fdd8c4fe266158c35b8afcd
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master/r/1514903
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Tested-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Implement PD packing (Alex Waterman, 2017-07-06)

    In some cases page directories require less than a full page of memory. For example, on Pascal, the final PD level for large pages is only 256 bytes; thus 16 PDs can fit in a single page. To allocate an entire page for each of these 256 B PDs is extremely wasteful.

    This patch aims to alleviate the wasted DMA memory from having small PDs in a full page by packing multiple small PDs into a single page. The packing is implemented as a slab allocator - each page is a slab and from each page multiple PD instances can be allocated.

    Several modifications to the nvgpu_gmmu_pd struct also needed to be made to support this. The nvgpu_mem is now a pointer and there's an explicit offset into the nvgpu_mem struct so that each nvgpu_gmmu_pd knows what portion of the memory it's using. The nvgpu_pde_phys_addr() function and the pd_write() functions also require some changes since the PD is no longer always situated at the start of the nvgpu_mem.

    Initialization and cleanup of the page tables for each VM was slightly modified to work through the new pd_cache implementation.

    Some PDs (i.e. the PDB), despite not being a full page, still require a full page for alignment purposes (HW requirements). Thus a direct allocation method for PDs is still provided. This is also used when a PD that could in principle be cached is greater than a page in size.

    Lastly a new debug flag was added for the pd_cache code.

    JIRA NVGPU-30
    Change-Id: I64c8037fc356783c1ef203cc143c4d71bbd5d77c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master/r/1506610
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: gmmu programming rewrite (Alex Waterman, 2017-07-06)

    Update the high level mapping logic. Instead of iterating over the GPU VA, iterate over the scatter-gather table chunks. As a result each GMMU page table update call is simplified dramatically.

    This also modifies the chip level code to no longer require an SGL as an argument. Each call to the chip level code is guaranteed to be contiguous, so it only has to worry about making a mapping from virt -> phys.

    This removes the dependency on Linux that the chip code currently has. With this patch the core GMMU code still uses the Linux SGL but the logic is highly transferable to a different, nvgpu specific, scatter gather list format in the near future.

    The last major update is to push most of the page table attribute arguments to a struct. That struct is passed on through the various mapping levels. This makes the function calls simpler and easier to follow.

    JIRA NVGPU-30
    Change-Id: Ibb6b11755f99818fe642622ca0bd4cbed054f602
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master/r/1484104
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
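A high-level sketch of the new flow: walk the scatter-gather list and hand each physically contiguous chunk to the chip level as a simple virt-to-phys mapping. The names here are illustrative; the real code also threads a struct of map attributes through each level:

    #include <stdint.h>

    struct chunk_sketch {
        struct chunk_sketch *next;
        uint64_t phys;
        uint64_t length;
    };

    /* The chip level only ever sees one contiguous virt -> phys range. */
    typedef int (*map_chunk_fn)(uint64_t virt, uint64_t phys,
                                uint64_t length, void *attrs);

    static int map_buffer_sketch(uint64_t virt, struct chunk_sketch *sgl,
                                 map_chunk_fn map_chunk, void *attrs)
    {
        int err;

        for (; sgl; sgl = sgl->next) {
            err = map_chunk(virt, sgl->phys, sgl->length, attrs);
            if (err)
                return err;
            virt += sgl->length;
        }

        return 0;
    }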
* gpu: nvgpu: Use accessor for finding struct device (Terje Bergstrom, 2017-06-30)

    Use dev_from_gk20a() accessor whenever accessing struct device * from struct gk20a.

    JIRA NVGPU-38
    Change-Id: Ide9fca3a56436c8f62e7872580a766c4c1e2353e
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master/r/1507930
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Convert logging from dev_*() to nvgpu_*() (Terje Bergstrom, 2017-06-30)

    Convert a few calls from dev_*() logging to nvgpu_*(). This reduces the dependency on the Linux-specific struct device pointer.

    JIRA NVGPU-38
    Change-Id: Ib51a6b1287db25b7dd4d164aec3ac75fa2801ebf
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master/r/1507929
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Per chip default big page size (Terje Bergstrom, 2017-06-30)

    Make default big page size query a HAL op instead of per-platform constant. This allows querying for default big page size without accessing Linux specific gk20a_platform structure.

    JIRA NVGPU-38
    Change-Id: Ibfbd1319764fdae5fdb06700fb64d23f6f3dd01a
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master/r/1507928
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Remove gk20a support (Terje Bergstrom, 2017-06-30)

    Remove gk20a support. Leave only gk20a code which is reused by other GPUs.

    JIRA NVGPU-38
    Change-Id: I3d5f2bc9f71cd9f161e64436561a5eadd5786a3b
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master/r/1507927
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Pass struct gk20a to busy and resume (Terje Bergstrom, 2017-06-20)

    Pass struct gk20a pointer to gk20a_busy_noresume() and gk20a_idle_nosuspend(). This reduces the number of dependencies on the Linux-specific struct device.

    JIRA NVGPU-38
    Change-Id: I5e05be32e2376bc8be5402bb973c20e28c35a1c3
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1505177
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Separate GMMU mapping impl from mm_gk20a.c (Alex Waterman, 2017-06-06)

    Separate the non-chip specific GMMU mapping implementation code out of mm_gk20a.c. This puts all of the chip-agnostic code into common/mm/gmmu.c in preparation for rewriting it.

    JIRA NVGPU-12
    JIRA NVGPU-30
    Change-Id: I6f7fdac3422703f5e80bb22ad304dc27bba4814d
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1480228
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove extraneous VM init/deinit APIs (Alex Waterman, 2017-06-06)

    Support only VM pointers and ref-counting for maintaining VMs. This dramatically reduces the complexity of the APIs, avoids the API abuse that has existed, and ensures that future VM usage is consistent with current usage.

    Also remove the combined VM free/instance block deletion. Any place where this was done is now replaced with an explicit free of the instance block and a nvgpu_vm_put().

    JIRA NVGPU-12
    JIRA NVGPU-30
    Change-Id: Ib73e8d574ecc9abf6dad0b40a2c5795d6396cc8c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1480227
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Unify vm_init for vGPU and regular GPU (Alex Waterman, 2017-06-06)

    Unify the initialization routines for the vGPU and regular GPU paths. This helps avoid any further code divergence. This also assumes that the code running on the regular GPU essentially works for the vGPU. The only addition is that the regular GPU path calls an API in the vGPU code that sends the necessary RM server message.

    JIRA NVGPU-12
    JIRA NVGPU-30
    Change-Id: I37af1993fd8b50f666ae27524d382cce49cf28f7
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1480226
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>