path: root/drivers/gpu/nvgpu/common/mm/vm.c
* gpu: nvgpu: Replace kref for refcounting in nvgpu (Debarshi Dutta, 2017-08-24)

  - added wrapper struct nvgpu_ref over nvgpu_atomic_t
  - added nvgpu_ref_* APIs to access the above struct

  JIRA NVGPU-140

  Change-Id: Id47f897995dd4721751f7610b6d4d4fbfe4d6b9a
  Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1540899
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
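  A minimal sketch of what such a kref-style wrapper might look like;
  nvgpu_atomic_t is modeled here with C11 atomics for illustration, and
  everything beyond the nvgpu_ref / nvgpu_ref_* names from the message
  is an assumption:

      #include <stdatomic.h>
      #include <stddef.h>

      /* wrapper over the driver's atomic type (modeled as atomic_int) */
      struct nvgpu_ref {
              atomic_int refcount;
      };

      static inline void nvgpu_ref_init(struct nvgpu_ref *ref)
      {
              atomic_init(&ref->refcount, 1);
      }

      static inline void nvgpu_ref_get(struct nvgpu_ref *ref)
      {
              atomic_fetch_add(&ref->refcount, 1);
      }

      /* drop a reference; invoke release() when the count hits zero */
      static inline void nvgpu_ref_put(struct nvgpu_ref *ref,
                                       void (*release)(struct nvgpu_ref *ref))
      {
              if (atomic_fetch_sub(&ref->refcount, 1) == 1 && release != NULL)
                      release(ref);
      }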
* gpu: nvgpu: Implement PD packing (Alex Waterman, 2017-07-06)

  In some cases page directories require less than a full page of memory.
  For example, on Pascal, the final PD level for large pages is only 256
  bytes; thus 16 PDs can fit in a single page. To allocate an entire page
  for each of these 256 B PDs is extremely wasteful.

  This patch aims to alleviate the wasted DMA memory from having small PDs
  in a full page by packing multiple small PDs into a single page.

  The packing is implemented as a slab allocator - each page is a slab and
  from each page multiple PD instances can be allocated.

  Several modifications to the nvgpu_gmmu_pd struct also needed to be made
  to support this. The nvgpu_mem is now a pointer and there's an explicit
  offset into the nvgpu_mem struct so that each nvgpu_gmmu_pd knows what
  portion of the memory it's using. The nvgpu_pde_phys_addr() function and
  the pd_write() functions also require some changes since the PD is no
  longer always situated at the start of the nvgpu_mem.

  Initialization and cleanup of the page tables for each VM was slightly
  modified to work through the new pd_cache implementation.

  Some PDs (i.e. the PDB), despite not being a full page, still require a
  full page for alignment purposes (HW requirements). Thus a direct
  allocation method for PDs is still provided. This is also used when a PD
  that could in principle be cached is greater than a page in size.

  Lastly a new debug flag was added for the pd_cache code.

  JIRA NVGPU-30

  Change-Id: I64c8037fc356783c1ef203cc143c4d71bbd5d77c
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master/r/1506610
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
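  The slab idea in miniature - one page backs several equally sized PD
  instances, tracked with a per-slab bitmap. All names and the exact
  bookkeeping below are illustrative assumptions, not the real pd_cache
  code:

      #include <stdint.h>

      #define PAGE_SIZE 4096u

      /* one slab == one page of DMA memory, carved into PD slots
       * (the 32-bit bitmap supports up to 32 slots per page) */
      struct pd_slab {
              void     *page;       /* base of the backing page        */
              uint32_t  pd_size;    /* bytes per PD instance, e.g. 256 */
              uint32_t  alloc_map;  /* bit i set => slot i in use      */
      };

      /* allocate one PD slot; return its byte offset, or -1 if full */
      static int pd_slab_alloc(struct pd_slab *slab)
      {
              uint32_t slots = PAGE_SIZE / slab->pd_size;

              for (uint32_t i = 0; i < slots; i++) {
                      if (!(slab->alloc_map & (1u << i))) {
                              slab->alloc_map |= 1u << i;
                              return (int)(i * slab->pd_size);
                      }
              }
              return -1; /* caller falls back to a fresh page */
      }

      static void pd_slab_free(struct pd_slab *slab, uint32_t offset)
      {
              slab->alloc_map &= ~(1u << (offset / slab->pd_size));
      }

  Each nvgpu_gmmu_pd would then carry the shared nvgpu_mem pointer plus
  the returned offset, which is why nvgpu_pde_phys_addr() and pd_write()
  had to learn about PDs that don't start at offset zero.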
* gpu: nvgpu: gmmu programming rewrite (Alex Waterman, 2017-07-06)

  Update the high level mapping logic. Instead of iterating over the GPU
  VA iterate over the scatter-gather table chunks. As a result each GMMU
  page table update call is simplified dramatically.

  This also modifies the chip level code to no longer require an SGL as an
  argument. Each call to the chip level code will be guaranteed to be
  contiguous so it only has to worry about making a mapping from virt ->
  phys. This removes the dependency on Linux that the chip code currently
  has.

  With this patch the core GMMU code still uses the Linux SGL but the
  logic is highly transferable to a different, nvgpu specific, scatter
  gather list format in the near future.

  The last major update is to push most of the page table attribute
  arguments into a struct. That struct is passed on through the various
  mapping levels. This makes the function calls simpler and easier to
  follow.

  JIRA NVGPU-30

  Change-Id: Ibb6b11755f99818fe642622ca0bd4cbed054f602
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master/r/1484104
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
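  Conceptually the new flow walks physically contiguous chunks and hands
  each one to the chip level as a plain virt -> phys range, with the
  attributes bundled into one struct. Everything below is a sketch and
  every identifier is an assumption:

      #include <stdbool.h>
      #include <stdint.h>

      /* simplified stand-in for one contiguous scatter-gather chunk */
      struct sg_chunk {
              uint64_t phys;            /* physical base of the chunk */
              uint64_t length;          /* chunk length in bytes      */
              struct sg_chunk *next;
      };

      /* attributes gathered into one struct instead of long arg lists */
      struct gmmu_attrs {
              uint32_t pgsz_idx;
              bool     cacheable;
              bool     read_only;
      };

      /* hypothetical chip-level hook: maps one contiguous range,
       * no SGL knowledge required */
      typedef int (*chip_map_fn)(uint64_t virt, uint64_t phys,
                                 uint64_t length,
                                 const struct gmmu_attrs *attrs);

      static int gmmu_map_sgl(struct sg_chunk *sgl, uint64_t virt,
                              const struct gmmu_attrs *attrs,
                              chip_map_fn chip_map)
      {
              for (; sgl; sgl = sgl->next) {
                      int err = chip_map(virt, sgl->phys, sgl->length, attrs);

                      if (err != 0)
                              return err;
                      virt += sgl->length;  /* advance GPU VA chunk by chunk */
              }
              return 0;
      }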
* gpu: nvgpu: Fixup formatting in VM code (Alex Waterman, 2017-06-13)

  Fixup a minor spacing issue in the VM code.

  Change-Id: Ia4aaa3cdd7cb63958870cebb51821bf640d56123
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1499446
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Separate GMMU mapping impl from mm_gk20a.c (Alex Waterman, 2017-06-06)

  Separate the non-chip specific GMMU mapping implementation code out of
  mm_gk20a.c. This puts all of the chip-agnostic code into
  common/mm/gmmu.c in preparation for rewriting it.

  JIRA NVGPU-12
  JIRA NVGPU-30

  Change-Id: I6f7fdac3422703f5e80bb22ad304dc27bba4814d
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1480228
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove extraneous VM init/deinit APIs (Alex Waterman, 2017-06-06)

  Support only VM pointers and ref-counting for maintaining VMs. This
  dramatically reduces the complexity of the APIs, avoids the API abuse
  that has existed, and ensures that future VM usage is consistent with
  current usage.

  Also remove the combined VM free/instance block deletion. Any place
  where this was done is now replaced with an explicit free of the
  instance block and a nvgpu_vm_put().

  JIRA NVGPU-12
  JIRA NVGPU-30

  Change-Id: Ib73e8d574ecc9abf6dad0b40a2c5795d6396cc8c
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1480227
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
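  A sketch of the resulting teardown pattern; nvgpu_vm_put() comes from
  this series, while the instance-block helper name and the struct
  declarations are assumptions:

      struct gk20a;
      struct vm_gk20a;
      struct nvgpu_mem;

      void free_inst_block(struct gk20a *g, struct nvgpu_mem *inst_block); /* assumed */
      void nvgpu_vm_put(struct vm_gk20a *vm);

      /* the old combined "remove VM + inst block" call becomes
       * two explicit steps */
      static void channel_vm_teardown(struct gk20a *g, struct vm_gk20a *vm,
                                      struct nvgpu_mem *inst_block)
      {
              free_inst_block(g, inst_block);  /* explicit inst block free   */
              nvgpu_vm_put(vm);                /* VM freed on last reference */
      }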
* gpu: nvgpu: Unify vm_init for vGPU and regular GPU (Alex Waterman, 2017-06-06)

  Unify the initialization routines for the vGPU and regular GPU paths.
  This helps avoid any further code divergence. This also assumes that the
  code running on the regular GPU essentially works for the vGPU. The only
  addition is that the regular GPU path calls an API in the vGPU code that
  sends the necessary RM server message.

  JIRA NVGPU-12
  JIRA NVGPU-30

  Change-Id: I37af1993fd8b50f666ae27524d382cce49cf28f7
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1480226
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Fix big_pages logic in nvgpu_vm_init (Alex Waterman, 2017-06-06)

  Fix the way big_pages is handled in nvgpu_vm_init(). Prior to this patch
  big_pages could wind up being true even if disable_bigpages is set in
  the mm struct. Clearly this is wrong.

  The logic is now simplified a little bit: if the disable_bigpages field
  is set, the resulting VM created by nvgpu_vm_init() has only one user
  area (no need for a second LP user area) and the big_pages field for the
  VM is set to false.

  JIRA NVGPU-12
  JIRA NVGPU-30

  Change-Id: If5dc7fcf3fa4e070f87295406f0afe414269b702
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1493318
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
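  The corrected decision reduces to roughly this; the disable_bigpages
  field is from the commit message, the surrounding fragment is modeled:

      #include <stdbool.h>

      /* modeled fragment of mm_gk20a - only the field of interest */
      struct mm_gk20a {
              bool disable_bigpages;
      };

      /* big pages are honored only when not globally disabled */
      static bool vm_use_big_pages(const struct mm_gk20a *mm,
                                   bool want_big_pages)
      {
              return want_big_pages && !mm->disable_bigpages;
      }

  When this evaluates false, nvgpu_vm_init() sets up a single user area
  and leaves the VM's big_pages field false.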
* gpu: nvgpu: Move unify_address_spaces to flags (Alex Waterman, 2017-05-30)

  Use the enabled flags API to handle the unify_address_spaces flag.

  JIRA NVGPU-84
  JIRA NVGPU-12
  JIRA NVGPU-30

  Change-Id: Id1b59aed4b349d6067615991597d534936cc5ce9
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1488307
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
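  After the move, checks of the old bool become flag queries, along these
  lines (the flag constant's spelling and value are assumptions; the
  nvgpu_is_enabled() query is the enabled-flags API named above):

      #include <stdbool.h>

      struct gk20a;
      bool nvgpu_is_enabled(struct gk20a *g, int flag);

      #define NVGPU_MM_UNIFY_ADDRESS_SPACES 1  /* assumed constant */

      static bool unified_va(struct gk20a *g)
      {
              /* formerly an ad-hoc bool on the mm struct */
              return nvgpu_is_enabled(g, NVGPU_MM_UNIFY_ADDRESS_SPACES);
      }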
* gpu: nvgpu: Add and use VM init/deinit APIs (Alex Waterman, 2017-05-26)

  Remove the VM init/de-init from the HAL and instead use a single set of
  routines that init/de-init VMs. This prevents code divergence between
  vGPUs and regular GPUs.

  This patch also clears up the naming of the routines a little bit. Since
  some VMs are used in place and others are dynamically allocated the APIs
  for freeing them were confusing. Also some free calls also clean up an
  instance block (this is API abuse - but this is how it currently
  exists).

  The new API looks like this:

    void __nvgpu_vm_remove(struct vm_gk20a *vm);
    void nvgpu_vm_remove(struct vm_gk20a *vm);
    void nvgpu_vm_remove_inst(struct vm_gk20a *vm,
                              struct nvgpu_mem *inst_block);
    void nvgpu_vm_remove_vgpu(struct vm_gk20a *vm);

    int nvgpu_init_vm(struct mm_gk20a *mm,
                      struct vm_gk20a *vm,
                      u32 big_page_size,
                      u64 low_hole,
                      u64 kernel_reserved,
                      u64 aperture_size,
                      bool big_pages,
                      bool userspace_managed,
                      char *name);

    void nvgpu_deinit_vm(struct vm_gk20a *vm);

  JIRA NVGPU-12
  JIRA NVGPU-30

  Change-Id: Ia4016384c54746bfbcaa4bdd0d29d03d5d7f7f1b
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1477747
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
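  For example, an init/deinit pairing for an in-place VM would look
  roughly like this fragment, using the prototypes listed above (all
  sizes and the name string are placeholder values):

      struct vm_gk20a vm;  /* in place, as some users allocate them */
      int err;

      err = nvgpu_init_vm(mm, &vm,
                          64 * 1024,     /* big_page_size     */
                          1ULL << 20,    /* low_hole          */
                          1ULL << 32,    /* kernel_reserved   */
                          1ULL << 37,    /* aperture_size     */
                          true,          /* big_pages         */
                          false,         /* userspace_managed */
                          "example");
      if (err == 0) {
              /* ... use the VM ... */
              nvgpu_deinit_vm(&vm);
      }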
* gpu: nvgpu: Refactor VM init/cleanup (Alex Waterman, 2017-05-26)

  Refactor the API for initializing and cleaning up VMs. This also
  involved moving a bunch of GMMU code out into the gmmu code since part
  of initializing a VM involves initializing the page tables for the VM.

  JIRA NVGPU-12
  JIRA NVGPU-30

  Change-Id: I4710f08c26a6e39806f0762a35f6db5c94b64c50
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1477746
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Refactor gk20a_vm_alloc_va() (Alex Waterman, 2017-05-24)

  This function is an internal function to the VM manager that allocates
  virtual memory space in the GVA allocator. It is unfortunately used in
  the vGPU code, though. In any event, this patch cleans up and moves the
  implementation of this function into the VM common code.

  JIRA NVGPU-12
  JIRA NVGPU-30

  Change-Id: I24a3d29b5fcb12615df27d2ac82891d1bacfe541
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1477745
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
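  In essence the function carves GPU VA out of the VM's allocator for the
  requested page size - a sketch, where the allocator field, the
  nvgpu_alloc() helper, and the u64/u32 kernel-style types are
  assumptions:

      /* returns the base GPU VA of the new range, or 0 on failure */
      u64 gk20a_vm_alloc_va(struct vm_gk20a *vm, u64 size, u32 pgsz_idx)
      {
              struct nvgpu_allocator *vma = vm->vma[pgsz_idx];

              return nvgpu_alloc(vma, size);
      }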
* gpu: nvgpu: Split vm_area management into vm code (Alex Waterman, 2017-05-19)

  The vm_reserve_va_node struct is essentially a special VM area that can
  be used for sparse mappings and fixed mappings. The name of this struct
  is somewhat confusing (as node is typically used for list items). Though
  this struct is a part of a list it doesn't really make sense to call
  this a list item since it's much more. Based on that the struct has been
  renamed to nvgpu_vm_area to capture the actual use of the struct more
  accurately.

  This also moves all of the management code of vm areas to a new file
  devoted solely to vm_area management.

  Also add a brief overview of the VM architecture. This should help other
  people follow along the hierarchy of ownership and lifetimes in the
  rather complex MM code.

  JIRA NVGPU-12
  JIRA NVGPU-30

  Change-Id: If85e1cf868031d0dc265e7bed50b58a2aed2602e
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1477744
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
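  A rough picture of what the renamed struct captures; all fields are
  assumptions based on the description above:

      /* a reserved stretch of GPU VA into which sparse or fixed
       * mappings can later be placed */
      struct nvgpu_vm_area {
              u64  addr;       /* base GPU VA of the reservation */
              u64  size;       /* length of the reservation      */
              u32  pgsz_idx;   /* page size used in this area    */
              bool sparse;     /* sparse mapping semantics?      */

              /* it lives on a list, but being a list item is the
               * least interesting thing about it */
              struct nvgpu_list_node vm_area_list;
      };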
* gpu: nvgpu: Split VM implementation out (Alex Waterman, 2017-05-19)

  This patch begins splitting out the VM implementation from mm_gk20a.c
  and moves it to common/linux/vm.c and common/mm/vm.c. This split is
  necessary because the VM code has two portions: first, an interface for
  the OS specific code to use (i.e. userspace mappings), and second, a set
  of APIs for the driver to use (init, cleanup, etc) which are not OS
  specific.

  This is only the beginning of the split - there's still a lot of things
  that need to be carefully moved around.

  JIRA NVGPU-12
  JIRA NVGPU-30

  Change-Id: I3b57cba245d7daf9e4326a143b9c6217e0f28c96
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1477743
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Split VM interface out (Alex Waterman, 2017-05-19)

  This patch begins the major rework of the GPU's virtual memory manager
  (VMM). The VMM is the piece of code that handles the userspace interface
  to buffers and their mappings into the GMMU. The core data structure is
  the VM - for now still known as 'struct vm_gk20a'. Each one of these
  structs represents one address space to which channels or TSGs may bind
  themselves.

  The VMM splits the interface up into two broad categories. First there's
  the common, OS independent interfaces; and second there's the OS
  specific interfaces.

  OS independent
  --------------
  This is the code that manages the lifetime of VMs, the buffers inside
  VMs (search, batch mapping), creation, destruction, etc.

  OS Specific
  -----------
  This handles mapping of buffers as they are represented by the OS
  (dma_bufs on Linux, for example).

  This patch is by no means complete. There's still Linux specific
  functions scattered in ostensibly OS independent code. This is the first
  step. A patch that rewrites everything in one go would simply be too big
  to effectively review. Instead the goal of this change is to simply
  separate out the basic OS specific and OS agnostic interfaces into their
  own header files. The next series of patches will start to pull the
  relevant implementations into OS specific C files and common C files.

  JIRA NVGPU-12
  JIRA NVGPU-30

  Change-Id: I242c7206047b6c769296226d855b7e44d5c4bfa8
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1464939
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
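  The split can be pictured as two sets of prototypes - common vs. Linux.
  The sketch below is illustrative only: the dma_buf example comes from
  the message, while the header paths and every prototype are
  assumptions:

      /* OS independent (e.g. include/nvgpu/vm.h): VM lifetime,
       * buffer search, batch mapping */
      struct vm_gk20a;
      void nvgpu_vm_get(struct vm_gk20a *vm);
      void nvgpu_vm_put(struct vm_gk20a *vm);

      /* OS specific (e.g. common/linux/vm.c and its header):
       * buffers as the OS represents them */
      struct dma_buf;
      int nvgpu_vm_map_linux(struct vm_gk20a *vm,
                             struct dma_buf *dmabuf,
                             u64 map_offset, u32 flags,
                             u64 *gpu_va);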