path: root/drivers/gpu/nvgpu/common/mm
...
* gpu: nvgpu: Mark nvgpu_pde_phys_addr static (Alex Waterman, 2017-11-16)

  nvgpu_pde_phys_addr() is only used in gmmu.c and as such can be marked static.

  JIRA NVGPU-402

  Change-Id: I7adba6f54ebd4e06d176f23b9a959c04a8770338
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1599040
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* Merge remote-tracking branch 'remotes/origin/dev/linux-nvgpu-t19x' into linux-nvgpu (Deepak Nibade, 2017-11-16)

  Bug 200363166

  Change-Id: Ic662d7b44b673db28dc0aeba338ae67cf2a43d64
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
* gpu: nvgpu: gv11b: Change license for common files to MIT (Terje Bergstrom, 2017-09-25)

  Change license of OS independent source code files to MIT.

  JIRA NVGPU-218

  Change-Id: I93c0504f0544ee8ced4898c386b3f5fbaa6a99a9
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1567804
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: David Martinez Nieto <dmartineznie@nvidia.com>
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: add support for L3 cache allocation of buffers (Deepak Nibade, 2017-07-07)

  Add a gv11b implementation of gpu_phys_addr() that checks the t19x GMMU attributes struct to determine if L3 allocation should be enabled. If L3 alloc is enabled, then a special physical address bit is set.

  Add the flag NVGPU_AS_MAP_BUFFER_FLAGS_L3_ALLOC to struct nvgpu_as_map_buffer_ex_args so that user space can hint that a buffer should be allocated in the L3 cache.

  Jira GPUT19X-10
  Bug 200279508

  Change-Id: I1bb9876a670b252980922aa50e3e69b802be137f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master/r/1512602
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
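  A sketch of how this hint could be passed from user space; the flag, struct, and ioctl names are from this log, while the dmabuf_fd field and the surrounding call are assumptions:

      /* Hypothetical user-space snippet: ask for L3 allocation on map. */
      struct nvgpu_as_map_buffer_ex_args args;

      memset(&args, 0, sizeof(args));
      args.dmabuf_fd = buf_fd;                          /* buffer to map (assumed field name) */
      args.flags = NVGPU_AS_MAP_BUFFER_FLAGS_L3_ALLOC;  /* hint: allocate in L3 */

      if (ioctl(as_fd, NVGPU_AS_IOCTL_MAP_BUFFER_EX, &args) != 0)
              perror("NVGPU_AS_IOCTL_MAP_BUFFER_EX");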
* gpu: nvgpu: Add synchronization to comptag alloc and clearing (Sami Kiminki, 2017-11-15)

  Comptags allocation and clearing were not synchronized for a buffer. Fix this race by serializing the operations with the gk20a_dmabuf_priv lock. While doing that, add an error check in the cbc_ctrl call.

  Bug 1902982

  Change-Id: Icd96f1855eb5e5340651bcc85849b5ccc199b821
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1597904
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Always do full buffer compbits allocs (Sami Kiminki, 2017-11-15)

  Remove the parameter 'lines' from gk20a_alloc_or_get_comptags() and nvgpu_ctag_buffer_info. We're always doing full buffer allocs anyway. This simplifies the code a bit.

  Bug 1902982

  Change-Id: Iacfc9cdba8cb75b31a7d44b175660252e09d605d
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1597131
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Simplify compbits alloc and add needs_clear (Sami Kiminki, 2017-11-15)

  Simplify compbits alloc by making the alloc function re-callable for the buffer, and by making it return the comptags info. This simplifies the calling code: alloc_or_get vs. get + alloc + get again.

  Add tracking of whether the allocated compbits need clearing before they can be used in PTEs. We do this since clearing is part of the gmmu map call on vgpu, which can fail.

  Bug 1902982

  Change-Id: Ic4ab8d326910443b128e82491d302a1f49120f5b
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1597130
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Clean up comptag data structs and alloc (Sami Kiminki, 2017-11-15)

  Clean up the comptag-related data structures and allocation logic. The most important change is that we only ever try comptag allocation once, to prevent incorrect map aliasing. If we were to retry the allocation on further map calls, the following situation would become possible:

  (1) Request a compressible kind mapping for a buffer. Comptag alloc fails and we proceed with the incompressible kind fallback.
  (2) Request another compressible kind mapping for the buffer. The comptag alloc retry succeeds and now we use the compressible kind.
  (3) After writes through the compressible kind mapping, the buffer is no longer legible via the fallback incompressible kind mapping.

  The other changes are about removing the unused comptag-related fields in gk20a_comptags and nvgpu_mapped_buf, and retrieving comptags info only for compressible buffers. We also make nvgpu_ctag_buffer_info and nvgpu_vm_compute_compression private mm/vm.c definitions, since they're not used elsewhere.

  Bug 1902982

  Change-Id: I0c9fe48ccc585a80dd2c05ec606a079c1c1d41f1
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1595153
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
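  A minimal sketch of the allocate-only-once behavior this describes; gk20a_alloc_or_get_comptags() and gk20a_comptags appear in this log, but the bookkeeping fields and allocator helper below are assumptions:

      /* Inside a hypothetical gk20a_alloc_or_get_comptags(): */
      if (!priv->comptags.allocated) {
              priv->comptags.allocated = true;  /* never retried, even on failure */

              if (gk20a_comptaglines_alloc(allocator, &offset, lines) == 0) {
                      priv->comptags.offset = offset;
                      priv->comptags.lines = lines;
                      priv->comptags.needs_clear = true;
              }
              /*
               * On failure the line count stays zero, so every later map of
               * this buffer consistently falls back to an incompressible
               * kind and the aliasing scenario above cannot occur.
               */
      }
      *comptags = priv->comptags;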
* gpu: nvgpu: deprecate TSG/CHANNEL_SET_PRIORITY IOCTLs (Deepak Nibade, 2017-11-15)

  The TSG/CHANNEL_SET_PRIORITY IOCTLs are deprecated, and user space should be using a combination of timeslice and interleave levels to decide priority. Hence remove the IOCTLs and all corresponding APIs.

  Jira NVGPU-393

  Change-Id: I7cf0785689269536eca0c278c774b0e9e74f8c2f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1598581
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Include UAPI explicitly (Terje Bergstrom, 2017-11-13)

  Add explicit #includes for <uapi/linux/nvgpu.h> for source code files that depend on it.

  JIRA NVGPU-259

  Change-Id: I717d5f1493423fd3a7a34b6dd3380d33a9307a09
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1596254
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: VM map path refactoring (Alex Waterman, 2017-11-10)

  Final VM mapping refactoring. Move most of the logic in the VM map path to the common/mm/vm.c code and use the generic APIs previously implemented to deal with comptags and map caching.

  This also updates the mapped_buffer struct to finally be free of the Linux dma_buf and scatter gather table pointers. These are replaced with the nvgpu_os_buffer struct.

  JIRA NVGPU-30
  JIRA NVGPU-71
  JIRA NVGPU-224

  Change-Id: If5b32886221c3e5af2f3d7ddd4fa51dd487bb981
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1583987
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Make buf alignment generic (Alex Waterman, 2017-11-10)

  Drastically simplify the alignment computation for buffers getting mapped, and move it into the SGT code. An SGT is all that is needed for computing the alignment.

  However, this did require that a new SGT op was added:

    nvgpu_sgt_iommuable()

  This function returns true if the passed SGT is IOMMU'able and must be implemented by an SGT implementation that has IOMMU'able buffers. If this function is left as NULL, then it is assumed that the buffer is not IOMMU'able.

  Also clean up the parameter ordering convention among all nvgpu_sgt functions. Previously there was a mishmash of different parameter orderings. This patch now standardizes on the gk20a-first approach seen everywhere else in the driver.

  JIRA NVGPU-30
  JIRA NVGPU-246
  JIRA NVGPU-71

  Change-Id: Ic4ab7b752847cf795c7cfafed5a07818217bba86
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1583985
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
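  A minimal sketch of the NULL-op fallback described here; the wrapper shape follows the nvgpu_sgt ops style, but the exact member name is an assumption:

      static inline bool nvgpu_sgt_iommuable(struct gk20a *g,
                                             struct nvgpu_sgt *sgt)
      {
              /* An SGT implementation that leaves this op NULL is
               * treated as not IOMMU'able. */
              if (sgt->ops->sgt_iommuable != NULL)
                      return sgt->ops->sgt_iommuable(g, sgt);

              return false;
      }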
* gpu: nvgpu: Remove PTE kind logic (Sami Kiminki, 2017-11-10)

  Since NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL was made mandatory, the kernel does not need to know the details about the PTE kinds anymore. Thus, we can remove the kind_gk20a.h header and the code related to kind table setup, as well as simplify the buffer mapping code a bit.

  Bug 1902982

  Change-Id: Iaf798023c219a64fb0a84da09431c5ce4bc046eb
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1560933
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Request CONTIG allocs for large PDs (Alex Waterman, 2017-11-08)

  Request explicitly contiguous DMA memory for large page directory allocations. Large in this case means greater than PAGE_SIZE. This is necessary if the GPU's DMA allocator is set to, by default, allocate discontiguous memory.

  Bug 2015747

  Change-Id: I3afe9c2990522058f6aa45f28030bc82a369ca69
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1593093
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
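  The essence of the change, as a sketch; nvgpu_dma_alloc_flags() and NVGPU_DMA_FORCE_CONTIGUOUS follow nvgpu's DMA API naming, but their use in the PD path here is an assumption:

      /* Page directories larger than one page must be physically
       * contiguous if the default DMA allocator may hand out
       * discontiguous memory. */
      unsigned long flags = 0;

      if (pd_size > PAGE_SIZE)
              flags |= NVGPU_DMA_FORCE_CONTIGUOUS;

      err = nvgpu_dma_alloc_flags(g, flags, pd_size, &pd->mem);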
* gpu: nvgpu: Remove support for legacy mapping (Sami Kiminki, 2017-11-08)

  Make NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL mandatory for all map IOCTLs. We'll clean up the legacy kernel code in subsequent patches.

  Remove support for NVGPU_AS_IOCTL_MAP_BUFFER. It has been superseded by NVGPU_AS_IOCTL_MAP_BUFFER_EX. Remove legacy definitions of nvgpu_map_buffer_args and the related flags, and update the in-kernel map calls accordingly by switching to the newer definitions.

  Bug 1902982

  Change-Id: Ie9a7f02b8d5d0ec7c3722c4481afab6d39b4fbd0
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1560932
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Introduce queries for big page sizes (Terje Bergstrom, 2017-11-08)

  Introduce query functions for the default big page size and the available big page sizes. Move initialization of GPU characteristics big page sizes to the GPU characteristics query function.

  JIRA NVGPU-259

  Change-Id: Ie66cc2fbfcd88205593056f8d5010ac2539c8bc2
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1593685
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: drop user callback support in CE (Konsta Holtta, 2017-11-07)

  Simplify the copyengine code by deleting support for the ce_event_callback feature that has never been used. Similarly, create a channel without the finish callback to get rid of that Linux dependency, and delete the finish callback function as it now serves no purpose. Also delete the submitted_seq_number and completed_seq_number fields, which are only written to.

  Jira NVGPU-259

  Change-Id: I02d15bdcb546f4dd8895a6bfb5130caf88a104e2
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1589320
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove buffer_attrs struct (Alex Waterman, 2017-11-01)

  Remove the buffer_attrs struct and replace it with a more streamlined nvgpu_ctag_buffer_info struct. This struct allows several different fields to all be passed by pointer to the various kind/compression functions in the VM map process.

  This patch also moves several comptag/kind related functions to the core vm.c code, since these functions can be reused by other OSes.

  Change-Id: I2a0f0a1c4b554ce4c8f2acdbe3161392e717d3bf
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1583984
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove user_mapped from mapped_buf (Alex Waterman, 2017-11-01)

  Remove the always-true field 'user_mapped' from the mapped_buf struct. All mapped_bufs are mapped by a user request since they always originate from a dma_buf (for Linux, that is). As such there is a fair amount of logic that could be deleted.

  Linux specific: the own_mem_ref field was also deleted. The logic of only storing a dma_buf ref when the buffer is mapped for the first time by a user is easy: when the mapped buffer is found in the map cache, release the outstanding dma_buf ref taken earlier on in the map path. If the map cache does not have the buffer, simply let the higher level map code keep the dma_buf ref. The dma_buf ref is released when the nvgpu_vm_unmap_system() call-back is called by the unmap path.

  JIRA NVGPU-30
  JIRA NVGPU-71

  Change-Id: I229d136713812a7332bdadd5ebacd85d983bbbf0
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1583983
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
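  A sketch of that ref-count logic; nvgpu_vm_unmap_system() is named by the commit, while the lookup helper and field names are assumptions:

      /* Earlier in the map path a dma_buf ref was taken via dma_buf_get(). */
      mapped_buf = __nvgpu_vm_find_mapped_buf(vm, map_addr);
      if (mapped_buf != NULL) {
              /* Map-cache hit: the cache already holds a ref from the
               * first map, so drop the one taken above and reuse the
               * existing mapping. */
              dma_buf_put(dmabuf);
              nvgpu_ref_get(&mapped_buf->ref);
              return mapped_buf->addr;
      }
      /* Cache miss: keep the dma_buf ref; it is dropped later by
       * nvgpu_vm_unmap_system() in the unmap path. */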
* gpu: nvgpu: VM unmap refactoring (Alex Waterman, 2017-11-01)

  Re-organize the unmap code to be better split between OS specific requirements and common core requirements. The new code flow works as follows:

  nvgpu_vm_unmap() is the primary entrance to the unmap path. It takes a VM and a GPU virtual address to unmap. There's also an optional batch mapping struct. This function is responsible for making sure there is a real buffer and that, if it's being called on a fixed mapping, the mapping will definitely be freed (since buffers are ref-counted). Then this function decrements the ref-count and returns.

  If the ref-count hits zero then __nvgpu_vm_unmap_ref() is called, which just calls __nvgpu_vm_unmap() with the relevant batch struct if present. This is where the real work is done. __nvgpu_vm_unmap() clears the GMMU mapping, removes the mapped buffer from the various lists and trees it may be in, and then calls the nvgpu_vm_unmap_system() function. This function handles any OS specific stuff and must be defined by all VM OS implementations.

  There's a shortcut used by some other core VM code to free mappings without going through nvgpu_vm_unmap(). Mostly they just directly decrement the mapping ref-count, which can then call __nvgpu_vm_unmap_ref() if the ref-count hits zero.

  JIRA NVGPU-30
  JIRA NVGPU-71

  Change-Id: Ic626d37ab936819841bab45214f027b40ffa4e5a
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1583982
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
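  The call chain above, condensed into a sketch (locking detail and batch plumbing elided; exact signatures are assumptions):

      void nvgpu_vm_unmap(struct vm_gk20a *vm, u64 offset,
                          struct vm_gk20a_mapping_batch *batch)
      {
              struct nvgpu_mapped_buf *mapped_buf;

              nvgpu_mutex_acquire(&vm->update_gmmu_lock);
              mapped_buf = __nvgpu_vm_find_mapped_buf(vm, offset);
              if (mapped_buf != NULL)
                      /* Calls __nvgpu_vm_unmap_ref() when the count hits
                       * zero, which does the real work in
                       * __nvgpu_vm_unmap(): clear the GMMU mapping, unlink
                       * the buffer from the VM's lists/trees, then call
                       * nvgpu_vm_unmap_system(). */
                      nvgpu_ref_put(&mapped_buf->ref, __nvgpu_vm_unmap_ref);
              nvgpu_mutex_release(&vm->update_gmmu_lock);
      }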
* gpu: nvgpu: fix pte location functions (David Nieto, 2017-11-01)

  Modify the recursive loop in pte_find to make sure it is targeting the proper pde page size.

  JIRA NVGPUGV100-36

  Change-Id: Ib3673d8d9f1bd3c907d532f9e2562ecdc5dda4af
  Signed-off-by: David Nieto <dmartineznie@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1586739
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: fix fault in gk20a_comptag_allocator_destroy (Thomas Fleury, 2017-10-29)

  In gk20a_comptag_allocator_destroy, allocator->g may not be initialized. This leads to a NULL pointer dereference when enabling CONFIG_NVGPU_TRACK_MEM_USAGE. Use the available g parameter instead.

  Bug 200352099
  JIRA EVLR-1959

  Change-Id: I9edda516bb88cced8e7d247261e52ba6594f3b2e
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1586504
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Linux specific GPU characteristics flags (Terje Bergstrom, 2017-10-26)

  Make GPU characteristics flags specific to Linux code only. The rest of the driver is moved to using the nvgpu_is_enabled() API.

  JIRA NVGPU-259

  Change-Id: I2faf46ef64c964361c267887b28c9d19806d6d51
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1583876
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Silence extra mm debug messages (Terje Bergstrom, 2017-10-26)

  common/mm/mm.c uses nvgpu_info() to log debug events. Replace that with nvgpu_dbg_info() to silence the messages.

  Change-Id: Iaa5b8192287e8392a32ceff2216faf12fd6d09c3
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1585440
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Begin reorganizing VM mapping/unmapping (Alex Waterman, 2017-10-24)

  Move vm_priv.h to <nvgpu/linux/vm.h> and rename nvgpu_vm_map() to nvgpu_vm_map_linux(). Also remove a redundant unmap function from the unmap path. These changes are the beginning of reworking the nvgpu Linux mapping and unmapping code. The rest of this patch is just the necessary changes to use the new map function naming and the new path to the Linux vm header.

  Patch Series Goal
  -----------------

  There are two major goals for this patch series. Note that these goals are not achieved in this patch; there will be subsequent patches.

  1. Remove all last vestiges of Linux code from common/mm/vm.c
  2. Implement map caching in the common/mm/vm.c code

  To accomplish this, firstly the VM mapping code needs to have struct nvgpu_mapped_buf be completely Linux-free. That means implementing an abstraction for this to hold the Linux stuff that mapped buffers carry around (SGT, dma_buf). This is why the vm_priv.h code has been moved: it will need to be included by the <nvgpu/vm.h> header so that the OS specific struct can be pulled into struct nvgpu_mapped_buf.

  Next, renaming nvgpu_vm_map() to nvgpu_vm_map_linux() is in preparation for adding a new nvgpu_vm_map() that handles the map caching with nvgpu_mapped_buf. The mapping code is fairly straightforward: nvgpu_vm_map() does OS generic stuff; each OS then calls this function from an nvgpu_vm_map_<OS>() or the like that does any OS specific adjustments/management.

  Freeing buffers is much more tricky, however. The maps are all reference counted since userspace does not track buffers and expects us to handle this instead. Ugh! Since there are ref-counts, the free code will require a callback into the OS specific code, since the OS specific code cannot free a buffer directly. This makes the path for freeing a buffer quite convoluted.

  JIRA NVGPU-30
  JIRA NVGPU-71

  Change-Id: I5e0975f60663a0d6cf0a6bd90e099f51e02c2395
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1578896
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: David Martinez Nieto <dmartineznie@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Cleanup generic MM code in gk20a/mm_gk20a.c (Alex Waterman, 2017-10-24)

  Move much of the remaining generic MM code to a new common location: common/mm/mm.c. Also add a corresponding <nvgpu/mm.h> header. This mostly consists of init and cleanup code to handle the common MM data structures like the VIDMEM code, address spaces for various engines, etc.

  A few more in-depth changes were made as well.

  1. alloc_inst_block() has been added to the MM HAL. This used to be defined directly in the gk20a code, but it used a register. As a result, if this register hypothetically changes in the future, it would need to become a HAL anyway. This patch preempts that and for now just defines all HALs to use the gk20a version.

  2. Rename as much as possible: global functions are, for the most part, prepended with nvgpu (there are a few exceptions which I have yet to decide what to do with). Functions that are static are renamed to be as consistent with their functionality as possible, since in some cases function effect and function name have diverged.

  JIRA NVGPU-30

  Change-Id: Ic948f1ecc2f7976eba4bb7169a44b7226bb7c0b5
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1574499
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: fix vidmem regression (David Nieto, 2017-10-23)

  Ensure all vidmem mutexes are initialized.

  Bug 2004378

  Change-Id: I2ffb1d8e99ecb269b36e5ea79d08db2021e54302
  Signed-off-by: David Nieto <dmartineznie@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1583196
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Add VIDMEM debugging (Alex Waterman, 2017-10-20)

  Add some VIDMEM debugging to help track the background free thread and allocs/frees.

  JIRA NVGPU-30
  JIRA NVGPU-138

  Change-Id: I88471b29d2a42c104666b111d0d3014110c9d56c
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1576330
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Convert VIDMEM work_struct to thread (Alex Waterman, 2017-10-20)

  Convert the work_struct used by the vidmem background clearing to a thread to make it more cross platform. The thread waits on a condition variable to determine when work needs to be done. The signal comes from the DMA API when it enqueues a new nvgpu_mem that needs clearing.

  Add logic for handling suspend: the CE cannot be accessed while the GPU is suspended. As such, the background thread must be paused while the GPU is suspended and the CE is not available.

  Several other changes were also made:

  o Move the code that enqueues a nvgpu_mem from the DMA API code to a function in the VIDMEM code.
  o Move nvgpu_vidmem_get_pending_alloc() to the Linux specific code as this function is only used there. It's a trivial function that QNX can easily implement as well.
  o Remove the was_empty logic from the enqueue. Now just always signal the condition variable when a new nvgpu_mem comes in.
  o Move CE suspend to after MM suspend.

  JIRA NVGPU-30
  JIRA NVGPU-138

  Change-Id: Ie9286ae5a127c3fced86dfb9794e7d81eab0491c
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1574498
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
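  A sketch of such a thread's main loop; NVGPU_COND_WAIT and nvgpu_thread_should_stop() follow nvgpu naming, while the vidmem field names and the clear helper are assumptions:

      static int nvgpu_vidmem_clear_thread(void *arg)
      {
              struct mm_gk20a *mm = arg;

              while (!nvgpu_thread_should_stop(&mm->vidmem.clearing_thread)) {
                      struct nvgpu_mem *mem;

                      /* Sleep until the DMA API enqueues an nvgpu_mem that
                       * needs clearing (or until we are told to stop). */
                      NVGPU_COND_WAIT(&mm->vidmem.clearing_thread_cond,
                              !nvgpu_list_empty(&mm->vidmem.clear_list_head) ||
                              nvgpu_thread_should_stop(&mm->vidmem.clearing_thread),
                              0);

                      while ((mem = nvgpu_vidmem_get_pending_alloc(mm)) != NULL)
                              nvgpu_vidmem_clear_mem(mm->g, mem); /* via the CE */
              }

              return 0;
      }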
* gpu: nvgpu: Refactoring nvgpu_vm functions (Alex Waterman, 2017-10-18)

  Refactor the last nvgpu_vm functions out of the mm_gk20a.c code. This removes some usages of dma_buf from the mm_gk20a.c code, too, which helps make mm_gk20a.c less Linux specific. Also delete some Linux-specific header inclusions that are no longer necessary in gk20a/mm_gk20a.c.

  The mm_gk20a.c code is now quite close to being Linux free.

  JIRA NVGPU-30
  JIRA NVGPU-138

  Change-Id: I72b370bd85a7b029768b0fb4827d6abba42007c3
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1566629
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Move dma_buf usage from mm_gk20a.c (Alex Waterman, 2017-10-18)

  Move most of the dma_buf usage present in the mm_gk20a.c code out to Linux specific code and some common/mm code. There are two primary groups of code:

  1. dma_buf priv field code (for holding comptag data)
  2. Comptag usage that relies on dma_buf pointers

  For (1) the dma_buf code was simply moved to common/linux/dmabuf.c since most of this code is clearly Linux specific.

  The comptag code was a bit more complicated since there are two parts to it. Firstly there's the code that manages the comptag memory; this is essentially a simple allocator. It was moved to common/mm/comptags.c since it can be shared across all chips. The second set of code is moved to common/linux/comptags.c since it is the interface between dma_bufs and the comptag memory.

  Two other fixes were done as well:

  - Add struct gk20a to the comptag allocator init so that the proper nvgpu_vzalloc() function could be used.
  - Add necessary includes to common/linux/vm_priv.h.

  JIRA NVGPU-30
  JIRA NVGPU-138

  Change-Id: I96c57f2763e5ebe18a2f2ee4b33e0e1a2597848c
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1566628
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: compile nvgpu allocator for QNX (Sourab Gupta, 2017-10-17)

  This patch has the changes for compilation of the common nvgpu allocator for QNX. This includes some cross-OS compilation changes and removing some Linux'isms from the allocator.

  Change-Id: Ib1ecceec77b497513a196597bff4441615577548
  Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1540306
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Reduce usage of nvgpu_vidmem_get_page_alloc (Alex Waterman, 2017-10-13)

  Reduce the usage of nvgpu_vidmem_get_page_alloc() and friends as much as possible. This reduces the dependency of nvgpu on Linux SGLs.

  SGLs still need to be used, however, since sharing buffers in userspace is done by dma_buf FD. The best way to pass the vidmem buf through the dma_buf is by SGL pointer.

  JIRA NVGPU-30
  JIRA NVGPU-138

  Change-Id: Ide0e9e5a557f00aa63b063be085042101a5b34ee
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1540709
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Separate vidmem alloc from Linux code (Alex Waterman, 2017-10-13)

  Split the core vidmem allocation from the Linux component of vidmem allocation. The core vidmem allocation allocates the nvgpu_mem struct that defines the vidmem buffer in the core MM code. The Linux code now allocates some Linux specific stuff (dma_buf, etc) and also allocates the core vidmem buf.

  JIRA NVGPU-30
  JIRA NVGPU-138

  Change-Id: I88e87e0abd5ec714610eacc6eac17e148bcee3ce
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1540708
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Rename vidmem APIs (Alex Waterman, 2017-10-13)

  Rename the VIDMEM APIs to be prefixed by nvgpu_ to ensure consistency and that all the non-static vidmem functions are properly namespaced.

  JIRA NVGPU-30
  JIRA NVGPU-138

  Change-Id: I9986ee8f2c8f95a4b7c5e2b9607bc1e77933ccfc
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1540707
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add syncpoint read map (seshendra Gadagottu, 2017-10-13)

  For the sync-point read map:

  1. Added an nvgpu_mem buffer in the gk20a struct; memory for it is allocated in gk20a_finalize_poweron() and freed in gk20a_remove().
  2. Added "u64 syncpt_ro_map_gpu_va" in the vm_gk20a struct for the read map in the vm.

  Added nvgpu_quiesce() in nvgpu_remove() before freeing the syncpoint read map, to ensure that nvgpu is idle.

  JIRA GPUT19X-2

  Change-Id: I7cbfec57f0992682dd833a1b0d92d694bcaf1eb3
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1514338
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Split VIDMEM support from mm_gk20a.c (Alex Waterman, 2017-10-10)

  Split VIDMEM support into its own code files organized as such:

    common/mm/vidmem.c     - Base vidmem support
    common/linux/vidmem.c  - Linux specific user-space interaction
    include/nvgpu/vidmem.h - Vidmem API definitions

  Also use the config to enable/disable VIDMEM support in the makefile and remove as many CONFIG_GK20A_VIDMEM preprocessor checks as possible from the source code. And lastly, update a while-loop that iterated over an SGT to use the new for_each construct for iterating over SGTs.

  Currently this organization is not perfectly adhered to. More patches will fix that.

  JIRA NVGPU-30
  JIRA NVGPU-138

  Change-Id: Ic0f4d2cf38b65849c7dc350a69b175421477069c
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1540705
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: fix double iteration in gmmu update (Konsta Holtta, 2017-10-10)

  __nvgpu_gmmu_do_update_page_table() uses nvgpu_sgt_for_each_sgl to loop through the entries of a buffer to be mapped, so when continue is used, the sgl entry must not be reassigned again like it was before with a pure while-and-update loop. Delete a reassignment to fix a case where sgl = sgl->next could happen twice.

  Bug 2002279
  Bug 2001466

  Change-Id: I47c8b853d4b35304740cd4e8a840df02fcd23054
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1575279
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Timo Alho <talho@nvidia.com>
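  An illustration of the bug class fixed here: nvgpu_sgt_for_each_sgl() advances sgl itself, so a manual advance before 'continue' skips an entry. The skip condition and helpers are illustrative, not the driver's real code:

      nvgpu_sgt_for_each_sgl(sgl, sgt) {
              if (chunk_fully_skipped(sgl)) {              /* hypothetical */
                      sgl = nvgpu_sgt_get_next(sgt, sgl);  /* BUG: double advance */
                      continue;
              }
              map_chunk(vm, sgl);                          /* hypothetical */
      }

      /* Fixed version: plain 'continue'; the loop macro itself moves on
       * to the next entry. */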
* gpu: nvgpu: Combine VIDMEM and SYSMEM map paths (Alex Waterman, 2017-10-04)

  Combine the VIDMEM and SYSMEM GMMU mapping paths into one single function. With the proper nvgpu_sgt abstraction in place, the high level mapping code can treat these two types of maps identically.

  The benefits of this are immediate: a change to the mapping code will have much more testing coverage, since the same code runs on both vidmem and sysmem. This also makes it easier to make changes to the mapping code, since the changes will be testable no matter what type of mem is being mapped.

  JIRA NVGPU-68

  Change-Id: I308eda03d426966ba8e1d4892c99cf97b45c5993
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1566706
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Add for_each construct for nvgpu_sgts (Alex Waterman, 2017-10-04)

  Add a macro to iterate across nvgpu_sgts. This makes it easier on developers who may accidentally forget to move to the next SGL.

  JIRA NVGPU-243

  Change-Id: I90154a5d23f0014cb79bbcd5b6e8d8dbda303820
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1566627
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
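  A sketch of such a macro; the exact form here is an assumption, though it follows the usual pattern for this kind of construct:

      #define nvgpu_sgt_for_each_sgl(__sgl__, __sgt__)                  \
              for ((__sgl__) = (__sgt__)->sgl;                          \
                   (__sgl__) != NULL;                                   \
                   (__sgl__) = nvgpu_sgt_get_next(__sgt__, __sgl__))

  Because the loop header does the advancing, a body may safely 'continue' without touching the cursor, which is exactly the property the double-iteration fix above relies on.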
* gpu: nvgpu: rename ops.mm.get_physical_addr_bits (Alex Waterman, 2017-10-04)

  Rename get_physical_addr_bits and related functions to something that more clearly conveys what they are doing. The basic idea of these functions is to translate from a physical GPU address to an IOMMU GPU address. To do that, a particular bit (that varies from chip to chip) is added to the physical address.

  JIRA NVGPU-68

  Change-Id: I536cc595c4397aad69a24f740bc74db03f52bc0a
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1542966
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
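  A sketch of the translation these functions perform; the helper name and the bit accessor are assumptions:

      static u64 phys_to_iommu_gpu_addr(struct gk20a *g, u64 phys)
      {
              /* Setting this chip-specific bit makes the GPU route the
               * access through the IOMMU instead of straight to memory. */
              return phys | 1ULL << g->ops.mm.get_iommu_bit(g);
      }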
* gpu: nvgpu: Remove sg_phys() from GMMU code (Alex Waterman, 2017-10-04)

  Remove the last sg_phys() call from the GMMU code and replace it with a generic nvgpu_mem API. This new API, nvgpu_mem_get_phys_addr(), returns the physical address of an nvgpu_mem struct.

  Also, implement this new API in the Linux specific nvgpu_mem code since it requires access to the underlying SGT/SGL.

  JIRA NVGPU-68

  Change-Id: Idf88701a2a8515464c658c26e0de493c82ff850d
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1542964
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Fix license of nvgpu_mem and pmu_debug (Terje Bergstrom, 2017-10-03)

  nvgpu_mem and pmu_debug should be MIT licensed. Change the license boilerplate.

  JIRA NVGPU-218

  Change-Id: I7750368674faa4c4e8bf071e136b80fd53d9a0c4
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1568779
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
* gpu: nvgpu: Change license for common files to MIT (Terje Bergstrom, 2017-09-26)

  Change license of OS independent source code files to MIT.

  JIRA NVGPU-218

  Change-Id: I1474065f4b552112786974a16cdf076c5179540e
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1565880
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: SGL passthrough implementation (Sunny He, 2017-09-22)

  The basic nvgpu_mem_sgl implementation provides support for OS specific scatter-gather list implementations by simply copying them node by node. This is inefficient, taking extra time and memory.

  This patch implements an nvgpu_mem_sgt struct to act as a header which is inserted at the front of any scatter-gather list implementation. This labels every struct with a set of ops which can be used to interact with the attached scatter gather list.

  Since nvgpu common code only has to interact with these function pointers, any sgl implementation can be used. Initialization only requires the allocation of a single struct, removing the need to copy or iterate through the sgl being converted.

  Jira NVGPU-186

  Change-Id: I2994f804a4a4cc141b702e987e9081d8560ba2e8
  Signed-off-by: Sunny He <suhe@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1541426
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
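  A sketch of the header-with-ops idea described here; the op list is modeled on the description and is an assumption, not the verbatim struct:

      struct nvgpu_sgt_ops {
              void *(*sgl_next)(void *sgl);
              u64   (*sgl_phys)(void *sgl);
              u64   (*sgl_dma)(void *sgl);
              u64   (*sgl_length)(void *sgl);
              u64   (*sgl_gpu_addr)(struct gk20a *g, void *sgl,
                                    struct nvgpu_gmmu_attrs *attrs);
              void  (*sgt_free)(struct gk20a *g, struct nvgpu_sgt *sgt);
      };

      struct nvgpu_sgt {
              const struct nvgpu_sgt_ops *ops;
              void *sgl;  /* points at the OS-specific scatter-gather list */
      };

  Common code calls only through ops, so converting an OS list costs one small struct allocation instead of a node-by-node copy.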
* gpu: nvgpu: nvgpu SGL implementation (Alex Waterman, 2017-09-22)

  The last major item preventing the core MM code in the nvgpu driver from being platform agnostic is the usage of Linux scattergather tables and scattergather lists. These data structures are used throughout the mapping code to handle discontiguous DMA allocations and are also overloaded to represent VIDMEM allocs.

  The notion of a scatter gather table is crucial to a HW device that can handle discontiguous DMA. The GPU has an MMU which allows the GPU to do page gathering and present a virtually contiguous buffer to the GPU HW. As a result it makes sense for the GPU driver to use some sort of scatter gather concept to maximize memory usage efficiency.

  To that end this patch keeps the notion of a scatter gather list but implements it in the nvgpu common code. It is based heavily on the Linux SGL concept. It is a singly linked list of blocks - each representing a chunk of memory. To map or use a DMA allocation, SW must iterate over each block in the SGL.

  This patch implements the most basic level of support for this data structure. There are certainly easy optimizations that could be done to speed up the current implementation. However, this patch's goal is to simply divest the core MM code from any last Linux'isms. Speed and efficiency come next.

  Change-Id: Icf44641db22d87fa1d003debbd9f71b605258e42
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1530867
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Fix blockcount in lockless allocator (Alex Waterman, 2017-09-07)

  Make sure that the block count is the length / block_size, since the length is passed in bytes.

  Change-Id: Ibb132b16b70b9cd7117c441f37a7947052d279ee
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1538976
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
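  The essence of the fix, with assumed identifier names:

      /* 'length' is in bytes; the allocator needs a block count. */
      nr_nodes = (int)(length / blk_size);  /* was, in effect: nr_nodes = length; */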
* gpu: nvgpu: NVGPU abstraction for ACCESS_ONCE (Debarshi Dutta, 2017-09-03)

  Construct a wrapper macro NV_ACCESS_ONCE(x) which uses OS specific versions of ACCESS_ONCE, e.g. for Linux, ACCESS_ONCE(x) is used.

  Jira NVGPU-125

  Change-Id: Ia5c67baae111c1a7978c530bf279715fc808287d
  Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1549928
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
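  A minimal sketch of such a wrapper, assuming the Linux branch uses the kernel's ACCESS_ONCE and other OSes fall back to a plain volatile access:

      #ifdef __KERNEL__                /* Linux */
      #define NV_ACCESS_ONCE(x)   ACCESS_ONCE(x)
      #else                            /* e.g. QNX */
      #define NV_ACCESS_ONCE(x)   (*(volatile typeof(x) *)&(x))
      #endif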
* gpu: nvgpu: cleanup allocator debugging (Alex Waterman, 2017-08-28)

  Remove debugging features that did not really get used and make the debugging code use the nvgpu_log() functionality. This ties the allocator debugging into the larger nvgpu debug framework.

  Also modify many of the places CONFIG_DEBUG_FS was used to conditionally compile allocator debug code to use __KERNEL__ instead. This is because that debug code can still be called even when debugfs is not present in Linux.

  Change-Id: I112ebe1cae22d6f8db96d023993498093e18d74a
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1544439
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: remove unused tracing from allocators (Alex Waterman, 2017-08-28)

  Remove an unused tracing feature from the allocator code. This should make multi-OS support slightly easier.

  Change-Id: I8b57f037b66227f2110e457bff7aa6b443d89e82
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1544438
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>