path: root/drivers/gpu/nvgpu/common/linux
* gpu: nvgpu: Declare gk20a_user_*init() in ioctl.h (Terje Bergstrom, 2017-10-10)

    The functions gk20a_user_init() and gk20a_user_deinit() are defined in
    ioctl.c. Move the declarations of these functions to a new header file,
    ioctl.h.

    JIRA NVGPU-259

    Change-Id: If348b51e9032083f252a7c7717ed7bc153dbba52
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1569696
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Move gk20a->busy_lock to os_linux (Terje Bergstrom, 2017-10-10)

    gk20a->busy_lock is a Linux specific rw_semaphore used only by Linux
    code. Move it to os_linux.

    JIRA NVGPU-259

    Change-Id: I220a8a080a5050732683b875d3c1d0539ba0f40e
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1569695
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: clean up channel open/release declares (Deepak Nibade, 2017-10-10)

    The APIs below are already declared in ioctl_channel.h, so remove the
    duplicate declarations from channel_gk20a.h:

        gk20a_channel_open()
        gk20a_channel_ioctl()
        gk20a_channel_release()

    Also move the declaration of gk20a_channel_open_ioctl() from
    channel_gk20a.h to ioctl_channel.h.

    Jira NVGPU-259

    Change-Id: I46702ca481e41a19f92f4fe0169f95e31360abe0
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1573106
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add support for pre-os FW (David Nieto, 2017-10-10)

    The pre-OS firmware takes care of, among other things, fan control
    until the driver takes over. On some GPUs, not enabling this firmware
    can lead to physical board damage, so it needs to be run.

    JIRA: NVGPUGV100-9

    Change-Id: I18d54cfd5eb64ecec79c5dae67ac8d5bb1facf36
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1549035
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Move PRAMIN functions to nvgpu_mem (Terje Bergstrom, 2017-10-10)

    The PRAMIN batch access functions are only used by nvgpu_mem, and the
    way they are written is Linux specific, so move the implementation out
    of the common PRAMIN code.

    JIRA NVGPU-259

    Change-Id: I6e2aba08c98568c651a86fe8ca7f9f5220d67348
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1569697
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Split VIDMEM support from mm_gk20a.c (Alex Waterman, 2017-10-10)

    Split VIDMEM support into its own code files organized as such:

        common/mm/vidmem.c     - Base vidmem support
        common/linux/vidmem.c  - Linux specific user-space interaction
        include/nvgpu/vidmem.h - Vidmem API definitions

    Also use the config to enable/disable VIDMEM support in the makefile
    and remove as many CONFIG_GK20A_VIDMEM preprocessor checks as possible
    from the source code. And lastly update a while-loop that iterated
    over an SGT to use the new for_each construct for iterating over SGTs.

    Currently this organization is not perfectly adhered to. More patches
    will fix that.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: Ic0f4d2cf38b65849c7dc350a69b175421477069c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1540705
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: change logging enum names (Shashank Singh, 2017-10-04)

    Add an nvgpu prefix to the logging enums. In debug mode, QNX and
    Integrity already have a #define DEBUG, which conflicts with the
    logging enum DEBUG.

    Change-Id: I882e566302842f8b79daf11d5f0850dec222cfea
    Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1570193
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: skip clk gating prog for sim/emu. (Deepak Goyal, 2017-10-04)

    For simulation/emulation platforms, clock gating should be skipped as
    it is not supported. Added new "can_<x>lcg" flags to check platform
    capability before doing SLCG, BLCG and ELCG.

    Bug 200314250

    Change-Id: I4124d444a77a4c06df8c1d82c6038bfd457f3db0
    Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1566049
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: rename ops.mm.get_physical_addr_bits (Alex Waterman, 2017-10-04)

    Rename get_physical_addr_bits and related functions to something that
    more clearly conveys what they are doing. The basic idea of these
    functions is to translate from a physical GPU address to an IOMMU GPU
    address. To do that, a particular bit (which varies from chip to chip)
    is added to the physical address.

    JIRA NVGPU-68

    Change-Id: I536cc595c4397aad69a24f740bc74db03f52bc0a
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1542966
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add common vaddr translate function (Alex Waterman, 2017-10-04)

    Add a function to do address translation for IOMMU-capable GPUs. When
    an iGPU is behind an IOMMU, it can choose whether to use that IOMMU
    for translation by adding a bit to physical addresses; this function
    takes care of that.

    This required an abstracted nvgpu_iommuable() API to check whether a
    GPU is behind an IOMMU. This patch adds that API for Linux.

    JIRA NVGPU-68

    Change-Id: I489d14475167c019c294407372395df78c8b5feb
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1542965
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
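A minimal sketch of the translation idea described in the commit above. Everything named *_sketch is invented for illustration: the real nvgpu_iommuable() lives in the driver, and the actual bit position comes from a per-chip HAL rather than a struct field.

    /* Hedged sketch, not the driver's actual code. */
    #include <stdbool.h>
    #include <stdint.h>

    struct gpu_sketch {
        bool behind_iommu;   /* filled in by OS-specific probe code */
        uint64_t iommu_bit;  /* chip-specific bit, e.g. bit 34 (assumed) */
    };

    static bool nvgpu_iommuable_sketch(const struct gpu_sketch *g)
    {
        /* On Linux this would inspect the device's IOMMU attachment. */
        return g->behind_iommu;
    }

    static uint64_t gpu_phys_to_iommu_sketch(const struct gpu_sketch *g,
                                             uint64_t phys)
    {
        /* Without an IOMMU the physical address is used directly. */
        if (!nvgpu_iommuable_sketch(g) || g->iommu_bit == 0)
            return phys;

        /* Setting the chip-specific bit routes the access via the IOMMU. */
        return phys | g->iommu_bit;
    }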
* gpu: nvgpu: Remove sg_phys() from GMMU code (Alex Waterman, 2017-10-04)

    Remove the last sg_phys() call from the GMMU code and replace it with
    a generic nvgpu_mem API. This new API, nvgpu_mem_get_phys_addr(),
    returns the physical address of an nvgpu_mem struct.

    Also, implement this new API in the Linux specific nvgpu_mem code
    since it requires access to the underlying SGT/SGL.

    JIRA NVGPU-68

    Change-Id: Idf88701a2a8515464c658c26e0de493c82ff850d
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1542964
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
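One plausible shape for such an accessor on Linux, sketched under assumptions: the struct layout and field names below are illustrative, and the real nvgpu_mem_get_phys_addr() implementation is not shown in the commit text.

    /* Hedged sketch of a Linux-side implementation. */
    #include <linux/scatterlist.h>

    struct nvgpu_mem_linux_sketch {
        struct sg_table *sgt;  /* OS-private backing storage (assumed) */
    };

    static u64 nvgpu_mem_get_phys_addr_sketch(struct nvgpu_mem_linux_sketch *mem)
    {
        /*
         * Meaningful only for a physically contiguous allocation, so the
         * address of the first scatterlist entry is sufficient.
         */
        return (u64)sg_phys(mem->sgt->sgl);
    }

The point of the commit is that common GMMU code calls the nvgpu_mem API and never touches sg_phys() or any other Linux scatterlist detail directly.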
* gpu: nvgpu: Return -EINVAL for bad l2 flush args (Alex Waterman, 2017-10-04)

    When the L2 flush IOCTL is passed neither an L2 flush nor an FB flush,
    we now return -EINVAL. This can happen if the user tries to only
    invalidate; L2 invalidate-only operations are currently not supported.

    Bug 1661242

    Change-Id: I87f3259bfbd736b5f4222cfe7b3cfa4a6475389e
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1227125
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
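A small sketch of the argument check described above. The field names are assumptions; the real IOCTL argument struct differs, but the early-rejection logic is the point.

    /* Hedged sketch of the validation, not the real IOCTL handler. */
    #include <errno.h>

    struct l2_flush_args_sketch {
        int flush_l2;       /* nonzero: flush L2 */
        int flush_fb;       /* nonzero: flush FB */
        int invalidate_l2;  /* nonzero: invalidate L2 */
    };

    static int l2_flush_check_sketch(const struct l2_flush_args_sketch *args)
    {
        /* Invalidate-only requests are not supported; reject them early. */
        if (!args->flush_l2 && !args->flush_fb)
            return -EINVAL;

        return 0;  /* the actual flush/invalidate would follow */
    }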
* gpu: nvgpu: use u64 instead of dma_addr_t (Sourab Gupta, 2017-09-26)

    The patch modifies a common DMA API (nvgpu_dma_alloc_flags_vid_at) to
    take a u64 argument, which is OS agnostic, instead of dma_addr_t,
    which is Linux specific.

    Change-Id: I74a694b08364b4d9e2826ffaf4620b113604d1cf
    Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1567709
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: prevent crash during unbind (David Nieto, 2017-09-22)

    This change solves crashes during unbind that were introduced in the
    driver during the OS unification refactoring, due to lack of coverage
    of the remove() function. The fixes during remove are:

    (1) Prevent NULL dereference on GPUs with secure boot
    (2) Prevent NULL dereferences when fecs_trace is not enabled
    (3) Added PRAMIN blocker during driver removal if HW is no longer
        accessible
    (4) Prevent double free of debugfs nodes as they are handled on the
        debugfs_remove_recursive() call
    (5) quiesce() can now be called without checking if the HW-accessible
        flag is set
    (6) Added function to free irq so no IRQ association is left on the
        driver after it is removed
    (7) Prevent NULL dereference on nvgpu_thread_stop() if the thread is
        already stopped

    JIRA: EVLR-1739

    Change-Id: I787d38f202d5267a6b34815f23e1bc88110e8455
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1563005
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: SGL passthrough implementation (Sunny He, 2017-09-22)

    The basic nvgpu_mem_sgl implementation provides support for OS
    specific scatter-gather list implementations by simply copying them
    node by node. This is inefficient, taking extra time and memory.

    This patch implements an nvgpu_mem_sgt struct to act as a header which
    is inserted at the front of any scatter-gather list implementation.
    This labels every struct with a set of ops which can be used to
    interact with the attached scatter-gather list.

    Since nvgpu common code only has to interact with these function
    pointers, any sgl implementation can be used. Initialization only
    requires the allocation of a single struct, removing the need to copy
    or iterate through the sgl being converted.

    Jira NVGPU-186

    Change-Id: I2994f804a4a4cc141b702e987e9081d8560ba2e8
    Signed-off-by: Sunny He <suhe@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1541426
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
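The ops-based wrapper described above lends itself to a small illustration. This is a minimal sketch of the idea, not the driver's actual definitions: names ending in _sketch are invented here, and the real nvgpu_sgt/nvgpu_sgt_ops layouts differ.

    #include <stdint.h>

    struct nvgpu_sgt_ops_sketch {
        void *(*sgl_next)(void *sgl);
        uint64_t (*sgl_phys)(void *sgl);
        uint64_t (*sgl_dma)(void *sgl);
        uint64_t (*sgl_length)(void *sgl);
    };

    /* Header attached to the front of an OS-specific scatter-gather list. */
    struct nvgpu_sgt_sketch {
        const struct nvgpu_sgt_ops_sketch *ops;
        void *sgl;  /* opaque pointer to the OS-specific list head */
    };

    /* Common code walks the list purely through the attached ops. */
    static uint64_t sgt_total_length_sketch(const struct nvgpu_sgt_sketch *sgt)
    {
        uint64_t total = 0;
        void *sgl;

        for (sgl = sgt->sgl; sgl != NULL; sgl = sgt->ops->sgl_next(sgl))
            total += sgt->ops->sgl_length(sgl);

        return total;
    }

Because common code only ever goes through the ops table, any OS-specific list (a Linux scatterlist, a VIDMEM allocation chain, and so on) can sit behind the same interface without being copied node by node.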
* gpu: nvgpu: nvgpu SGL implementation (Alex Waterman, 2017-09-22)

    The last major item preventing the core MM code in the nvgpu driver
    from being platform agnostic is the usage of Linux scatter-gather
    tables and scatter-gather lists. These data structures are used
    throughout the mapping code to handle discontiguous DMA allocations
    and are also overloaded to represent VIDMEM allocs.

    The notion of a scatter-gather table is crucial for a HW device that
    can handle discontiguous DMA. The GPU has an MMU which allows the GPU
    to do page gathering and present a virtually contiguous buffer to the
    GPU HW. As a result it makes sense for the GPU driver to use some sort
    of scatter-gather concept to maximize memory usage efficiency.

    To that end this patch keeps the notion of a scatter-gather list but
    implements it in the nvgpu common code. It is based heavily on the
    Linux SGL concept. It is a singly linked list of blocks - each
    representing a chunk of memory. To map or use a DMA allocation SW must
    iterate over each block in the SGL.

    This patch implements the most basic level of support for this data
    structure. There are certainly easy optimizations that could be done
    to speed up the current implementation. However, this patch's goal is
    simply to divest the core MM code of any last Linux'isms. Speed and
    efficiency come next.

    Change-Id: Icf44641db22d87fa1d003debbd9f71b605258e42
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1530867
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Fix kmem debugging (Alex Waterman, 2017-09-20)

    Make the kmem debugging prints much more easily usable. Previously the
    prints would only take effect if full tracking was enabled with
    CONFIG_NVGPU_TRACK_MEM_USAGE. However, there are many times when it
    would be nice to have just the debug prints by simply setting the
    corresponding bit in the log mask:

        echo 0x80000 > /sys/kernel/debug/<gpu>/log_mask

    Also, this change now uses the real nvgpu_log() function instead of
    the legacy nvgpu_dbg() function, so the logging appears with proper
    GPU printing as well.

    Change-Id: If545da3d357d38fe8252e7d548c6765b995cd3d7
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1560248
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add per-GPU total to DMA memory prints (Alex Waterman, 2017-09-20)

    Track the total amount of DMA memory currently outstanding for each
    GPU. Print this total in the DMA debugging/logging prints.

    Bug 1956137

    Change-Id: I929598e5aa388ee84db0badb4eb9f7c6cbe030c7
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1559518
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add DMA logging (Alex Waterman, 2017-09-20)

    Add logging prints for the DMA interface in nvgpu. These prints show
    size, aligned size, type of alloc (sysmem vs vidmem), and flags for
    the alloc.

    Bug 1956137

    Change-Id: I3e15152959dbb256cb1679435a18ab6821f4cde3
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1559376
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add nvgpu_mem_is_valid() call (Alex Waterman, 2017-09-19)

    Add a function to check whether an nvgpu_mem is allocated (valid) or
    not. Also fix possibly leaked state in nvgpu_mems when they fail to
    allocate, and ensure that in the case of an allocation failure no
    state is accidentally leaked to the caller.

    This should make it less likely that a caller mistakes a buffer that
    failed to allocate for one that is actually allocated.

    Change-Id: I43224ece7da84e63b2f43f36f04941126fabf3c7
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1559419
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
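A sketch of what such a validity check can look like, assuming a cleared-on-failure struct whose aperture field distinguishes "never allocated" from sysmem/vidmem. The enum and field names here are assumptions for illustration, not the driver's actual definitions.

    enum aperture_sketch {
        APERTURE_INVALID_SKETCH = 0,  /* zeroed struct == not allocated */
        APERTURE_SYSMEM_SKETCH,
        APERTURE_VIDMEM_SKETCH,
    };

    struct nvgpu_mem_state_sketch {
        enum aperture_sketch aperture;
        unsigned long size;
    };

    static int nvgpu_mem_is_valid_sketch(const struct nvgpu_mem_state_sketch *mem)
    {
        /*
         * If the alloc path zeroes the struct on failure (the "no leaked
         * state" part of the commit), a failed buffer reads as invalid.
         */
        return mem->aperture != APERTURE_INVALID_SKETCH;
    }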
* gpu: nvgpu: check capability for reschedule runlist submit flag (David Li, 2017-09-18)

    NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST is only used by the
    realtime-priority EGL context, which checks for CAP_SYS_NICE during
    context creation in userspace, so it wasn't secure against an
    unprivileged program spoofing the submit ioctl with this flag to stall
    the GPU progress of others.

    This flag does increase the duration of submit by approximately 16 us,
    mostly due to register accesses and the PMU FIFO mutex.

    Bug 1989493
    Bug 1854791
    Bug 1968813

    Change-Id: I086b1d14f286abf8bd2d2dfae5945974b7fe6d1f
    Reviewed-on: https://git-master.nvidia.com/r/#/c/1558644
    Signed-off-by: David Li <davli@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1558644
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
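A sketch of the kind of in-kernel capability check the commit describes. capable() and CAP_SYS_NICE are the real kernel primitives; the flag value and the choice to return -EPERM (rather than, say, silently clearing the flag) are assumptions made for illustration.

    #include <linux/capability.h>
    #include <linux/errno.h>

    #define SUBMIT_FLAG_RESCHEDULE_RUNLIST_SKETCH  (1U << 0)  /* illustrative value */

    static int check_submit_flags_sketch(unsigned int flags)
    {
        /*
         * Rescheduling the runlist can stall other channels' progress, so
         * only a sufficiently privileged caller may request it.
         */
        if ((flags & SUBMIT_FLAG_RESCHEDULE_RUNLIST_SKETCH) &&
            !capable(CAP_SYS_NICE))
            return -EPERM;

        return 0;
    }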
* gpu: nvgpu: Direct GMMU PTE kind control (Sami Kiminki, 2017-09-15)

    Allow userspace to control directly the PTE kind for the mappings by
    supplying NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL for
    MAP_BUFFER_EX. In particular, in this mode, the userspace will tell
    the kernel whether the kind is compressible, and if so, what is the
    incompressible fallback kind. By supplying only the compressible kind,
    the userspace can require that the map kind will not be demoted to the
    incompressible fallback kind in case of comptag allocation failure.

    Add also a GPU characteristics flag
    NVGPU_GPU_FLAGS_SUPPORT_MAP_DIRECT_KIND_CTRL to signal whether direct
    kind control is supported.

    Fix indentation of nvgpu_as_map_buffer_ex_args header comment.

    Bug 1705731

    Change-Id: I317ab474ae53b78eb8fdd31bd6bca0541fcba9a4
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1543462
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: support platform specific TSG enable/disable (Deepak Nibade, 2017-09-15)

    Add platform specific operations to enable/disable a TSG and use them
    instead of directly calling enable/disable APIs.

    For gm20b/gp106/gp10b we continue to use gk20a_enable_tsg() and
    gk20a_disable_tsg() as platform specific operations.

    Bug 1739362

    Change-Id: I2dd0f38c8303757e8c7a47d8da0e30a790e514f0
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1560635
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Unify remove/shutdown codepaths (David Nieto, 2017-09-15)

    The following changes are part of the porting of the bind/unbind
    functionality. These changes reuse the shutdown codepaths in iGPU and
    dGPU and fix a locking issue in gk20a_busy() where the usage count can
    lead to a deadlock during driver shutdown.

    This also fixes a race condition with the gr/mm code by invalidating
    the sw-ready flag while holding the busy lock.

    JIRA: EVLR-1739

    Change-Id: I62ce47378436b21f447f4cd93388759ed3f9bad1
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1554959
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: page align DMA allocs (Alex Waterman, 2017-09-12)

    Explicitly page align DMA memory allocations. Non-page-aligned DMA
    allocs can lead to nvgpu_mem structs whose size is not page aligned,
    and such allocs can in some cases cause vGPU maps to return an error.
    More generally, DMA allocs in Linux are always page aligned in both
    size and address, so we might as well page align the alloc size to
    make code elsewhere in the driver simpler.

    To implement this, an aligned_size field has been added to struct
    nvgpu_mem. This field holds the real page-aligned size of the
    allocation; the original size is still saved in size.

    Change-Id: Ie08cfc4f39d5f97db84a54b8e314ad1fa53b72be
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1547902
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
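A minimal sketch of the alignment bookkeeping described above. PAGE_ALIGN is the standard Linux macro and aligned_size comes from the commit text; the struct shown here is a stand-in, not the real struct nvgpu_mem.

    #include <linux/mm.h>

    struct nvgpu_mem_size_sketch {
        size_t size;          /* size the caller asked for */
        size_t aligned_size;  /* size actually handed to the DMA allocator */
    };

    static void record_dma_sizes_sketch(struct nvgpu_mem_size_sketch *mem,
                                        size_t requested)
    {
        mem->size = requested;
        mem->aligned_size = PAGE_ALIGN(requested);
        /* The DMA allocation itself would then use mem->aligned_size. */
    }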
* gpu: nvgpu: Move XVE debugfs code to Linux module (Terje Bergstrom, 2017-09-12)

    Move XVE debugfs initialization code to live under common/linux.

    JIRA NVGPU-62

    Change-Id: Ic6677511d249bc0a2455dde01db5b230afc70bb1
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1535133
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: disable power features for dgpu PG503 (Mahantesh Kumbar, 2017-09-12)

    JIRA NVGPUGV100-7

    Change-Id: I17c72d88c8f44702289601dbfa3d8e434cdf351b
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1549262
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Validate buffer_offset argument (Alex Waterman, 2017-09-11)

    Validate the mapping_size argument in the VM mapping IOCTL before
    attempting to use the argument for anything.

    Bug 1954931

    Change-Id: I81b22dc566c6c6f89e5e62604ce996376b33a343
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1547046
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Move CDE code to Linux module (Terje Bergstrom, 2017-09-11)

    CDE is only used in Linux platforms, and the code is highly dependent
    on Linux APIs. Move the common CDE code to the Linux module and leave
    only the chip specific parts to HAL.

    Change-Id: I507fe7eceaf7607303dfdddcf438449a5f582ea7
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1554755
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: hold ch ref when getting ch from fd (Konsta Holtta, 2017-09-11)

    Fix a race condition in gk20a_get_channel_from_file(), which returns a
    channel pointer from an fd: take a reference to the channel before
    putting the file ref back. Now the caller is responsible for
    eventually releasing the channel reference.

    Also document why dbg_session_channel_data has to hold a ref to the
    channel file instead of just the channel: the latter might deadlock if
    the fds were closed in the "wrong" order.

    Change-Id: I8e91b809f5f7b1cb0c1487bd955ad6d643727a53
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1549290
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
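A sketch of the ordering fix described above: take the channel reference while the file reference still pins the channel, and only then drop the file reference. fget()/fput() are the real kernel helpers; the channel struct, its refcounting, and the use of file->private_data are simplifications assumed for illustration.

    #include <linux/file.h>
    #include <linux/fs.h>

    struct channel_sketch {
        int refcount;  /* simplified, non-atomic; the real channel differs */
    };

    static struct channel_sketch *channel_get_sketch(struct channel_sketch *ch)
    {
        if (ch)
            ch->refcount++;
        return ch;
    }

    static struct channel_sketch *get_channel_from_fd_sketch(int fd)
    {
        struct file *f = fget(fd);
        struct channel_sketch *ch;

        if (!f)
            return NULL;

        /* Reference the channel while the file still keeps it alive... */
        ch = channel_get_sketch(f->private_data);

        /* ...and only then release the file reference. */
        fput(f);

        /* The caller must eventually release the channel reference. */
        return ch;
    }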
* gpu: nvgpu: Cast int to u8 explicitly (Alex Waterman, 2017-09-07)

    In the Linux VM code the kind attribute is assigned to a u8 from an
    int. The code is safe due to the checks in place; however, Coverity
    does not like this, so explicitly cast to u8 before assigning kind to
    kind_v.

    Coverity ID: 2567919

    Bug 200291879

    Change-Id: I3860faa99da8ac1a24bd0c3d77c7fbc72207614a
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1548712
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove mapping of comptags to user VA (Terje Bergstrom, 2017-09-07)

    Remove the kernel feature to map compbit backing store areas to GPU
    VA. The feature is not used, and the relevant user space code is
    getting removed, too.

    Change-Id: I94f8bb9145da872694fdc5d3eb3c1365b2f47945
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1547898
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: vgpu: Use vgpu_pm functions in suspend and resume (Jinyoung Park, 2017-09-07)

    Use vgpu_pm functions for vgpu instead of gk20a_pm functions in
    suspend and resume.

    Change-Id: I9d23cd612caa3e6fa9be65d60ccdba3b7f893350
    Signed-off-by: Jinyoung Park <jinyoungp@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1544161
    Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
    Tested-by: Thomas Fleury <tfleury@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Add pd_max_batches sysfs node for gp10b (Sandeep Shinde, 2017-09-07)

    Add a new sysfs node, pd_max_batches, for setting the max batches
    value in the NV_PGRAPH_PRI_PD_AB_DIST_CONFIG_1_MAX_BATCHES register,
    which controls the max number of batches per alpha-beta transition
    stored in PD.

    Bug 1927124

    Change-Id: I2817f2d70dab348d8b0b8ba19bf1e9b9d23ca907
    Signed-off-by: Sandeep Shinde <sashinde@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1544104
    Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
    (cherry picked from commit aa4daddda23aa44a84464200f497eac802a8e6ce)
    Reviewed-on: https://git-master.nvidia.com/r/1543355
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: cleanup allocator debugging (Alex Waterman, 2017-08-28)

    Remove debugging features that did not really get used and make the
    debugging code use the nvgpu_log() functionality. This ties the
    allocator debugging into the larger nvgpu debug framework.

    Also modify many of the places CONFIG_DEBUG_FS was used to
    conditionally compile allocator debug code to use __KERNEL__ instead.
    This is because that debug code can still be called even when debugfs
    is not present in Linux.

    Change-Id: I112ebe1cae22d6f8db96d023993498093e18d74a
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1544439
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: remove unused tracing from allocators (Alex Waterman, 2017-08-28)

    Remove an unused tracing feature from the allocator code. This should
    make multi-OS support slightly easier.

    Change-Id: I8b57f037b66227f2110e457bff7aa6b443d89e82
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1544438
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: debugfs code to dump HAL functions (Sunny He, 2017-08-24)

    Prints addresses of device-specific HAL functions to the debugfs file
    hal/gops. The list of functions is produced by dumping the contents of
    the gpu_ops substruct of the gk20a struct. This interface makes the
    assumption that there are only function pointers in gpu_ops.

    The companion Python script nvgpu_debug_hal.py analyzes gk20a.h to
    determine operation counts and prettify the debugfs interface's
    output.

    Jira NVGPU-107

    Change-Id: I0910e86638d144979e8630bbc5b330bccfd3ad94
    Signed-off-by: Sunny He <suhe@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1542990
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
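The dumping trick described above (treating a struct assumed to contain only function pointers as an array of pointers) can be sketched as follows. seq_printf() and the %pS pointer format are real kernel facilities; whether the actual hal/gops file is implemented as a seq_file read handler is an assumption here.

    #include <linux/seq_file.h>

    /* Hedged sketch: walk a pointers-only struct and print each entry. */
    static void dump_ops_sketch(struct seq_file *s, const void *ops, size_t size)
    {
        const void * const *entry = ops;
        size_t i, n = size / sizeof(*entry);

        for (i = 0; i < n; i++)
            seq_printf(s, "%pS\n", entry[i]);  /* symbol each pointer resolves to */
    }

This only works because of the stated assumption that gpu_ops contains nothing but function pointers; any non-pointer member would break the array view.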
* gpu: nvgpu: Replace kref for refcounting in nvgpu (Debarshi Dutta, 2017-08-24)

    - added wrapper struct nvgpu_ref over nvgpu_atomic_t
    - added nvgpu_ref_* APIs to access the above struct

    JIRA NVGPU-140

    Change-Id: Id47f897995dd4721751f7610b6d4d4fbfe4d6b9a
    Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1540899
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
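A minimal sketch of a kref-style wrapper in the spirit of the commit above. It is built on C11 atomics purely for illustration; the real nvgpu_ref wraps nvgpu_atomic_t, and the API names below are stand-ins.

    #include <stdatomic.h>

    struct nvgpu_ref_sketch {
        atomic_int refcount;
    };

    static void ref_init_sketch(struct nvgpu_ref_sketch *ref)
    {
        atomic_store(&ref->refcount, 1);
    }

    static void ref_get_sketch(struct nvgpu_ref_sketch *ref)
    {
        atomic_fetch_add(&ref->refcount, 1);
    }

    static void ref_put_sketch(struct nvgpu_ref_sketch *ref,
                               void (*release)(struct nvgpu_ref_sketch *ref))
    {
        /* Run the release callback when the last reference is dropped. */
        if (atomic_fetch_sub(&ref->refcount, 1) == 1 && release)
            release(ref);
    }

The design point is the same as the Linux kref it replaces: common code holds and drops references through one small API, while the underlying atomic type stays OS-abstracted.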
* gpu: nvgpu: Remove support for old kernel version (Terje Bergstrom, 2017-08-22)

    Remove support for pre-4.4 kernels. This allows deleting the checks
    for kernel version, and usage of linux/version.h.

    Change-Id: I4d6cb30512ea164d27549f4f4d096e5931bb1379
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1543499
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Copy vbios_min_version to struct gk20a (Terje Bergstrom, 2017-08-22)

    Accessing vbios_min_version in gk20a_platform creates an extra
    dependency on Linux. Copy it to struct gk20a at driver initialization.

    Change-Id: I9ff5dbeb1fecc6dc44a62f7affc24fd52c2bab26
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1542837
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Use nvgpu flags for run_preos (Terje Bergstrom, 2017-08-22)

    Accessing run_preos from gk20a_platform causes an unnecessary Linux
    dependency, so copy the flag to the abstract flags.

    Change-Id: I4818fb6735201f36e552c1ff45138a44a3d94db1
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1542836
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Fix potential map failing for vGPU (Alex Waterman, 2017-08-17)

    Ensure that the mapping size passed to the vGPU mapping code is page
    aligned. The vGPU mapping code returns -EINVAL otherwise.

    Change-Id: I87a90085882fa0ff538b181a55240468392c4135
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1540423
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add wrapper over atomic_t and atomic64_t (Debarshi Dutta, 2017-08-17)

    - added wrapper structs nvgpu_atomic_t and nvgpu_atomic64_t over
      atomic_t and atomic64_t
    - added nvgpu_atomic_* and nvgpu_atomic64_* APIs to access the above
      wrappers.

    JIRA NVGPU-121

    Change-Id: I61667bb0a84c2fc475365abb79bffb42b8b4786a
    Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1533044
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Reorg mm HAL initialization (Sunny He, 2017-08-14)

    Reorganize HAL initialization to remove inheritance and construct the
    gpu_ops struct at compile time. This patch only covers the mm
    sub-module of the gpu_ops struct.

    Perform HAL function assignments in hal_gxxxx.c through the population
    of a chip-specific copy of gpu_ops.

    Jira NVGPU-74

    Change-Id: Ieb87a62f047510e51c52e6563d8e3fd5a65b5f28
    Signed-off-by: Sunny He <suhe@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1537753
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
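A sketch of the "construct gpu_ops at compile time" idea: a per-chip, statically initialized ops table is copied into the live gpu_ops rather than being built up through runtime inheritance. All names below (the sub-struct layout, gxxxx helpers) are illustrative stand-ins for the real driver definitions.

    struct mm_ops_sketch {
        int (*init_mm)(void *g);
        unsigned long (*get_big_page_size)(void *g);
    };

    struct gpu_ops_sketch {
        struct mm_ops_sketch mm;
    };

    static int gxxxx_init_mm_sketch(void *g) { (void)g; return 0; }
    static unsigned long gxxxx_big_page_sketch(void *g) { (void)g; return 1UL << 16; }

    /* Chip-specific table, fully known at compile time. */
    static const struct gpu_ops_sketch gxxxx_ops_sketch = {
        .mm = {
            .init_mm           = gxxxx_init_mm_sketch,
            .get_big_page_size = gxxxx_big_page_sketch,
        },
    };

    static void hal_gxxxx_init_mm_sketch(struct gpu_ops_sketch *gops)
    {
        /* Copy the whole sub-struct instead of assigning pointer by pointer. */
        gops->mm = gxxxx_ops_sketch.mm;
    }

The advantage over inheritance-style init is that the full per-chip table is visible in one place and the compiler catches missing or mistyped entries.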
* gpu: nvgpu: Add struct gk20a ptr to FUSE APIs (Alex Waterman, 2017-08-14)

    Add a pointer to struct gk20a to the FUSE APIs. This helps QNX builds
    avoid any static data definitions. Also this change plumbs struct
    gk20a in some of the Linux clk code and fixes a few minor style nits.

    Change-Id: I27dfb2c4e9a352f784d6cead150460d8e9e808d3
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1537611
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* Revert "gpu: nvgpu: Reorg mm HAL initialization"Sunny He2017-08-11
| | | | | | | | | | | | Conflicts with gv100 changes This reverts commit 8d63cd3995d4a650b478ad69d7e29ed2b1b2d927. Change-Id: Ie2f88d281b2b87a9a794d79164a61c4d883626b7 Signed-off-by: Sunny He <suhe@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1537668 Reviewed-by: Shu Zhong <shuz@nvidia.com> Tested-by: Shu Zhong <shuz@nvidia.com>
* gpu: nvgpu: Reorg mm HAL initialization (Sunny He, 2017-08-11)

    Reorganize HAL initialization to remove inheritance and construct the
    gpu_ops struct at compile time. This patch only covers the mm
    sub-module of the gpu_ops struct.

    Perform HAL function assignments in hal_gxxxx.c through the population
    of a chip-specific copy of gpu_ops.

    Jira NVGPU-74

    Change-Id: I289284e6e528fc7951c959c8765ccf9349eec33b
    Signed-off-by: Sunny He <suhe@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1533351
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: PG503 support (David Nieto, 2017-08-11)

    Adds basic PG503 support allowing devinit to complete.

    JIRA: EVLR-1693

    Change-Id: Ice8a9ba18c8bba11f6bc174ba2c2d8802a738706
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1532746
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: remove railgate lock from gm20b_tegra_postscale (Deepak Nibade, 2017-08-11)

    In gm20b_tegra_postscale(), we use platform->railgate_lock to check if
    the GPU is railgated or not. But platform->railgate_lock was
    introduced only to prevent unrailgating in the midst of the
    gk20a_do_idle() sequence.

    This lock is not the right way to check railgate status, since it is
    still possible to railgate the GPU while this lock is held. Hence,
    remove the acquire/release of platform->railgate_lock from
    gm20b_tegra_postscale().

    Bug 1962265

    Change-Id: I6208063de3fa77ed71e8fb0c011367fb66151193
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1536573
    (cherry picked from commit 68bce66be338e48f4921f645b10b3fa5994fe1d4)
    Reviewed-on: https://git-master.nvidia.com/r/1537297
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
* gpu: nvgpu: fix debugfs to disable big pages (Thomas Fleury, 2017-08-07)

    After setting 'Y' in disable_bigpage, in the native SMMU case, we
    could still see 64K GMMU pages being used. Fixed the following:

    - enforce disable_bigpage in nvgpu_vm_map
    - update GPU characteristics so that new clients know whether or not
      big pages are enabled. For instance, this may affect how CUDA
      requests memory mapping.

    JIRA EVLR-1694

    Change-Id: I62841096add3bd798c5c11090054f82c8a2be832
    Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1532429
    Reviewed-by: Richard Zhao <rizhao@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>