path: root/drivers/gpu/nvgpu/include
Commit message | Author | Age
* gpu: nvgpu: Add for_each construct for nvgpu_sgts | Alex Waterman | 2017-10-04
| | | | | | | | | | | | | Add a macro to iterate across nvgpu_sgts. This makes it easier on developers who may accidentally forget to move to the next SGL. JIRA NVGPU-243 Change-Id: I90154a5d23f0014cb79bbcd5b6e8d8dbda303820 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1566627 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
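For illustration, an iterator of this shape is what the message describes; the exact macro and helper names (nvgpu_sgt_for_each_sgl, nvgpu_sgt_get_next, nvgpu_sgt_get_length) are assumptions here, not taken from the change itself:

    /* Hypothetical sketch: walk every SGL entry of an nvgpu_sgt, advancing
     * through the SGT's helpers so a caller cannot forget the "next" step. */
    #define nvgpu_sgt_for_each_sgl(sgl, sgt) \
            for ((sgl) = (sgt)->sgl;         \
                 (sgl) != NULL;              \
                 (sgl) = nvgpu_sgt_get_next(sgt, sgl))

    /* Example use: sum the lengths of all chunks in an SGT. */
    static u64 nvgpu_sgt_total_length(struct nvgpu_sgt *sgt)
    {
            void *sgl;
            u64 total = 0;

            nvgpu_sgt_for_each_sgl(sgl, sgt)
                    total += nvgpu_sgt_get_length(sgt, sgl);

            return total;
    }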
* gpu: nvgpu: skip clk gating prog for sim/emu. | Deepak Goyal | 2017-10-04
For Simulation/Emulation platforms, clock gating should be skipped as it is not supported. Added new "can_<x>lcg" flags to check platform capability before doing SLCG, BLCG and ELCG. Bug 200314250 Change-Id: I4124d444a77a4c06df8c1d82c6038bfd457f3db0 Signed-off-by: Deepak Goyal <dgoyal@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1566049 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
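A minimal sketch of the kind of guard this adds; where the flag lives and the exact HAL name are assumptions for illustration:

    /* Only program SLCG when the platform supports it; simulation and
     * emulation platforms leave can_slcg false, so gating is skipped. */
    static void slcg_gr_load_enable(struct gk20a *g,
                                    struct gk20a_platform *platform)
    {
            if (!platform->can_slcg)
                    return;

            if (g->ops.clock_gating.slcg_gr_load_gating_prod)
                    g->ops.clock_gating.slcg_gr_load_gating_prod(g, true);
    }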
* gpu: nvgpu: rename ops.mm.get_physical_addr_bits | Alex Waterman | 2017-10-04
Rename get_physical_addr_bits and related functions to something that more clearly conveys what they are doing. The basic idea of these functions is to translate from a physical GPU address to an IOMMU GPU address. To do that, a particular bit (that varies from chip to chip) is added to the physical address. JIRA NVGPU-68 Change-Id: I536cc595c4397aad69a24f740bc74db03f52bc0a Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1542966 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
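In other words, the translation amounts to something like the following (illustrative only; the bit position varies per chip and the helper name is made up):

    /* Turn a physical GPU address into the address the GPU should use when
     * going through the IOMMU, by setting the chip-specific IOMMU bit. */
    static inline u64 phys_to_iommu_gpu_addr(u64 phys, u32 iommu_bit)
    {
            return phys | (1ULL << iommu_bit);
    }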
* gpu: nvgpu: Add common vaddr translate function | Alex Waterman | 2017-10-04
Add a function to do address translation for IOMMU-capable GPUs. When an iGPU is behind an IOMMU, it can pick whether to use that IOMMU for translation by adding a bit to physical addresses. This function takes care of that. However, this required an abstracted nvgpu_iommuable() API to check whether a GPU is behind an IOMMU. This patch adds that API for Linux. JIRA NVGPU-68 Change-Id: I489d14475167c019c294407372395df78c8b5feb Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1542965 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Sourab Gupta <sourabg@nvidia.com> Reviewed-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Remove sg_phys() from GMMU code | Alex Waterman | 2017-10-04
| | | | | | | | | | | | | | | | | | Remove the last sg_phys() call from the GMMU code and replace it with a generic nvgpu_mem API. This new API, nvgpu_mem_get_phys_addr(), returns the physical address of an nvgpu_mem struct. Also, implement this new API in the Linux specific nvgpu_mem code since it requires access to the underlying SGT/SGL. JIRA NVGPU-68 Change-Id: Idf88701a2a8515464c658c26e0de493c82ff850d Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1542964 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
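A plausible shape for the Linux-side implementation (paraphrased, not the verbatim patch; it assumes the nvgpu_mem keeps its Linux sg_table in OS-private data):

    #include <linux/scatterlist.h>

    u64 nvgpu_mem_get_phys_addr(struct gk20a *g, struct nvgpu_mem *mem)
    {
            /* Physical address of the first (and, for contiguous sysmem,
             * only) entry of the underlying scatter-gather table. */
            return sg_phys(mem->priv.sgt->sgl);
    }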
* gpu: nvgpu: remove GR falcons bootstrap support using VA | Mahantesh Kumbar | 2017-10-03
- GR falcons bootstrap can be done using a physical or virtual address by setting the flag usevamask in the PMU interface PMU_ACR_CMD_ID_BOOTSTRAP_MULTIPLE_FALCONS command - With this change, always use physical address support; virtual address support is removed along with its code. - Removed Linux-specific code used to get info regarding the WPR VA. JIRA NVGPU-128 Change-Id: Id58f3ddc4418d61126f2a4eacb50713d278c10a0 Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1572468 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com> Reviewed-by: Alex Waterman <alexw@nvidia.com>
* gpu: nvgpu: gp106: Qualify unsigned HW constants | Terje Bergstrom | 2017-09-27
| | | | | | | | | | | | Re-generate hardware headers so that all unsigned constants are qualified with postfix U. This removes the need for compiler to do implicit signed->unsigned conversions. Change-Id: I4221521e00442b044ff70007b7971f44cc3c4f67 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1567986 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
* gpu: nvgpu: gp10b: Qualify unsigned HW constants | Terje Bergstrom | 2017-09-27
| | | | | | | | | | | | Re-generate hardware headers so that all unsigned constants are qualified with postfix U. This removes the need for compiler to do implicit signed->unsigned conversions. Change-Id: I33d46bb103d083316266eb1d325ca9f1525bf047 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1567985 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
* gpu: nvgpu: gm20b: Qualify unsigned HW constants | Terje Bergstrom | 2017-09-27
| | | | | | | | | | | | Re-generate hardware headers so that all unsigned constants are qualified with postfix U. This removes the need for compiler to do implicit signed->unsigned conversions. Change-Id: Ic50345252c6d7ccb7e9059120b6cc751cdc28362 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1567984 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
* gpu: nvgpu: gk20a: Qualify unsigned HW constants | Terje Bergstrom | 2017-09-27
| | | | | | | | | | | | Re-generate hardware headers so that all unsigned constants are qualified with postfix U. This removes the need for compiler to do implicit signed->unsigned conversions. Change-Id: I6f23cb6be4000300388bf17a04103d01571fc250 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1567983 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
* gpu: nvgpu: Re-generate hardware headers | Terje Bergstrom | 2017-09-26
| | | | | | | | | | | | JIRA NVGPU-218 Change-Id: Ib00a921150612d59454d0ed76233e7e39a63d6ce Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1563850 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: David Martinez Nieto <dmartineznie@nvidia.com> Reviewed-by: Konsta Holtta <kholtta@nvidia.com> GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Add forward declaration of nvgpu_sgt | Sourab Gupta | 2017-09-26
The patch adds a forward declaration of nvgpu_sgt in the nvgpu_mem.h header file so it can be referred to inside struct nvgpu_sgt_ops. This prevents compilation issues when this header is included elsewhere without an explicit forward declaration. Change-Id: I5cfe5c723e961813425be5301d9bb770bb10c896 Signed-off-by: Sourab Gupta <sourabg@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1567713 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
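The pattern in question is simply the following (a sketch; the op names are illustrative):

    struct nvgpu_sgt;       /* forward declaration added by this patch */

    struct nvgpu_sgt_ops {
            void *(*sgl_next)(void *sgl);
            u64 (*sgl_phys)(void *sgl);
            /* ops like this one need the forward declaration above: */
            void (*sgt_free)(struct gk20a *g, struct nvgpu_sgt *sgt);
    };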
* gpu: nvgpu: use u64 instead of dma_addr_t | Sourab Gupta | 2017-09-26
The patch modifies a common DMA API (nvgpu_dma_alloc_flags_vid_at) to use a u64 argument (which is OS-agnostic) instead of dma_addr_t, which is Linux-specific, as the argument type. Change-Id: I74a694b08364b4d9e2826ffaf4620b113604d1cf Signed-off-by: Sourab Gupta <sourabg@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1567709 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
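Roughly, the prototype change looks like this; the surrounding parameters are paraphrased from memory and may not match the tree exactly:

    /* before: a Linux-only type leaks into common code */
    int nvgpu_dma_alloc_flags_vid_at(struct gk20a *g, unsigned long flags,
                                     size_t size, struct nvgpu_mem *mem,
                                     dma_addr_t at);

    /* after: plain u64, usable from any OS layer */
    int nvgpu_dma_alloc_flags_vid_at(struct gk20a *g, unsigned long flags,
                                     size_t size, struct nvgpu_mem *mem,
                                     u64 at);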
* gpu: nvgpu: include rmos headers in common headers | Sourab Gupta | 2017-09-26
| | | | | | | | | | | The patch includes rmos atomic.h and barrier.h headers in nvgpu common header files Change-Id: Ia20c9694de0753a2397ab75c98e661f3b155965a Signed-off-by: Sourab Gupta <sourabg@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1567697 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Change license for common files to MIT | Terje Bergstrom | 2017-09-26
| | | | | | | | | | | | Change license of OS independent source code files to MIT. JIRA NVGPU-218 Change-Id: I1474065f4b552112786974a16cdf076c5179540e Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1565880 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: PMU debug reorg | Mahantesh Kumbar | 2017-09-25
- Moved PMU debug related code to pmu_debug.c: PMU trace buffer printing, the PMU controller/engine status dump debug code, and the ELPG stats dump code - Removed the PMU falcon controller status dump code and used the nvgpu_flcn_dump_stats() method instead - Added a method to print ELPG stats - Added a PMU HAL to print PMU engine & ELPG debug info upon error NVGPU JIRA-96 Change-Id: Iaa3d983f1d3b78a1b051beb6c109d3da8f8c90bc Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1516640 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Falcon IMEM/DMEM dump support | Mahantesh Kumbar | 2017-09-25
- Added a falcon interface/HAL for IMEM-copy-from to read data from IMEM at a given location with the requested size - Added a falcon interface to print IMEM/DMEM data from a given location with the requested size using the falcon HAL. JIRA NVGPU-105 Change-Id: I84cf7b5769b84a2baee2c7e65027539598ec1295 Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1514536 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: falcon status dump support | Mahantesh Kumbar | 2017-09-25
- Added support to dump falcon controller status - Method to print recent PC history to recover the call trace - Method to dump IMBLK info - Updated falcon hw header files to include registers of PC trace & IMBLK JIRA NVGPU-105 Change-Id: Id4aaafd87113d47e552afb21b87f8b087d36004e Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1515371 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Change HW header licenses to MIT | Terje Bergstrom | 2017-09-22
| | | | | | | | | | | | Change the license of all HW headers to MIT license. JIRA NVGPU-218 Change-Id: I49a22502159384575ac9e0f901f10c4bdffbbf96 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1563849 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: allow bind to be interrupted | David Nieto | 2017-09-22
This change solves two problems: (*) the possibility of a crash due to interrupting the GPU initialization following a bind (*) an IOVA memory leak that could prevent the GPU from binding after about 200 bind/unbind cycles A detailed list of fixes: - check that the arbiter is initialized before freeing it. - do not re-enable interrupts when MSI is enabled on unbind. - free the semaphore sea on unbind. - ensure we don't double-load the VBIOS. - check the return value of nvgpu_mutex_init for semaphores. - add corresponding nvgpu_mutex_destroy calls. bug 1816516 Change-Id: Ia8af73019e0e1183998855d55bb3eea09672a8b7 Signed-off-by: David Nieto <dmartineznie@nvidia.com> Reviewed-on: http://git-master/r/1465302 Reviewed-by: Alex Waterman <alexw@nvidia.com> Reviewed-by: Thomas Fleury <tfleury@nvidia.com> Reviewed-by: David Jarrett <djarrett@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1563019 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: SGL passthrough implementation | Sunny He | 2017-09-22
The basic nvgpu_mem_sgl implementation provides support for OS-specific scatter-gather list implementations by simply copying them node by node. This is inefficient, taking extra time and memory. This patch implements an nvgpu_mem_sgt struct to act as a header which is inserted at the front of any scatter-gather list implementation. This labels every struct with a set of ops which can be used to interact with the attached scatter-gather list. Since nvgpu common code only has to interact with these function pointers, any sgl implementation can be used. Initialization only requires the allocation of a single struct, removing the need to copy or iterate through the sgl being converted. Jira NVGPU-186 Change-Id: I2994f804a4a4cc141b702e987e9081d8560ba2e8 Signed-off-by: Sunny He <suhe@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1541426 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
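Conceptually the header is tiny; the sketch below uses member names that are assumptions based on the description above:

    struct nvgpu_sgt {
            /* Ops for walking and querying the attached list; common code
             * only ever calls through these. */
            const struct nvgpu_sgt_ops *ops;

            /* The OS-specific scatter-gather list itself (for example a
             * Linux struct scatterlist), attached without being copied. */
            void *sgl;
    };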
* gpu: nvgpu: nvgpu SGL implementation | Alex Waterman | 2017-09-22
The last major item preventing the core MM code in the nvgpu driver from being platform agnostic is the usage of Linux scatter-gather tables and scatter-gather lists. These data structures are used throughout the mapping code to handle discontiguous DMA allocations and are also overloaded to represent VIDMEM allocs. The notion of a scatter-gather table is crucial to a HW device that can handle discontiguous DMA. The GPU has an MMU which allows the GPU to do page gathering and present a virtually contiguous buffer to the GPU HW. As a result it makes sense for the GPU driver to use some sort of scatter-gather concept to maximize memory usage efficiency. To that end this patch keeps the notion of a scatter-gather list but implements it in the nvgpu common code. It is based heavily on the Linux SGL concept. It is a singly linked list of blocks, each representing a chunk of memory. To map or use a DMA allocation, SW must iterate over each block in the SGL. This patch implements the most basic level of support for this data structure. There are certainly easy optimizations that could be done to speed up the current implementation. However, this patch's goal is to simply divest the core MM code from any last Linux'isms. Speed and efficiency come next. Change-Id: Icf44641db22d87fa1d003debbd9f71b605258e42 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1530867 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
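A minimal sketch of one such block, as described above (field names are assumptions):

    /* One node of the nvgpu-common scatter-gather list: a singly linked
     * list where each node describes one contiguous chunk of memory. */
    struct nvgpu_mem_sgl {
            struct nvgpu_mem_sgl *next;     /* NULL terminates the list  */
            u64 phys;                       /* physical address of chunk */
            u64 dma;                        /* IOVA, when DMA mapped     */
            u64 length;                     /* chunk size in bytes       */
    };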
* gpu: nvgpu: Change VBIOS code to use gp106 headers | Terje Bergstrom | 2017-09-20
| | | | | | | | | | | | | | VBIOS code was the last code using gm206 hardware headers. Change the code to use gp106 headers instead, move the code to gp106 directory and delete gm206 HW headers. JIRA NVGPU-218 Change-Id: I7ccd6c2975c767bca871d77a701dbd3395b17f30 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1563742 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Change license of gp10b headers | Terje Bergstrom | 2017-09-20
| | | | | | | | | | | | | | Change license of gp10b headers to MIT. hw_chiplet_pwr_gp10b.h is not generated by the tool, and it's not used, so remove that header. JIRA NVGPU-218 Change-Id: I5781838f6a73ae363cb86b7db9c47225dd6d5b97 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1563655 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Fix kmem debugging | Alex Waterman | 2017-09-20
Make the kmem debugging prints much more easily usable. Previously the prints would only take effect if full tracking (CONFIG_NVGPU_TRACK_MEM_USAGE) was enabled. However, there are many times when just the debug prints would be nice to have by simply setting the corresponding bit in the log mask: echo 0x80000 > /sys/kernel/debug/<gpu>/log_mask Also, this change now uses the real nvgpu_log() function instead of the legacy nvgpu_dbg() function. This makes the logging appear with proper GPU printing as well. Change-Id: If545da3d357d38fe8252e7d548c6765b995cd3d7 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1560248 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add DMA logging | Alex Waterman | 2017-09-20
| | | | | | | | | | | | | | Add logging prints for the DMA interface in nvgpu. These prints show size, aligned size, type of alloc (sysmem vs vidmem), and flags for the alloc. Bug 1956137 Change-Id: I3e15152959dbb256cb1679435a18ab6821f4cde3 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1559376 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add nvgpu_mem_is_valid() call | Alex Waterman | 2017-09-19
Add a function to check whether an nvgpu_mem is allocated (valid) or not. Also fix possibly leaked state in nvgpu_mems when they fail to allocate, and ensure that in the case of an allocation failure no state is accidentally leaked to the caller. This should hopefully make it less likely that a caller thinks a buffer that failed to allocate is actually allocated. Change-Id: I43224ece7da84e63b2f43f36f04941126fabf3c7 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1559419 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
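A plausible form of the check (the real patch may key off a different field):

    static inline bool nvgpu_mem_is_valid(struct nvgpu_mem *mem)
    {
            /* A mem that failed to allocate, or was never allocated, is
             * left in the invalid aperture, so this test is always safe. */
            return mem->aperture != APERTURE_INVALID;
    }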
* gpu: nvgpu: fix channel unbind sequence from TSG | Deepak Nibade | 2017-09-15
Right now we remove a channel from the TSG list and disable all the channels in the TSG while removing a channel from the TSG. With this sequence, if any one channel in the TSG is closed, the rest of the channels are set as timed out and cannot be used anymore. We need to fix this sequence as below to allow removing a channel from an active TSG so that the rest of the channels can still be used: - disable all channels of the TSG - preempt the TSG - check if CTX_RELOAD is set, if support is available; if CTX_RELOAD is set on the channel, it should be moved to some other channel - check if FAULTED is set, if support is available - if NEXT is set on the channel then it means the channel is still active; print out an error in this case for the time being until properly handled - remove the channel from the runlist - remove the channel from the TSG list - re-enable the rest of the channels in the TSG - clean up the channel (same as regular channels) Add the below fifo operations to support checking channel status: g->ops.fifo.tsg_verify_status_ctx_reload g->ops.fifo.tsg_verify_status_faulted Define the ops.fifo.tsg_verify_status_ctx_reload operation for gm20b/gp10b/gp106 as gm20b_fifo_tsg_verify_status_ctx_reload(). This API will check if the channel to be released has CTX_RELOAD set; if yes, CTX_RELOAD needs to be moved to some other channel in the TSG. Remove static from channel_gk20a_update_runlist() and export it. Bug 200327095 Change-Id: I0dd4be7c7e0b9b759389ec12c5a148a4b919d3e2 Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1560637 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com> Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: fix TSG enable sequence | Deepak Nibade | 2017-09-15
Due to a h/w bug in Maxwell and Pascal, we first need to enable all channels with NEXT and CTX_RELOAD set in a TSG, and then the rest of the channels should be enabled. Add this sequence to gk20a_tsg_enable(). Add new APIs to enable/disable scheduling of the TSG runlist: gk20a_fifo_enable_tsg_sched() gk20a_fifo_disble_tsg_sched() Add new APIs to check if a channel has NEXT or CTX_RELOAD set: gk20a_fifo_channel_status_is_next() gk20a_fifo_channel_status_is_ctx_reload() Bug 1739362 Change-Id: I4891cbd7f22ebc1e0bf32c52801002cdc259dbe1 Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1560636 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com> Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: PG503 PMU ucode support | Mahantesh Kumbar | 2017-09-12
| | | | | | | | | | | | | | | | | - Added PMU app version - Added method to init queue - P4 CL# 22754073 JIRA NVGPUGV100-7 Change-Id: I095ee5d0ad59693ee7d9eb3035f85f63f1b033d3 Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1549418 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com> Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: page align DMA allocs | Alex Waterman | 2017-09-12
Explicitly page align DMA memory allocations. Non-page-aligned DMA allocs can lead to nvgpu_mem structs that have a size that's not page aligned. Those allocs, in some cases, can cause vGPU maps to return an error. More generally, DMA allocs in Linux are always going to be page aligned in both size and address, so we might as well page align the alloc size to make code elsewhere in the driver simpler. To implement this an aligned_size field has been added to struct nvgpu_mem. This field has the real page-aligned size of the allocation. The original size is still saved in size. Change-Id: Ie08cfc4f39d5f97db84a54b8e314ad1fa53b72be Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1547902 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
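The alloc-path change amounts to something like this (illustrative only; the real function and its error handling are more involved):

    #include <linux/dma-mapping.h>
    #include <linux/mm.h>

    static int nvgpu_dma_alloc_sys_sketch(struct device *dev,
                                          struct nvgpu_mem *mem, size_t size)
    {
            dma_addr_t iova;

            mem->size = size;               /* size the caller asked for */
            size = PAGE_ALIGN(size);        /* size we actually allocate */

            mem->cpu_va = dma_alloc_coherent(dev, size, &iova, GFP_KERNEL);
            if (!mem->cpu_va)
                    return -ENOMEM;

            mem->aligned_size = size;       /* new field from this patch */
            return 0;
    }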
* gpu: nvgpu: Move XVE debugfs code to Linux module | Terje Bergstrom | 2017-09-12
| | | | | | | | | | | | Move XVE debugfs initialization code to live under common/linux. JIRA NVGPU-62 Change-Id: Ic6677511d249bc0a2455dde01db5b230afc70bb1 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1535133 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove mapping of comptags to user VA | Terje Bergstrom | 2017-09-07
| | | | | | | | | | | | | | Remove the kernel feature to map compbit backing store areas to GPU VA. The feature is not used, and the relevant user space code is getting removed, too. Change-Id: I94f8bb9145da872694fdc5d3eb3c1365b2f47945 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1547898 Reviewed-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-by: Sami Kiminki <skiminki@nvidia.com> Reviewed-by: Alex Waterman <alexw@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: Add pd_max_batches sysfs node for gp10b | Sandeep Shinde | 2017-09-07
| | | | | | | | | | | | | | | | | | | Add a new sysfs node pd_max_batches for setting max batches value in NV_PGRAPH_PRI_PD_AB_DIST_CONFIG_1_MAX_BATCHES register which controls max number of batches per alpha-beta transition stored in PD. Bug 1927124 Change-Id: I2817f2d70dab348d8b0b8ba19bf1e9b9d23ca907 Signed-off-by: Sandeep Shinde <sashinde@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1544104 Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com> (cherry picked from commit aa4daddda23aa44a84464200f497eac802a8e6ce) Reviewed-on: https://git-master.nvidia.com/r/1543355 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: NVGPU abstraction for ACCESS_ONCE | Debarshi Dutta | 2017-09-03
Construct a wrapper macro NV_ACCESS_ONCE(x) which uses OS-specific versions of ACCESS_ONCE, e.g. for Linux, ACCESS_ONCE(x) is used. Jira NVGPU-125 Change-Id: Ia5c67baae111c1a7978c530bf279715fc808287d Signed-off-by: Debarshi Dutta <ddutta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1549928 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
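For Linux the wrapper is expected to be a one-line passthrough; the #ifdef framing below is a sketch:

    #ifdef __KERNEL__
    #include <linux/compiler.h>
    #define NV_ACCESS_ONCE(x)       ACCESS_ONCE(x)
    #else
    /* other OSes plug in their own READ_ONCE-style primitive here */
    #endif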
* gpu: nvgpu: protect header inclusion in __kernel__ | Debarshi Dutta | 2017-09-03
The header <nvgpu/linux/atomic.h> in the file include/nvgpu/atomic needs to be included inside __KERNEL__. Change-Id: Iac65ec570cdf6ba3bdf0149e4648f9f64cd23b00 Signed-off-by: Debarshi Dutta <ddutta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1547537 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: include rmos headers in common headers | Sourab Gupta | 2017-08-31
| | | | | | | | | | | | | | | | | | | Currently nvgpu common header files include the Linux specific headers protected by __KERNEL__ check. The patch includes the rmos specific headers when the common headers (and common nvgpu files, by extension) are compiled under QNX, so that QNX specific data structures / type definitions can be retrieved. Change-Id: Icb03fdc90f6ef4cff8666a6fd0f0f06b5ef62c97 Signed-off-by: Sourab Gupta <sourabg@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1547658 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: cleanup allocator debugging | Alex Waterman | 2017-08-28
| | | | | | | | | | | | | | | | | Remove debugging features that did not really get used and make the debugging code use the nvgpu_log() functionality. This ties the allocator debugging into the larger nvgpu debug framework. Also modify many of the places CONFIG_DEBUG_FS was used to conditionally compile allocator debug code to use __KERNEL__ instead. This is because that debug code can still be called even when debugfs is not present in Linux. Change-Id: I112ebe1cae22d6f8db96d023993498093e18d74a Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1544439 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: remove unused tracing from allocators | Alex Waterman | 2017-08-28
| | | | | | | | | | | | | | | Remove an unused tracing feature from the allocator code. This should make multi-OS support slightly easier. Change-Id: I8b57f037b66227f2110e457bff7aa6b443d89e82 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1544438 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Konsta Holtta <kholtta@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Replace kref for refcounting in nvgpu | Debarshi Dutta | 2017-08-24
| | | | | | | | | | | | | | | | - added wrapper struct nvgpu_ref over nvgpu_atomic_t - added nvgpu_ref_* APIs to access the above struct JIRA NVGPU-140 Change-Id: Id47f897995dd4721751f7610b6d4d4fbfe4d6b9a Signed-off-by: Debarshi Dutta <ddutta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1540899 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
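A sketch of the wrapper and a typical init/get/put set built on the nvgpu atomics; the signatures here are assumptions:

    #include <nvgpu/atomic.h>

    struct nvgpu_ref {
            nvgpu_atomic_t refcount;
    };

    static inline void nvgpu_ref_init(struct nvgpu_ref *ref)
    {
            nvgpu_atomic_set(&ref->refcount, 1);
    }

    static inline void nvgpu_ref_get(struct nvgpu_ref *ref)
    {
            nvgpu_atomic_inc(&ref->refcount);
    }

    static inline void nvgpu_ref_put(struct nvgpu_ref *ref,
                                     void (*release)(struct nvgpu_ref *r))
    {
            /* Call the release function when the last reference drops. */
            if (nvgpu_atomic_dec_and_test(&ref->refcount) && release != NULL)
                    release(ref);
    }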
* gpu: nvgpu: Use nvgpu flags for run_preos | Terje Bergstrom | 2017-08-22
| | | | | | | | | | | | | | Accessing run_preos from gk20a_platform causes unnecessary Linux dependency, so copy the flag to abstract flags. Change-Id: I4818fb6735201f36e552c1ff45138a44a3d94db1 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1542836 Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Sourab Gupta <sourabg@nvidia.com> GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Nvgpu abstraction for linux barriers. | Debarshi Dutta | 2017-08-22
Construct wrapper nvgpu_* methods to replace mb, rmb, wmb, smp_mb, smp_rmb, smp_wmb, read_barrier_depends and smp_read_barrier_depends. NVGPU-122 Change-Id: I8d24dd70fef5cb0fadaacc15f3ab11531667a0df Signed-off-by: Debarshi <ddutta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1541199 Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Sourab Gupta <sourabg@nvidia.com> Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Move non-fp pmu members from gpu_ops | Sunny He | 2017-08-21
Move non-function pointer members out of the pmu and pmu_ver substructs of gpu_ops. Ideally gpu_ops will have only function pointers, better matching its intended purpose and improving readability. - g.ops.pmu_ver.cmd_id_zbc_table_update has been changed to g.pmu_ver_cmd_id_zbc_table_update - g.ops.pmu.lspmuwprinitdone has been changed to g.pmu_lsf_pmu_wpr_init_done - g.ops.pmu.lsfloadedfalconid has been changed to g.pmu_lsf_loaded_falcon_id Boolean flags have been implemented using the enabled.h API - g.ops.pmu_ver.is_pmu_zbc_save_supported moved to common flag NVGPU_PMU_ZBC_SAVE - g.ops.pmu.fecsbootstrapdone moved to common flag NVGPU_PMU_FECS_BOOTSTRAP_DONE Jira NVGPU-74 Change-Id: I08fb20f8f382277f2c579f06d561914c000ea6e0 Signed-off-by: Sunny He <suhe@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1530981 Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: protect header inclusion in __kernel__ | Debarshi Dutta | 2017-08-21
The header <linux/atomic.h> needs to be included inside __KERNEL__. JIRA-121 Change-Id: I978e7c12c6fff7d7ccea300d0be5784d45e559ed Signed-off-by: Debarshi <ddutta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1541220 Reviewed-by: Sourab Gupta <sourabg@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Add wrapper over atomic_t and atomic64_t | Debarshi Dutta | 2017-08-17
| | | | | | | | | | | | | | | | | - added wrapper structs nvgpu_atomic_t and nvgpu_atomic64_t over atomic_t and atomic64_t - added nvgpu_atomic_* and nvgpu_atomic64_* APIs to access the above wrappers. JIRA NVGPU-121 Change-Id: I61667bb0a84c2fc475365abb79bffb42b8b4786a Signed-off-by: Debarshi Dutta <ddutta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1533044 Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> GVS: Gerrit_Virtual_Submit
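On Linux the wrapper can simply contain the kernel type; the sketch below trims the accessor set and the member name is an assumption:

    #include <linux/atomic.h>

    typedef struct nvgpu_atomic {
            atomic_t atomic_var;
    } nvgpu_atomic_t;

    static inline void nvgpu_atomic_set(nvgpu_atomic_t *v, int i)
    {
            atomic_set(&v->atomic_var, i);
    }

    static inline int nvgpu_atomic_read(nvgpu_atomic_t *v)
    {
            return atomic_read(&v->atomic_var);
    }

    static inline void nvgpu_atomic_inc(nvgpu_atomic_t *v)
    {
            atomic_inc(&v->atomic_var);
    }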
* gpu: nvgpu: Add su_rd_coalesce register field | Alex Waterman | 2017-08-16
| | | | | | | | | | | | | | | Add the surface rd coalesce field in the register that controls read coalescing. Bug 200314091 Change-Id: I185ad7e6ef64ecae9369e26d22a7381611ddc693 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1518305 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Add struct gk20a ptr to FUSE APIs | Alex Waterman | 2017-08-14
| | | | | | | | | | | | | | | | | | | Add a pointer to struct gk20a to the FUSE APIs. This helps QNX builds avoid any static data definitions. Also this change plumbs struct gk20a in some of the Linux clk code and fixes a few minor style nits. Change-Id: I27dfb2c4e9a352f784d6cead150460d8e9e808d3 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1537611 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Richard Zhao <rizhao@nvidia.com> Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Sourab Gupta <sourabg@nvidia.com> Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Remove mm.get_iova_addr | Alex Waterman | 2017-08-04
| | | | | | | | | | | | | | | | | | | | | | Remove the mm.get_iova_addr() HAL and replace it with a new HAL called mm.gpu_phys_addr(). This new HAL provides the real phys address that should be passed to the GPU from a physical address obtained from a scatter list. It also provides a mechanism by which the HAL code can add extra bits to a GPU physical address based on the attributes passed in. This is necessary during GMMU page table programming. Also remove the flags argument from the various address functions. This flag was used for adding an IO coherence bit to the GPU physical address which is not supported. JIRA NVGPU-30 Change-Id: I69af5b1c6bd905c4077c26c098fac101c6b41a33 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1530864 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Make LTC disabling common code | Terje Bergstrom | 2017-08-04
| | | | | | | | | | | | | | | | | | Refactor the sync_debugfs LTC HAL op so that the logic to enable or disable LTC goes to common code nvgpu_ltc_sync_enabled() and the LTC HAL set_enabled only performs the hardware register access. Create a new common function nvgpu_init_ltc_support() to initialize the LTC software variable, and move hardware initialization of LTC to be called from it. JIRA NVGPU-62 Change-Id: Ib1cf4f5b83ca3dac08407464ed56a732e0a33923 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1528262 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Reorg gr_ctx HAL initialization | Sunny He | 2017-07-27
| | | | | | | | | | | | | | | | | Reorganize HAL initialization to remove inheritance and construct the gpu_ops struct at compile time. This patch only covers the gr_ctx sub-module of the gpu_ops struct. Perform HAL function assignments in hal_gxxxx.c through the population of a chip-specific copy of gpu_ops. Jira NVGPU-74 Change-Id: I783d8e8919d8694ad2aa0d285e4c5a2b62580f48 Signed-off-by: Sunny He <suhe@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1527417 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>