path: root/drivers/gpu/nvgpu/include/nvgpu
Commit message | Author | Age
* gpu-paging: Support asynchronous paging (Joshua Bakita, 2022-05-31)

- Fully enables *_ASYNC API
- Allows page mapping to be overlapped with I/O, resulting in an 11% speedup to synchronous reads

Benchmarks, 1,000 iters, before:
gpu_paging_speed, write: 185.5ms +/- 3.58
gpu_paging_speed, read: 180.5ms +/- 1.42
gpu_paging_overhead_speed, write start: 183.3ms +/- 3.89
gpu_paging_overhead_speed, write finish: 3.4ms +/- 2.61
gpu_paging_overhead_speed, read start: 181.6ms +/- 3.34
gpu_paging_overhead_speed, read finish: 41.1ms +/- 2.69

Benchmarks, 1,000 iters, after:
gpu_paging_speed, write: 185.8ms +/- 3.70
gpu_paging_speed, read: 161.3ms +/- 0.97
gpu_paging_overhead_speed, write start: 38.9ms +/- 5.47
gpu_paging_overhead_speed, write finish: 3.1ms +/- 2.42
gpu_paging_overhead_speed, read start: 79.4ms +/- 6.42
gpu_paging_overhead_speed, read finish: 44.3ms +/- 1.53

* gpu-paging: Initial working implementation (Joshua Bakita, 2022-05-24)

Supports synchronous page out or in of a specific buffer. Includes fast reverse struct mapped_buf lookup.

Requires an initial set of changes to nvmap as well.

* gpu: nvgpu: add support for disabling l3 via DT (Debarshi Dutta, 2022-02-02)

On Volta the GPU determines whether to do L3 allocation for a mapping by checking bit 36 of the physical address, so if a mapping should allocate lines in the L3 this bit must be set. However, on SKUs with 64GB of RAM the physical addresses themselves use bit 36, resulting in a conflict. Thus, add support for disabling L3 allocation, via a device-tree property, for SKUs having 64GB of physical memory.

Bug 3486025

Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: Ic540e754274cf1d9e6625493962699d21509e540
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2661548
Reviewed-by: Brad Griffis <bgriffis@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Tested-by: Brad Griffis <bgriffis@nvidia.com>
GVS: Gerrit_Virtual_Submit

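Since the commit ties the conflict to bit 36, here is a quick standalone illustration of the arithmetic; the DRAM base address below is an assumption for the example, not taken from the commit, and the helper name is invented.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Bit 36 of the physical address doubles as the "allocate in L3" hint on Volta. */
#define L3_ALLOC_BIT (1ULL << 36)   /* the 64 GB boundary */

/* Hypothetical helper: can the L3 hint be encoded without colliding with
 * real DRAM addresses? Assumes a flat physical map starting at dram_base. */
static bool l3_hint_is_safe(uint64_t dram_base, uint64_t dram_size)
{
    /* Once DRAM extends past the 64 GB mark, legitimate addresses already
     * carry bit 36, so the hint becomes ambiguous and L3 allocation has to
     * be disabled (which this commit makes possible via a DT property). */
    return (dram_base + dram_size) <= L3_ALLOC_BIT;
}

int main(void)
{
    uint64_t base = 0x80000000ULL; /* assumed DRAM base, illustration only */

    printf("32 GB SKU: L3 hint safe? %d\n", l3_hint_is_safe(base, 32ULL << 30));
    printf("64 GB SKU: L3 hint safe? %d\n", l3_hint_is_safe(base, 64ULL << 30));
    return 0;
}
```
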
* nvgpu: gpu: adds support for ACR dbg/prod. (smadhavan, 2021-10-11)

ACR ucode is encrypted using different keys for prod/dbg boards. This change adds a check to select ACR ucode based on board type. ACR ucode binaries are also renamed with "nv_" prefix to conform to release naming conventions.

Bug 2672836

Change-Id: I48818f018f903c0d03642c12485d60e392121eb6
Signed-off-by: smadhavan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2492587
(cherry picked from commit 5dacead521aaee1bd8a3b2e9db3e281c085038f7)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2597878
Reviewed-by: Mayur Poojary <mpoojary@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: Mayur Poojary <mpoojary@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit

* nvgpu: gpu: adds support for ACR dbg/prod. (smadhavan, 2021-10-05)

ACR ucode is encrypted using different keys for prod/dbg boards. This change adds a check to select ACR ucode based on board type.

Note: This support is added only for t19x.

Bug 2350733
Bug 2672832
Bug 2672836
Bug 2674821

JIRA NVGPU-4001

(cherry picked from commit c19a0f0c26ab94f6bbf4380ab93e458b88589c82)
Change-Id: I2febc2cbe869c06bca0adebd7723b0d6fc1d4b23
Signed-off-by: smadhavan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2483968
Tested-by: Amulya Yarlagadda <ayarlagadda@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: Amulya Yarlagadda <ayarlagadda@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: Add ECC Support for GV11B in Linux (Debarshi Dutta, 2021-05-28)

Implement nvgpu plumbing to allow reporting ECC errors (corrected and uncorrected) to a L1SS service (if one exists).

This patch includes the following:
1) Added code that submits ECC error reports via the interrupt context directly to a L1SS service in Linux OS.
2) Added support for enabling/disabling the error reports via L1SS's registration/deregistration API. Nvgpu simply invokes an empty function until the registration is successful.
3) Added a spinlock to correctly handle concurrency for accessing the correct ops for submitting requests.
4) Adds error reporting for a subset of interrupts that can be verified via external ECC injection logic. A subsequent patch will add the API for the rest of the interrupts.
5) In case of critical (uncorrected) errors, change nvgpu's state to quiesce state.

Jira L4T-1187
Bug 200700400

Change-Id: Id31f70531fba355e94e72c4f9762593e7667a11c
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2530411
Tested-by: Bibek Basu <bbasu@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: wait for stalling interrupts to complete during TSG unbind preempt (Sagar Kamble, 2021-05-04)

Some of the engine stalling interrupts can block the context save off the engine if not handled during fifo.preempt_tsg. They need to be handled while polling for engine ctxsw status.

Bug 200711183
Bug 200726848

Change-Id: Ie45d76d9d1d8be3ffb842670843507f2d9aea6d0
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2521971
(cherry picked from commit I7418a9e0354013b81fbefd8c0cab5068404fc44e)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2523938
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: create timed wait functions for stall and nonstall interrupts completion (Sagar Kamble, 2021-05-04)

In order to process stalling interrupts during TSG unbind, we need an API to wait for the stalling interrupts to complete within a certain duration. Prepare these APIs for stalling and non-stalling interrupts.

Bug 200711183
Bug 200726848

Change-Id: I634738249ade64224326b356d6244ad4299f1baf
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2521970
(cherry picked from commit I0b7a64c0f3761bbd0ca0843aea28a591ed23739f)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2523937
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: retry tsg unbind if NEXT is set (Sagar Kamble, 2021-03-19)

The NEXT bit can remain set for the channel if the timeslice expires before the scheduler clears it. Due to this, nvgpu fails the TSG unbind and in turn nvrm_gpu fails the channel close. In this case, checking the channel HW state again after some time gives the scheduler a chance to clear the NEXT bit. Re-enable the TSG and return -EAGAIN to nvrm_gpu so that it can retry the unbind.

Bug 3144960
Bug 200520811

Change-Id: I35f417f02270e371a4e632986b73a00f8a4f921a
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2468391
(cherry picked from commit cf287a4ef592e7329f813c076ec8bdad18dc5933)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2479106
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit

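A minimal sketch of the retry contract described above, written as a user-space model with invented names; the real driver reads the channel HW state rather than a flag in a struct.

```c
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Simplified model of the unbind retry decision; illustrative only. */
struct hw_state {
    bool next_set;   /* scheduler has not yet cleared the NEXT bit */
};

static int try_unbind_channel(const struct hw_state *hw)
{
    if (hw->next_set) {
        /* Timeslice expired before the scheduler cleared NEXT:
         * re-enable the TSG and ask the caller (nvrm_gpu) to retry. */
        return -EAGAIN;
    }
    /* ...context is idle, safe to unbind... */
    return 0;
}

int main(void)
{
    struct hw_state busy = { .next_set = true };
    struct hw_state idle = { .next_set = false };

    printf("busy: %d (expect %d)\n", try_unbind_channel(&busy), -EAGAIN);
    printf("idle: %d\n", try_unbind_channel(&idle));
    return 0;
}
```
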
* gpu: nvgpu: add support for ACB SLCG on gv11b (prsethi, 2020-10-27)

The register list for ACB SLCG is auto generated with scripts. Add HAL operations to enable/disable ACB clock gating.

Cherry-pick/manually port from dev-main.

Bug 200647909

Change-Id: I4be4c14cc072fcccd91031a5a40321f5ff11f549
Signed-off-by: Prateek sethi <prsethi@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2420355
(cherry picked from commit c7c04d3a28c2eb0edc8e015dd0130fa50d3496c7)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2434464
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Reviewed-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-by: Phoenix Jung <pjung@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: limit PD cache to < pgsize for linux (Peter Daifuku, 2020-10-06)

For Linux, limit the use of the cache to entries less than the page size, to avoid potential problems with running out of CMA memory when allocating large, contiguous slabs, as would be required for non-iommuable chips.

Also, in nvgpu_pd_cache_do_free(), zero out entries only if the iommu is in use and PTE entries use the cache (since it's the prefetch of invalid PTEs by the iommu that needs to be avoided).

Bug 3093183
Bug 3100907

Change-Id: I363031db32e11bc705810a7e87fc9e9ac1dc00bd
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2422039
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Dinesh T <dt@nvidia.com>
Reviewed-by: Satish Arora <satisha@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* nvgpu: add PD cache support for page-sized PTEs (Peter Daifuku, 2020-09-15)

Large buffers being mapped to GMMU end up needing many pages for the PTE tables. Allocating these pages one by one can end up being a performance bottleneck, particularly in the virtualized case.

Add support for page-sized PTEs to the existing PD cache:
- define NVGPU_PD_CACHE_SIZE, the allocation size for a new slab for the PD cache, effectively set to 64K bytes
- Use the PD cache for any allocation < NVGPU_PD_CACHE_SIZE
- When freeing up cached entries, avoid prefetch errors by invalidating the entry (memset to 0)

Bug 3093183
Bug 3100907

Change-Id: I2302a1dfeb056b9461159121bbae1be70524a357
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2401783
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Satish Arora <satisha@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

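The sizing rule above (together with the scrub-on-free behaviour from the follow-up commit) is easy to model. Below is a small user-space sketch; NVGPU_PD_CACHE_SIZE mirrors the 64K value named in the commit, while the allocator and field names are invented for illustration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative model of the PD-cache sizing rule; not the driver's code. */
#define NVGPU_PD_CACHE_SIZE (64u * 1024u)   /* slab size from the commit */

struct pd_alloc {
    void   *mem;
    size_t  size;
    int     from_cache;   /* 1 if served from a cached slab */
};

static struct pd_alloc pd_alloc(size_t bytes)
{
    struct pd_alloc a = { .size = bytes };

    /* Any page-directory/PTE table smaller than one slab comes from the
     * cache instead of a fresh (possibly CMA) allocation. */
    a.from_cache = bytes < NVGPU_PD_CACHE_SIZE;
    a.mem = malloc(bytes);   /* stand-in for the real slab allocator */
    return a;
}

static void pd_free(struct pd_alloc *a, int iommu_in_use)
{
    /* When a cached entry is recycled and an IOMMU is active, scrub it so
     * the IOMMU cannot prefetch stale (invalid) PTEs out of the slab. */
    if (a->from_cache && iommu_in_use)
        memset(a->mem, 0, a->size);
    free(a->mem);
}

int main(void)
{
    struct pd_alloc a = pd_alloc(4096);

    printf("4 KiB PTE table from cache? %d\n", a.from_cache);
    pd_free(&a, 1);
    return 0;
}
```
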
* gpu: nvgpu: Discard coherency check on gmmu (Debarshi Dutta, 2020-08-07)

With MSS NVLINK set to force snoop, checking the coherency flag in the GMMU attributes and setting the PTE aperture to coherent type based on that check is no longer relevant. The coherent variable is removed from the nvgpu_gmmu_attrs struct.

Bug 200473147
Bug 3057980

Change-Id: Idf76cac901ef7c70faa2c4f7f11a046d94b9466a
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2013212
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry-picked from 4e1769097526e5203f7c18a663ab3c29f5568ae5 in rel-32)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2387272
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: Aayush Rajoria <arajoria@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: remove channel cycle stats ioctls (Thomas Fleury, 2020-07-27)

Cycle stats and cycle stats snapshot ioctls have been moved to the debug node. Removing the channel ioctls.

Bug 2660206
Bug 220464613

Change-Id: I3aecdf4a8310eeb38de2de5ac076048891afe436
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030992
(cherry picked from commit f20424ea6a7c6fcf977630e3e95d9e78418f13b8)
Signed-off-by: Gagan Grover <ggrover@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2092020
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Phoenix Jung <pjung@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Peter Daifuku <pdaifuku@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: wait ACK for FECS watchdog timeout (Deepak Nibade, 2020-07-14)

On Volta, nvgpu needs to wait for an explicit ACK from CTXSW while setting the FECS watchdog timeout.

This is a manual port of fixes 4d7e5026e38528b88a4a168eca9a8b180475b368 and ad89436b03428a42e43042b6a849c15843fdebc4 from dev-main, since a clean cherry-pick is not possible due to large file and structure differences.

Bug 200603566

Change-Id: Icba69998ab45eee5fdf2a29e1ac1067589301be6
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2371708
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Updated with generator headers (David Ung, 2020-05-22)

Add pmu_idle_mask_1, pmu_idle_mask_2 and pmu_idle_mask_2_supp.

Bug 2833620

Change-Id: Icceb99b48a227d32653fd9fbc3da9e27065e9fe2
Signed-off-by: David Ung <davidu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2334219
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Regenerated headers (David Ung, 2020-05-22)

Regenerating headers from the register generator.

Bug 2833620

Change-Id: Idc8e922bb611ed5acae66b6ca38db4bb9c8a1904
Signed-off-by: David Ung <davidu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2335263
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: handle ioctl l2_fb_ops better (ddutta, 2020-03-19)

Background: There is a race that occurs when the l2_fb_ops ioctl is invoked. The race occurs as part of the flush() call while a gk20_idle() is in progress.

This patch handles the race by making changes in the l2_fb_ops ioctl itself. For cases where pm_runtime is disabled or railgate is disabled, we allow this ioctl call to always go ahead, as power is assumed to be always on. For the other case, we first check the status of g->power_on. In the driver, g->power_on is set to true once unrailgate is completed and is set to false just before calling railgate. For Linux, the driver invokes gk20a_idle(), but there is a delay after which the call to rpm_suspend()'s callback gets triggered. This leads to a scenario where we cannot efficiently rely on the runtime PM APIs to block an imminent suspend or to exit if the suspend is currently in progress. Previous attempts at solving this have led to ineffective solutions and made the code much more complicated to maintain.

With regards to the above, this patch attempts to simplify the way this can be solved. The patch calls gk20a_busy() when g->power_on = true. This prevents the race with gk20a_idle(). Based on the rpm_resume and rpm_suspend upstream code, resume is prioritized over a suspend unless a suspend is already in progress, i.e. the delay period has been served and the suspend invokes the callback. There is a very small window for this to happen, and the ioctl can then power up the device as evident from the gk20a_busy() calls.

A new function gk20a_check_poweron() is added. This function protects the access to g->power_on via a mutex. By preventing a read from happening simultaneously with a write on g->power_on, the likelihood of a runtime_suspend triggering before a runtime_resume is further reduced.

Bug 200507468

Change-Id: I5c02dfa8ea855732e59b759d167152cf45a1131f
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2299545
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>

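A simplified user-space model of the decision flow described above; gk20a_busy() and gk20a_check_poweron() are only mimicked here, not reproduced, and the struct layout is an assumption for the sketch.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative model of the flush-path decision, not driver code. */
struct gpu {
    pthread_mutex_t power_lock;
    bool power_on;            /* true between unrailgate and railgate */
    bool railgate_disabled;   /* or runtime PM disabled: always powered */
};

static bool check_poweron(struct gpu *g)
{
    bool on;

    /* Serialize the read against the writer that flips power_on around
     * railgate/unrailgate, as the new helper in this patch does. */
    pthread_mutex_lock(&g->power_lock);
    on = g->power_on;
    pthread_mutex_unlock(&g->power_lock);
    return on;
}

static int l2_fb_flush_ioctl(struct gpu *g)
{
    if (g->railgate_disabled) {
        printf("flush (power assumed always on)\n");
        return 0;
    }
    if (!check_poweron(g)) {
        /* GPU is (about to be) railgated; caches are flushed on resume
         * anyway, so skip waking the GPU just for this. */
        return 0;
    }
    /* Real driver: gk20a_busy() here to hold off an imminent suspend,
     * flush L2/FB, then gk20a_idle(). */
    printf("flush (took busy reference)\n");
    return 0;
}

int main(void)
{
    struct gpu g = { PTHREAD_MUTEX_INITIALIZER, true, false };

    return l2_fb_flush_ioctl(&g);
}
```
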
* gpu: nvgpu: remove blcg_enable/disable (ddutta, 2020-02-28)

blcg is always enabled by default and there is no need for disabling this during gr init or gr reset.

Bug 2866010

Change-Id: Iaf17b7fdf05ad04fe435e1a1fda758deedc6484c
Signed-off-by: ddutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2303114
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: move cg_enable after pmu_init is complete (Debarshi Dutta, 2020-02-19)

This patch helps resolve the boot-time failures happening with pmu_exterr on porg. cg_enable can race with the pmu_init thread; cg_enable is moved to after the pmu init thread completes to avoid this race.

Bug 200565050

Change-Id: I2192053eff8767847ea012ca20b3607d2f6cd26f
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2239959
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: vgpu: add mmu_debug_mode support (Aparna Das, 2020-02-05)

Added two new IVC commands that set gr and fb mmu debug mode.

Bug 2586624

Change-Id: I358fb04713a9754fb209c0a90d02130dd4a1caf6
Reviewed-on: https://git-master.nvidia.com/r/2204980
(cherry picked from commit db4e5b09891aff075dfffb7cc2fe0630a71ab9a6)
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2288347
Reviewed-by: Kajetan Dutka <kdutka@nvidia.com>
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Kajetan Dutka <kdutka@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: use refcnt for ch mmu_debug_mode (Thomas Fleury, 2020-01-30)

Replaced ch->mmu_debug_mode_enabled with ch->mmu_debug_mode_refcnt. If the channel is enabled multiple times by userspace, then the ref count is updated accordingly. There is an expectation that enable/disable calls are balanced for setting the channel's mmu debug mode.

When unbinding the channel, decrease the refcnt for the channel until it reaches 0.

Also, removed the tsg parameter from nvgpu_tsg_set_mmu_debug_mode as it can be retrieved from ch.

Bug 2515097
Bug 2713590

Change-Id: If334e374a55bd14ae219edbfd3b1fce5ff25c226
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2184702
(cherry picked from commit f422aee39387a5aa337de69cc21a67f16697ae0e)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2208772
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Kajetan Dutka <kdutka@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Winnie Hsu <whsu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: Kajetan Dutka <kdutka@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit

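A toy model of the balanced enable/disable refcount described above; field and function names are illustrative, not the driver's.

```c
#include <stdio.h>

/* Minimal model of a per-channel MMU-debug-mode refcount. */
struct channel {
    int mmu_debug_mode_refcnt;
};

static void ch_set_mmu_debug_mode(struct channel *ch, int enable)
{
    if (enable) {
        ch->mmu_debug_mode_refcnt++;
    } else if (ch->mmu_debug_mode_refcnt > 0) {
        ch->mmu_debug_mode_refcnt--;
    }
    /* Hardware debug mode stays on while any reference remains. */
}

static void ch_unbind(struct channel *ch)
{
    /* On unbind, drop whatever references userspace still holds. */
    while (ch->mmu_debug_mode_refcnt > 0)
        ch_set_mmu_debug_mode(ch, 0);
}

int main(void)
{
    struct channel ch = { 0 };

    ch_set_mmu_debug_mode(&ch, 1);
    ch_set_mmu_debug_mode(&ch, 1);   /* enable called twice by userspace */
    ch_unbind(&ch);
    printf("refcnt after unbind: %d\n", ch.mmu_debug_mode_refcnt);
    return 0;
}
```
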
* gpu: nvgpu: set FB/HSMMU debug mode (Thomas Fleury, 2020-01-30)

Set NV_PFB_HSMMU_PRI_MMU_DEBUG_CTRL and NV_PFB_PRI_MMU_DEBUG_CTRL, in addition to NV_PGRAPH_PRI_GPCS_MMU_DEBUG_CTRL, in NVGPU_DBG_GPU_IOCTL_SET_CTX_MMU_DEBUG_MODE.

Bug 2515097
Bug 2713590

Change-Id: I1763b43e79fac3edb68a35980683d58bfa89519f
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2115785
(cherry picked from commit 8057514a9f7fc5f175e2e0571dfa91d78ebb6410)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2208771
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Kajetan Dutka <kdutka@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Winnie Hsu <whsu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Kajetan Dutka <kdutka@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: accessors for FB/HSMMU MMU_DEBUG_CTRL (Thomas Fleury, 2020-01-30)

Add accessors for NV_PFB_HSMMU_PRI_MMU_DEBUG_CTRL and NV_PFB_PRI_MMU_DEBUG_CTRL.

Bug 2515097
Bug 2713590

Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Change-Id: Ieadee041854bc9a17721b5b17938a106e205517f
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2208770
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Kajetan Dutka <kdutka@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Kajetan Dutka <kdutka@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add refcounting for MMU debug mode (Thomas Fleury, 2020-01-30)

GPC MMU debug mode should be set if at least one channel in the TSG has requested it. Add refcounting for MMU debug mode, to make sure debug mode is disabled only when no channel in the TSG is using it.

Bug 2515097
Bug 2713590

Change-Id: Ic5530f93523a9ec2cd3bfebc97adf7b7000531e0
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2123017
(cherry picked from commit a1248d87fe6e20aab3e5f2e0764f9fe8d80d0552)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2208769
Reviewed-by: Kajetan Dutka <kdutka@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Winnie Hsu <whsu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: Kajetan Dutka <kdutka@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: add SET_CTX_MMU_DEBUG_MODE ioctl (Thomas Fleury, 2020-01-30)

Added the NVGPU_DBG_GPU_IOCTL_SET_CTX_MMU_DEBUG_MODE ioctl to set MMU debug mode for a given context.

Added the gr.set_mmu_debug_mode HAL to change NV_PGPC_PRI_MMU_DEBUG_CTRL for a given channel. The HAL implementation for the native case is gm20b_gr_set_mmu_debug_mode. It internally uses regops, which directly writes to the register if the context is resident, or writes to the gr context otherwise.

Added NVGPU_SUPPORT_SET_CTX_MMU_DEBUG_MODE to enable the feature. NV_PGPC_PRI_MMU_DEBUG_CTRL has to be context switched in FECS ucode, so the feature is only enabled on TU104 for now.

Bug 2515097
Bug 2713590

Change-Id: Ib4efaf06fc47a8539b4474f94c68c20ce225263f
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2110720
(cherry-picked from commit af2ccb811d3de06f052b1dee39bd9ffa863ac8ce)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2208767
Reviewed-by: Kajetan Dutka <kdutka@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Winnie Hsu <whsu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Kajetan Dutka <kdutka@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add struct nvgpu_sched_ctrl to gk20a (Peter Daifuku, 2020-01-22)

Add struct nvgpu_sched_ctrl to struct gk20a.
Delete struct gk20a_sched_ctrl from struct nvgpu_os_linux.
Update the sched_ctrl functions to use the nvgpu_sched_ctrl struct.

Bug 200576520

Change-Id: I35b13219e5ef0a8a03333dfd7d46e1d308aec541
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2279152
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-by: Satish Arora <satisha@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: vgpu: add platform atomic support (sh, 2020-01-08)

Set the platform atomic attribute flag.

Bug 200580236

Change-Id: I06fd0cf363886922ad5145837004d04e35383470
Signed-off-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2016078
(cherry picked from commit ef5aac37d9a4531fecd1f1dae0581a2bd28f164d)
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2274970
GVS: Gerrit_Virtual_Submit
Tested-by: Sreeniketh H <sh@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: enable platform atomic feature (Vinod G, 2020-01-08)

Support the following changes related to the platform atomic feature:
NV_PFB_PRI_MMU_CTRL_ATOMIC_CAPABILITY_MODE to RMW MODE
NV_PFB_PRI_MMU_CTRL_ATOMIC_CAPABILITY_SYS_NCOH_MODE to L2
NV_PFB_HSHUB_NUM_ACTIVE_LTCS_HUB_SYS_ATOMIC_MODE to USE_RMW
NV_PFB_FBHUB_NUM_ACTIVE_LTCS_HUB_SYS_ATOMIC_MODE to USE_RMW
NV_PFB_FBHUB_NUM_ACTIVE_LTCS_HUB_SYS_NCOH_ATOMIC_MODE to USE_READ

In gv11b, the FBHUB_NUM_ACTIVE_LTCS register has read-only privilege, so the atomic mode register bits cannot be updated from kernel code. The atomic capability and atomic_sys_ncoh_mode bits are copied from the fb mmu_ctrl to the gpcs_mmu_ctrl register.

New tu104 HAL for the fb_enable_nvlink function.

Bug 200580236

Change-Id: Ia78986c1c56795c6efad20f4ba42700ef1c2c1ad
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2013481
(cherry picked from commit 251e3eaa8029c4ae07b2cde7af5d9775e1cd8ec1)
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2274932
GVS: Gerrit_Virtual_Submit
Tested-by: Sreeniketh H <sh@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add platform atomic support (Vinod G, 2020-01-08)

Add a new variable in nvgpu_as_map_buffer_ex_args for the app to specify platform atomic support for the page. When the platform atomic attribute flag is set, the pte memory aperture is set to coherent type.

Renamed the nvgpu_aperture_mask_coh function to nvgpu_aperture_mask_raw.

Bug 200580236

Change-Id: I18266724dafdc8dfd96a0711f23cf08e23682afc
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2012679
(cherry picked from commit 9e0a9004b71f92b7713fd3b30141b0d9d4cfa2c6)
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2274914
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Sreeniketh H <sh@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: re-enable elpg after golden img init (Peter Daifuku, 2019-11-29)

Typically, the PMU init thread will finish up long before the golden context image has been initialized, which means that ELPG hasn't truly been enabled at that point.

Create a new function, nvgpu_pmu_reenable_pg(), which checks if elpg had been enabled (non-zero refcnt), and if so, disables then re-enables it.

Call this function from gk20a_alloc_obj_ctx() after the golden context image has been initialized, to ensure that elpg is truly enabled.

Manually ported from dev-main.

Bug 200543218

Change-Id: I0e7c4f64434c5e356829581950edce61cc88882a
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2245768
(cherry picked from commit 077b6712b5a40340ece818416002ac8431dc4138)
Reviewed-on: https://git-master.nvidia.com/r/2250091
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

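The re-enable step can be sketched in a few lines. This is a user-space model with invented names, not the driver's code; it only shows the "toggle if already requested" idea.

```c
#include <stdio.h>

/* Toy model: if ELPG was requested before the golden context image existed
 * (refcount > 0), toggle it off and on so the enable actually takes effect. */
struct pmu {
    int elpg_refcnt;    /* how many callers asked for ELPG */
    int elpg_engaged;   /* whether ELPG is really active in HW */
};

static void elpg_disable(struct pmu *p) { p->elpg_engaged = 0; }
static void elpg_enable(struct pmu *p)  { p->elpg_engaged = 1; }

static void pmu_reenable_pg(struct pmu *p)
{
    if (p->elpg_refcnt > 0) {   /* ELPG already requested... */
        elpg_disable(p);
        elpg_enable(p);         /* ...re-enable now that the golden image exists */
    }
}

int main(void)
{
    struct pmu p = { .elpg_refcnt = 1, .elpg_engaged = 0 };

    /* Called from the alloc-obj-ctx path after golden context init. */
    pmu_reenable_pg(&p);
    printf("elpg engaged: %d\n", p.elpg_engaged);
    return 0;
}
```
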
* gpu: nvgpu: set CE prod values (Deepak Nibade, 2019-11-04)

Add the g->ops.ce.init_prod_values() HAL for gv11b to initialize PROD values of the CE unit.

Bug 2526212

Cherry-pick/manual port from dev-main.

Change-Id: I8e516b292622e09c537feb7830392648116baa7c
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2150874
(cherry picked from commit 0e6a305c6af3ea6d9a0cad7b4071f68028a1aebe)
Reviewed-on: https://git-master.nvidia.com/r/2224709
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-by: Luis Dib <ldib@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add clock gating support for HSHUB (Deepak Nibade, 2019-11-04)

Add BLCG and SLCG clock gating support for the HSHUB unit on gv11b. The register list for BLCG and SLCG is auto generated with scripts. Add HAL operations to enable/disable HSHUB clock gating.

Re-generate the gv11b reglist so that all the manually commented registers are automatically deleted. Some of the unicast registers are also deleted; we already have the corresponding broadcast registers present.

Cherry-pick/manually port from dev-main.

Bug 2526212

Change-Id: I2654f158daa802bcf992e103ed4a44675aa5fd4d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2150199
(cherry picked from commit e34b6f76d38ad5641c1ed7c3a4b36752d9dd4750)
Reviewed-on: https://git-master.nvidia.com/r/2224708
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-by: Luis Dib <ldib@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: fix NVGPU_COND_WAIT_INTERRUPTIBLE (Thomas Fleury, 2019-10-11)

When called with timeout=0, the NVGPU_COND_WAIT_INTERRUPTIBLE macro ignores the return code from wait_event_interruptible. As a result we do not detect when the call is interrupted, and the calling process hangs. Use the wait_event_interruptible return code in the case of an infinite timeout.

Bug 200384829
Bug 200543218

Change-Id: I930f0d08c73a3b91ab20a6c8faaf633a3d7aee4d
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1982242
(cherry picked from commit 78c513790ac64605cea673c26e6d0d71c3d8db0a)
Reviewed-on: https://git-master.nvidia.com/r/2215159
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Tested-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-by: Satish Arora <satisha@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

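The control-flow fix can be illustrated with a small standalone model: wait_event_interruptible is stubbed out, and ERESTARTSYS is defined locally because it is a kernel-internal errno. This is a sketch of the idea, not the real macro.

```c
#include <stdio.h>

#ifndef ERESTARTSYS
#define ERESTARTSYS 512   /* kernel-internal value, defined here for the model */
#endif

/* Stand-in for wait_event_interruptible(): 0 when the condition became true,
 * -ERESTARTSYS when a signal interrupted the sleep. */
static int fake_wait_interruptible(int interrupted)
{
    return interrupted ? -ERESTARTSYS : 0;
}

/* Sketch of the fixed control flow. */
static int cond_wait_interruptible(unsigned int timeout_ms, int interrupted)
{
    if (timeout_ms != 0) {
        /* Finite wait: the _timeout variant of the wait already yields a
         * usable return code (woken vs. expired); not modelled here. */
        return 0;
    }
    /* Infinite wait: the bug was discarding this return value, which made
     * an interrupted caller appear to still be waiting forever. */
    return fake_wait_interruptible(interrupted);
}

int main(void)
{
    printf("signal during infinite wait -> %d (expect %d)\n",
           cond_wait_interruptible(0, 1), -ERESTARTSYS);
    return 0;
}
```
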
* gpu: nvgpu: Fix PMU destroy sequence (Abhiroop Kaginalkar, 2019-09-23)

A call to exit the PMU state machine/kthread must be prioritized over any other state change. It was possible to set the state as PMU_STATE_EXIT, signal the kthread and overwrite the state before the kthread has had the chance to exit its loop. This may lead to a "lost" signal, resulting in indefinite wait during the destroy sequence.

Faulting sequence:
1. pmu_state = PMU_STATE_EXIT in nvgpu_pmu_destroy()
2. cond_signal()
3. pmu_state = PMU_STATE_LOADING_PG_BUF
4. PMU kthread wakes up
5. PMU kthread processes PMU_STATE_LOADING_PG_BUF
6. PMU kthread sleeps
7. nvgpu_pmu_destroy() waits indefinitely

This patch adds a sticky flag to indicate PMU_STATE_EXIT, irrespective of any subsequent changes to pmu_state.

The PMU PG init kthread may wait on a call to NVGPU_COND_WAIT_INTERRUPTIBLE, which requires a corresponding call to nvgpu_cond_signal_interruptible(), as the core kernel code requires this task mask to wake up an interruptible task.

Bug 2658750
Bug 200532122

Change-Id: I61beae80673486f83bf60c703a8af88b066a1c36
Signed-off-by: Abhiroop Kaginalkar <akaginalkar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2177112
(cherry picked from commit afa49fb073a324c49a820e142aaaf80e4656dcc6)
Reviewed-on: https://git-master.nvidia.com/r/2190733
Tested-by: Divya Singhatwaria <dsinghatwari@nvidia.com>
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

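The "sticky exit" idea maps directly onto a condition-variable worker. Below is a user-space model under assumed names: the separate flag, unlike the state field, is never overwritten, so the worker cannot miss the destroy request.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Minimal model of the sticky-exit fix; illustrative names only. */
enum pmu_state { PMU_STATE_OFF, PMU_STATE_LOADING_PG_BUF, PMU_STATE_EXIT };

struct pmu {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    enum pmu_state  state;
    bool            exit_requested;   /* sticky: never cleared once set */
};

static void pmu_request_exit(struct pmu *p)
{
    pthread_mutex_lock(&p->lock);
    p->state = PMU_STATE_EXIT;
    p->exit_requested = true;          /* survives later state overwrites */
    pthread_cond_signal(&p->cond);
    pthread_mutex_unlock(&p->lock);
}

static void *pmu_kthread(void *arg)
{
    struct pmu *p = arg;

    pthread_mutex_lock(&p->lock);
    /* Check the sticky flag, not just the (overwritable) state. */
    while (!p->exit_requested)
        pthread_cond_wait(&p->cond, &p->lock);
    pthread_mutex_unlock(&p->lock);
    return NULL;
}

int main(void)
{
    struct pmu p = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER,
                     PMU_STATE_OFF, false };
    pthread_t t;

    pthread_create(&t, NULL, pmu_kthread, &p);
    pmu_request_exit(&p);
    p.state = PMU_STATE_LOADING_PG_BUF;  /* the overwrite no longer hides the exit */
    pthread_join(t, NULL);
    printf("worker exited cleanly\n");
    return 0;
}
```
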
* gpu: nvgpu: Enabling/disabling FECS trace support (seshendra, 2019-09-06)

- To enable FECS trace support, nvgpu should set the MSB of the read pointer (MAILBOX1).
- The ucode will check whether the feature is enabled/disabled before writing a record into the circular buffer. If the feature is disabled, it will not write the record.
- If the feature is enabled and the buffer is not allocated, HW will throw a page fault error.

Bug 2459186
Bug 200542611

Change-Id: I6f181643737d1cf1bda02077eaa714a3f4ef3d8c
Signed-off-by: seshendra <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2189250
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* Revert "gpu: nvgpu: Fix the race between runtime PM and L2 flush"Debarshi Dutta2019-09-05
| | | | | | | | | | | | | | | This patch results in a flaw that doesn't clear the GPU cache. This reverts commit 47f6bc0c2e85d0a8ff943b88c81108ca1bfc588e. Bug 2687410 Change-Id: If78bd7ca29eb5621d4369cbddf21320e2a77a41a Signed-off-by: Debarshi Dutta <ddutta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/2186886 GVS: Gerrit_Virtual_Submit Reviewed-by: Bibek Basu <bbasu@nvidia.com> Reviewed-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use vpr resize API (Vedashree Vidwans, 2019-08-30)

This patch adds an nvgpu API in Linux and QNX to query vpr resize. The new API nvgpu_is_vpr_resize_enabled() is used in nvgpu_submit_channel_gpfifo().

Previously, if a non-deterministic channel has timeout disabled and the GPU cannot railgate on some platform, then the channel doesn't take a power refcount, which results in a video freeze. This requires non-deterministic channel job tracking to be enabled if vpr resize is supported or if the GPU can railgate.

Bug 200532122

Change-Id: Icfbff6253762b195b2f5955749343974b1a7a269
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2167082
Reviewed-on: https://git-master.nvidia.com/r/2180581
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: support usermode submit buffers (Konsta Holtta, 2019-08-15)

Import userd and gpfifo buffers from userspace if provided via NVGPU_IOCTL_CHANNEL_ALLOC_GPFIFO_EX. Also supply the work submit token (i.e., the hw channel id) to userspace.

To keep the buffers alive, store their dmabuf and attachment/sgt handles in nvgpu_channel_linux. Our nvgpu_mem doesn't provide such data for buffers that are mainly in kernel use. The buffers are freed via a new API in the os_channel interface.

Fix a bug in gk20a_channel_free_usermode_buffers: also unmap the usermode gpfifo buffer.

Bug 200145225
Bug 200541476

Change-Id: I8416af7085c91b044ac8ccd9faa38e2a6d0c3946
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1795821
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry picked from commit 99b1c6dcdf328efcfe47338ad1b71a114ab7f272 in dev-main)
Reviewed-on: https://git-master.nvidia.com/r/2170603
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add FOREIGN_SGT mem flag (Konsta Holtta, 2019-08-15)

Add an internal flag NVGPU_MEM_FLAG_FOREIGN_SGT to specify that the sgt member of an nvgpu_mem must not be freed when the nvgpu_mem is freed.

Bug 200145225
Bug 200541476

Change-Id: I044fb91a5f9d148f38fb0cbf63d0cdfd64a070ce
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1819801
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry picked from commit 9de6d20abb8fef0cd11c22676846d809ee3f9afc in dev-main)
Reviewed-on: https://git-master.nvidia.com/r/2170602
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add usermode_base HAL (Konsta Holtta, 2019-08-15)

Add a HAL function pointer to fifo for reading the usermode_cfg0 register, and implement it for gv11b.

Bug 200145225
Bug 200541476

Change-Id: I5f77b15d3b502d9370b1f14129314eaf51a9d7d1
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1811839
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry picked from commit fddb2969240652e1a56089b249684b55430d45c5 in dev-main)
Reviewed-on: https://git-master.nvidia.com/r/2170004
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Add CHANNEL_SETUP_BIND IOCTL (Debarshi Dutta, 2019-08-15)

For a long time now, the ALLOC_GPFIFO_EX channel IOCTL has done much more than just gpfifo allocation, and its signature does not match support that's needed soon. Add a new one called SETUP_BIND to hopefully cover our future needs and deprecate ALLOC_GPFIFO_EX.

Change nvgpu internals to match this new naming as well.

Bug 200145225
Bug 200541476

Change-Id: I766f9283a064e140656f6004b2b766db70bd6cad
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1835186
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry-picked from e0c8a16c8d474eac6723fea3980833873ab921a6 in dev-main)
Reviewed-on: https://git-master.nvidia.com/r/2169882
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Use TPC_PG_MASK to powergate the TPC (Divya Singhatwaria, 2019-08-02)

- In GV11B, read the fuse_status_opt_tpc_gpc register to determine which TPCs are floorswept.
- The driver will also read the sysfs node tpc_pg_mask.
- Based on these two values, "can_tpc_powergate" will be set to true or false, and the mask will be used to write to the fuse_ctrl_opt_tpc_gpc register to powergate the TPC.
- can_tpc_powergate = true indicates that the mask value sent from userspace is valid and can be used to power gate the desired TPC.
- can_tpc_powergate = false indicates that the mask value sent from userspace is not valid and cannot be used to power gate the desired TPC.

Bug 200532639

Change-Id: Ib0806e4c96305a13b3574e8063ad8e16770aa7cd
Signed-off-by: Divya Singhatwaria <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2159219
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

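A sketch of the kind of mask validation implied above; the commit does not spell out the exact policy, so the two checks below are assumptions for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical validation of a user-supplied TPC power-gating mask against
 * the floorsweeping fuses; not the driver's actual policy. */
static bool can_tpc_powergate(uint32_t fuse_floorswept_mask,
                              uint32_t user_pg_mask,
                              uint32_t num_tpcs)
{
    uint32_t valid_bits = (num_tpcs >= 32) ? 0xffffffffu
                                           : ((1u << num_tpcs) - 1u);

    if (user_pg_mask & ~valid_bits)
        return false;   /* mask names TPCs that don't exist */
    if (user_pg_mask & fuse_floorswept_mask)
        return false;   /* mask names TPCs already floorswept in fuses */
    return true;
}

int main(void)
{
    /* 4 TPCs, TPC2 floorswept in fuses. */
    printf("gate TPC0: %d\n", can_tpc_powergate(0x4, 0x1, 4));
    printf("gate TPC2: %d\n", can_tpc_powergate(0x4, 0x4, 4));
    return 0;
}
```
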
* gpu: nvgpu: Fix the race between runtime PM and L2 flush (Debarshi Dutta, 2019-08-02)

gk20a_mm_l2_flush flushes the L2 cache when "struct gk20a->power_on" is true, but it doesn't acquire the power lock when doing so. This creates a race in which runtime PM might suspend the GPU in the middle of an L2 flush. The FB flush path appears to have the same issue as the L2 flush.

This patch fixes that by calling pm_runtime_get_if_in_use at the beginning of the ioctl. This runtime PM API does a compare-and-add on the usage count. If the device was not in use, it simply returns without incrementing the usage count, since it is unnecessary to wake up the GPU (e.g. via a call to gk20a_busy()) because the caches are flushed when the device is resumed anyway.

Bug 2643951

Change-Id: I2417f7ca3223c722dcb4d9057d32a7e065b9e574
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2151532
GVS: Gerrit_Virtual_Submit
Reviewed-by: Mark Zhang <markz@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: wait for gr.initialized before changing cg/pg (Debarshi Dutta, 2019-05-09)

Set gr.initialized to false at the beginning of gk20a_gr_reset() and set it to true at the end of a successful execution of gk20a_gr_reset(). Use gk20a_gr_wait_initialized() in the enable/disable cg/pg functions to make sure the engine is out of reset and initialized.

Bug 2092051
Bug 2429295
Bug 2484211
Bug 1890287

Change-Id: Ic7b0b71382c6d852a625c603dad8609c43b7f20f
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry-picked from 7e2f124fd12caf37172f12da8de65093622941a5 in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2111038
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add cg and pg function (Debarshi Dutta, 2019-05-09)

Add new power/clock gating functions that can be called by other units. New clock gating functions will reside in cg.c under the common/power_features/cg unit. New power gating functions will reside in pg.c under the common/power_features/pg unit.

Use nvgpu_pg_elpg_disable and nvgpu_pg_elpg_enable to disable/enable elpg, and also in the gr_gk20a_elpg_protected macro to access gr registers.

Add cg_pg_lock to make elpg_enabled, elcg_enabled, blcg_enabled and slcg_enabled thread safe.

JIRA NVGPU-2014

Change-Id: I00d124c2ee16242c9a3ef82e7620fbb7f1297aff
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2025493
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry-picked from c90585856567a547173a8b207365b3a4a3ccdd57 in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2108406
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add hal to mask/unmask intr during teardown (Seema Khowala, 2019-05-02)

The ctxsw timeout error prevents recovery, as it can get triggered periodically. Disable the ctxsw timeout interrupt to allow recovery.

Bug 2092051
Bug 2429295
Bug 2484211
Bug 1890287

Change-Id: I47470e13968d8b26cdaf519b62fd510bc7ea05d9
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2019645
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry picked from commit 68c13e2f0447118d7391807c9b9269749d09a4ec in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2024899
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: using pmu counters for load estimate (Peng Liu, 2019-04-01)

PMU counters #0 and #4 are used to count total cycles and busy cycles. These counts are used by podgov to estimate GPU load.

The PMU idle intr status register is used to monitor overflow. Overflow rarely occurs because the frequency governor reads and resets the counters at a high cadence. When overflow occurs, 100% work load is reported to the frequency governor.

Bug 1963732

Change-Id: I046480ebde162e6eda24577932b96cfd91b77c69
Signed-off-by: Peng Liu <pengliu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1939547
(cherry picked from commit 34df0035194e0203f68f679acdd84e5533a48149)
Reviewed-on: https://git-master.nvidia.com/r/1979495
Reviewed-by: Aaron Tian <atian@nvidia.com>
Tested-by: Aaron Tian <atian@nvidia.com>
Reviewed-by: Rajkumar Kasirajan <rkasirajan@nvidia.com>
Tested-by: Rajkumar Kasirajan <rkasirajan@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

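The load estimate reduces to a busy/total cycle ratio with overflow clamped to 100%. A minimal model (counter widths and names are assumptions, not the PMU code itself):

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch: busy/total cycle counters sampled by the frequency governor,
 * with a detected counter overflow reported as full load. */
static unsigned int estimate_load_pct(uint32_t busy_cycles,
                                      uint32_t total_cycles,
                                      int overflow)
{
    if (overflow || total_cycles == 0)
        return 100;   /* overflow is rare; report 100% load to be safe */
    return (unsigned int)((100ull * busy_cycles) / total_cycles);
}

int main(void)
{
    printf("load: %u%%\n", estimate_load_pct(400, 1000, 0));
    printf("load on overflow: %u%%\n", estimate_load_pct(400, 1000, 1));
    return 0;
}
```
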
* gpu: nvgpu: remove gk20a_is_channel_marked_as_tsg (Seema Khowala, 2019-03-18)

Use tsg_gk20a_from_ch to get the tsg pointer for the tsgid of a channel. For an invalid tsgid, the tsg pointer will be NULL.

Bug 2092051
Bug 2429295
Bug 2484211

Change-Id: I82cd6a2dc5fab4acb147202af667ca97a2842a73
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2006722
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry picked from commit 13f37f9c70b9ae2e0d179830cded93a0a6f86494 in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2025507
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: do not use raw spinlock for ch->timeout.lock (Seema Khowala, 2019-02-18)

With a PREEMPT_RT kernel, regular spinlocks are mapped onto sleeping spinlocks (rt_mutex locks), and raw spinlocks retain their behaviour.

Schedule-while-atomic can occur in gk20a_channel_timeout_start, as it acquires the ch->timeout.lock raw spinlock and then calls functions that acquire the ch->ch_timedout_lock regular spinlock.

Bug 200484795

Change-Id: Iacc63195d8ee6a2d571c998da1b4b5d396f49439
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2004100
(cherry picked from commit aacc33bb47aa8019c1a20b867d3722c241f7f93a in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2017923
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
Tested-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>