path: root/drivers/gpu/nvgpu/gk20a
Commit message | Author | Age
* gpu: nvgpu: Delete unused regops data | Alex Waterman | 2018-03-30

  Flagged by CLANG, this data is unused and may now be deleted.

  JIRA NVGPU-525

  Change-Id: Idf232b98aa3dfa6b03d29ec8b38cde58de20d29f
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1673819
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: WAR unlikely() bug in CLANG | Alex Waterman | 2018-03-30

  CLANG, when compiling regops_gk20a.c, sees the following warning:

    ../drivers/gpu/nvgpu/gk20a/regops_gk20a.c:464:30: error: equality comparison
        with extraneous parentheses [-Werror,-Wparentheses-equality]
      if (unlikely(skip_read_lo == false)) {
                   ~~~~~~~~~~~~~^~~~~~~~
    ../drivers/gpu/nvgpu/gk20a/regops_gk20a.c:464:30: note: remove extraneous
        parentheses around the comparison to silence this warning
      if (unlikely(skip_read_lo == false)) {
                  ~             ^       ~
    ../drivers/gpu/nvgpu/gk20a/regops_gk20a.c:464:30: note: use '=' to turn this
        equality comparison into an assignment
      if (unlikely(skip_read_lo == false)) {
                                ^~
                                =
    1 error generated.

  But this obviously is fine. However, it's simple enough to work around by
  just deleting the unlikely() call. We don't do anything with that anyway.

  JIRA NVGPU-525

  Change-Id: I674855ad08daf65ac6d79ceab7d4f56f637d4437
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1673818
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
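For illustration, a minimal sketch of the pattern CLANG trips over here and of the workaround the commit describes. This is not the actual nvgpu diff, and the unlikely() definition shown is an assumed plain __builtin_expect() wrapper:

    #include <stdbool.h>

    /*
     * Assumed definition: the parentheses the macro places around the
     * equality comparison are what -Wparentheses-equality flags as
     * "extraneous".
     */
    #define unlikely(x) __builtin_expect((x), 0)

    static int read_regs(bool skip_read_lo)
    {
        int reads = 0;

        /* Before: CLANG warns about the parenthesized comparison. */
        if (unlikely(skip_read_lo == false))
            reads++;

        /* After (the WAR): drop the unlikely() hint entirely. */
        if (skip_read_lo == false)
            reads++;

        return reads;
    }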
* gpu: nvgpu: Use proper signage in shift operation | Alex Waterman | 2018-03-30

  This fails with a warning when compiling with CLANG.

  JIRA NVGPU-525

  Change-Id: Ied04e1683d1740d7f946902edc93299d223564fc
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1673817
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
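The commit does not quote the offending code. As a hedged illustration only, a common shape of this class of CLANG warning is a signed value shifted into the sign bit, fixed by shifting an unsigned value instead:

    #include <stdint.h>

    /* Illustrative only -- not the actual nvgpu code. */
    static inline uint32_t sign_bit_mask_before(void)
    {
        return 1 << 31;   /* signed int shifted into bit 31: CLANG warns */
    }

    static inline uint32_t sign_bit_mask_after(void)
    {
        return 1U << 31;  /* unsigned literal: no warning */
    }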
* gpu: nvgpu: channel_gk20a.c cleanup for POSIX | Alex Waterman | 2018-03-30

  Remove a variable which is assigned to but never used.

  Add proper include (<nvgpu/log2.h>) for ilog2().

  JIRA NVGPU-525

  Change-Id: I42f3fddad9c294dc64343082e1dbd44b19120089
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1673816
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gops.fifo.setup_sw | Sourab Gupta | 2018-03-29

  bar1/userd setup is different for the RM server. Created a common
  function gk20a_init_fifo_setup_sw_common.

  Jira VQRM-3058

  Change-Id: I655b54e21ed5f15dcb8e7b01bd9cd129b35ae7a3
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1665691
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gops.fifo.set_error_notifier | Richard Zhao | 2018-03-29

  RM Server overrides it for handling stall interrupts.

  Jira VQRM-3058

  Change-Id: I8b14f073e952d19c808cb693958626b8d8aee8ca
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1679709
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gops.fifo.channel_suspend/channel_resume | Richard Zhao | 2018-03-29

  RM Server acts differently for channel suspend/resume.

  Jira VQRM-3058

  Change-Id: If41e3099164654db448d1157fd7f51dd00c5e201
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1679707
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gops.fifo.check_tsg_ctxsw_timeout/check_ch_ctxsw_timeout | Richard Zhao | 2018-03-29

  RM Server acts differently for the ctxsw timeout check. It won't check
  GP_GET or accumulated timeouts, but will notify the guest and go to
  recovery.

  Jira VQRM-3058

  Change-Id: I428aea34dc517311eb7e73feb556145e916309fb
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1679706
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gops.fifo.ch_abort_clean_up | Richard Zhao | 2018-03-29

  Channel abort clean up is only needed by the native and vgpu drivers,
  not by the RM server. The RM server expects the guest to clean up
  itself, so it should not set the callback.

  Jira VQRM-3058

  Change-Id: I11b49b6f2d51c871e31de16955d487dca82609cb
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1679705
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: make fifo/ch functions called by RM Server global | Sourab Gupta | 2018-03-29

  The patch declares globally a few channel/fifo HAL functions required
  for QNX code compilation (as they are being referred to elsewhere in
  QNX code). This is required as a part of bringing the nvgpu
  Channel/FIFO HAL into QNX.

  Jira VQRM-3058

  Change-Id: Ia176535b64de981d2f7ddb20f62015a0da74fd2a
  Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1662411
  GVS: Gerrit_Virtual_Submit
  Tested-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: enhance pbus error reporting | Seema Khowala | 2018-03-29

  - Dump timeout save0 and save1 even if they could be unreliable when
    fecs_tgt is set in save0. This is good to have for debug purposes.
  - Add a priv_ring hal for decode_error_code.
  - Decode the fecs error code for supported error types.

  Bug 1998067

  Change-Id: I60cb6902d099df4a7df45fa624e44d9e0d46360f
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1683014
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use gpc_tpc_count[gpc] for number of tpc in a gpc | Seema Khowala | 2018-03-29

  Using tpc_count instead of gpc_tpc_count indexed by gpc will result in
  a pbus error with decode error or client floorswept error codes.
  tpc_count represents the total number of tpc, while gpc_tpc_count[gpc]
  represents the number of tpc in the indexed gpc.

  Bug 1998067

  Change-Id: I9adfb98a6c3e209cbb02a8cd5090f6b6adc1ec4b
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1682469
  Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
  Tested-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
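A minimal sketch of the distinction the commit relies on; the loop helper is hypothetical, only the field names tpc_count and gpc_tpc_count[gpc] follow the commit text:

    struct gr_info {
        unsigned int gpc_count;
        unsigned int tpc_count;       /* total TPCs across the GPU */
        unsigned int *gpc_tpc_count;  /* TPCs per GPC, indexed by gpc */
    };

    /* Per-GPC register programming must bound the inner loop with
     * gpc_tpc_count[gpc], not the GPU-wide tpc_count. */
    static void for_each_gpc_tpc(const struct gr_info *gr,
                                 void (*fn)(unsigned int gpc, unsigned int tpc))
    {
        unsigned int gpc, tpc;

        for (gpc = 0; gpc < gr->gpc_count; gpc++)
            for (tpc = 0; tpc < gr->gpc_tpc_count[gpc]; tpc++)
                fn(gpc, tpc);
    }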
* gpu: nvgpu: fix priv error register reads | Thomas Fleury | 2018-03-28

  Current code does not compute priv error register offsets properly.
  This leads to invalid decoding of priv errors, and can also trigger
  additional priv errors.

  - add GPU_LIT_GPC_PRIV_STRIDE define
  - return proj_gpc_priv_stride for GPU_LIT_GPC_PRIV_STRIDE in hals
  - use GPU_LIT_GPC_PRIV_STRIDE instead of GPU_LIT_GPC_STRIDE in
    g->ops.priv_ring.isr() to compute priv error register offsets

  Bug 2093058

  Change-Id: Ia7c36ccba0441126784bb0e00452f2cf1196ef71
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1682118
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Seema Khowala <seemaj@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Tested-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
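A minimal sketch of the offset computation described above; the register name and stride value are placeholders, only the use of a dedicated GPC priv stride (GPU_LIT_GPC_PRIV_STRIDE, backed by proj_gpc_priv_stride) instead of the full GPC stride follows the commit:

    #include <stdint.h>

    #define GPC_PRIV_STRIDE     0x800u   /* placeholder for proj_gpc_priv_stride */
    #define GPC_PRIV_ERROR_INFO 0x1234u  /* placeholder error register offset    */

    /* Per-GPC priv error registers are spaced by the priv stride, not by
     * the (larger) full GPC stride. */
    static inline uint32_t gpc_priv_error_offset(uint32_t gpc)
    {
        return GPC_PRIV_ERROR_INFO + gpc * GPC_PRIV_STRIDE;
    }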
* gpu: nvgpu: simplify job semaphore release in abort | Konsta Holtta | 2018-03-28

  Instead of looping all jobs and releasing their semaphores separately,
  do just one semaphore release. All the jobs are using the same sema
  index, and the final, maximum value of it is known. Move also this
  resetting into ch->sync->set_min_eq_max() to be consistent with
  syncpoints.

  Change-Id: I03601aae67db0a65750c8df6b43387c042d383bd
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1680362
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: delete semaphore release support | Konsta Holtta | 2018-03-27

  Semaphores don't need to be released from CPU anymore, so clarify the
  code by deleting nvgpu_semaphore_release() and refactoring
  __nvgpu_semaphore_release() to nvgpu_semaphore_reset() that only
  "fast-forwards" the semaphore to a later value.

  While doing this, the meaning of nvgpu_semaphore_incr() changes, so
  rename it to nvgpu_semaphore_prepare(). Now it's only used to prepare
  an nvgpu_semaphore for a value that the HW will increment the sema to.

  Also change the BUG_ON that guards sema double-inits into just WARN_ON.

  Change-Id: I6f6df368ec5436cc97a229697742b6a4115dca51
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1680361
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Reset streaming on perfbuf_enable and perfbuf_disable | Martin Radev | 2018-03-26

  Similarly to css_hw_(enable|disable)_snapshot, the HWPM state should be
  reset on perfbuf_enable and perfbuf_disable to avoid leaking snapshot
  data into a freshly mapped buffer.

  Bug 1960846

  Change-Id: I94826b209ef4b8cb6ad44d3b8667745270c6a7e1
  Signed-off-by: Martin Radev <mradev@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1676009
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: disallow invalid syncpoint wait ids | Konsta Holtta | 2018-03-23

  Instead of ignoring a wait when a raw syncpoint prefence has an invalid
  id, reject the submit with -EINVAL just like with syncpoints in
  syncfds.

  Change-Id: I9b5c417bd1c7cd081c79659d088ac2c915de8c0e
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1680281
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: allow syncfds as prefences on deterministic | Konsta Holtta | 2018-03-23

  Accept submits on deterministic channels even when the prefence is a
  syncfd, but only if it has just one fence inside. Because
  NVGPU_SUBMIT_GPFIFO_FLAGS_SYNC_FENCE is shared between pre- and
  postfences, a postfence (SUBMIT_GPFIFO_FLAGS_FENCE_GET) is not allowed
  at the same time though.

  The sync framework is problematic for deterministic channels due to
  certain allocations that are not controlled by nvgpu. However, that
  only applies for postfences, yet we've disallowed FLAGS_SYNC_FENCE for
  deterministic channels even when a postfence is not needed.

  Bug 200390539

  Change-Id: I099bbadc11cc2f093fb2c585f3bd909143238d57
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1680271
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: set safe state for user managed syncpoints | Deepak Nibade | 2018-03-23

  The MAX/threshold value of a user managed syncpoint is not tracked by
  nvgpu, so if the channel is reset by nvgpu there could be waiters still
  waiting on some user syncpoint fence.

  Fix this by setting a large safe value on the user managed syncpoint
  when aborting the channel and when closing the channel. We right now
  increment the current value by 0x10000, which should be sufficient to
  release any pending waiter.

  Bug 200326065
  Jira NVGPU-179

  Change-Id: Ie6432369bb4c21bd922c14b8d5a74c1477116f0b
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1678768
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
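A sketch of the "safe value" idea under assumed helper names; only the 0x10000 increment follows the commit text:

    #include <stdint.h>

    #define USER_SYNCPT_SAFE_INCR 0x10000u

    /* Stand-ins for the real syncpoint accessors, for illustration only. */
    static uint32_t fake_syncpt;

    static uint32_t syncpt_read_current(uint32_t id) { (void)id; return fake_syncpt; }
    static void syncpt_set_value(uint32_t id, uint32_t val) { (void)id; fake_syncpt = val; }

    static void user_syncpt_set_safe_state(uint32_t id)
    {
        /*
         * nvgpu does not track the max/threshold of a user managed
         * syncpoint, so on channel abort/close simply fast-forward the
         * current value far enough to release any plausible waiter.
         */
        syncpt_set_value(id, syncpt_read_current(id) + USER_SYNCPT_SAFE_INCR);
    }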
* gpu: nvgpu: gv100: fix PMA list alignment in ctxsw buffer | Deepak Nibade | 2018-03-21

  The GV100 ucode is changed so that it expects the
  LIST_nv_perf_pma_ctx_reg list in the ctxsw buffer to be 256 byte
  aligned, but the same change is not applied to other chips' ucode.

  Add a new HAL (*add_ctxsw_reg_perf_pma) to configure the PMA register
  list, and define a common HAL gr_gk20a_add_ctxsw_reg_perf_pma() for all
  other chips except GV100.

  Define a separate HAL for GV100, gr_gv100_add_ctxsw_reg_perf_pma(), and
  fix the required alignment in this function.

  Bug 1998067

  Change-Id: Ie172fe90e2cdbac2509f2ece953cd8552e66fc56
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1676655
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv100: fix num_fbpas while adding ctxsw buffer entries | Deepak Nibade | 2018-03-21

  For LIST_nv_pm_fbpa_ctx_regs, we right now call
  add_ctxsw_buffer_map_entries_subunits() to add registers corresponding
  to all the FBPAs. But while configuring the total number of registers,
  we do not consider floorswept FBPAs, and that causes misalignment in
  subsequent lists for GV100.

  Fix this by reading disabled/floorswept FBPAs from fuse and considering
  only those FBPAs which are active for GV100.

  Add a new HAL (*add_ctxsw_reg_pm_fbpa) to support this setting and
  define a common HAL gr_gk20a_add_ctxsw_reg_pm_fbpa() for all chips
  except GV100.

  Define a GV100 specific gr_gv100_add_ctxsw_reg_pm_fbpa() with the above
  mentioned implementation to consider floorsweeping.

  Bug 1998067

  Change-Id: Id560551bb0b8142791c117b6d27864566c90b489
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1676654
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: delete unused job->pre_fence | Konsta Holtta | 2018-03-19

  The pre_fence member in channel_gk20a_job is no longer used for
  anything. Delete it. Only the post fence needs to be tracked.

  Jira NVGPU-527
  Jira NVGPU-528
  Bug 200390539

  Change-Id: Ia1a556728dabf9a8e305ed76020ac1aa0b4d6b88
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1676735
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: remove fence param from channel_sync | Konsta Holtta | 2018-03-16

  The fence parameter that gets output from gk20a_channel_sync's wait()
  and wait_fd() APIs is no longer used for anything. Delete it.

  Jira NVGPU-527
  Jira NVGPU-528
  Bug 200390539

  Change-Id: I659504062dc6aee83a0a0d9f5625372b4ae8c0e2
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1676734
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: remove support for foreign sema syncfds | Konsta Holtta | 2018-03-16

  Delete the proxy waiter for non-semaphore-backed syncfds in the sema
  wait path to simplify code, to remove dependencies on the sync
  framework (and thus Linux) and to support upcoming refactorings. This
  feature has never been used for actual foreign fences.

  Jira NVGPU-43
  Jira NVGPU-66

  Change-Id: I2b539aefd2d096a7bf5f40e61d48de7a9b3dccae
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1665119
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Use asid only under CONFIG_SYNC in channel_sync_gk20a.c | Alex Waterman | 2018-03-16

  This variable is only ever used under the CONFIG_SYNC config so make
  sure that we only define/assign to it when CONFIG_SYNC is enabled.

  JIRA NVGPU-525

  Change-Id: I27160adbd6a46f58e21f24ab19d37966ded5e7de
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1673812
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gpu_va to update_hwpm_ctxsw_mode parameters() | Aparna Das | 2018-03-16

  It'll allow the function to use fixed mapping.

  Jira VQRM-2982

  Change-Id: I98159c5b199ce1854b1b40704392237cadb71ef2
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1660225
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Tested-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-by: Nirav Patel <nipatel@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: wait for all prefence semas on gpu | Konsta Holtta | 2018-03-16

  The pre-fence wait for semaphores in the submit path has supported a
  fast path for fences that have only one underlying semaphore. The fast
  path just inserts the wait on this sema to the pushbuffer directly. For
  other fences, the path has been using a CPU wait indirection, signaling
  another semaphore when we get the CPU-side callback.

  Instead of only supporting prefences with one sema, unroll all the
  individual semaphores and insert waits for each to a pushbuffer, like
  we've already been doing with syncpoints. Now all sema-backed syncs get
  the fast path. This simplifies the logic and makes it more explicit
  that only foreign fences need the CPU wait.

  There is no need to hold references to the sync fence or the semas
  inside: this submitted job only needs the global read-only sema mapping
  that is guaranteed to stay alive while the VM of this channel stays
  alive, and the job does not outlive this channel.

  Jira NVGPU-43
  Jira NVGPU-66
  Jira NVGPU-513

  Change-Id: I7cfbb510001d998a864aed8d6afd1582b9adb80d
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1636345
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv10x volt rail boardobj changes | Mahantesh Kumbar | 2018-03-15

  - Created volt ops under pmu_ver to support volt_set_voltage,
    volt_get_voltage & volt_send_load_cmd_to_pmu.
  - Renamed volt load, set_voltage & get_voltage gp10x method names.
  - Added new volt load, set_voltage & get_voltage methods for gv10x
    using RPC & added code to handle the ack in pmu_rpc_handler(), along
    with struct rail_list changes.
  - Updated volt ops of gp106 & gv100 to point to the respective methods.
  - Added members volt_dev_idx_ipc_vmin & volt_scale_exp_pwr_equ_idx to
    "struct nv_pmu_volt_volt_rail_boardobj_set" & "struct voltage_rail";
    made changes to update members as needed.
  - Added member volt_scale_exp_pwr_equ_idx to
    "struct vbios_voltage_rail_table_1x_entry" to read the value from the
    VBIOS table & update the rail boardobj set interface.
  - Defines for volt RPC "NV_PMU_RPC_ID_VOLT_*".
  - Defined structs for volt load, set_voltage & get_voltage to execute
    the volt RPC.

  Change-Id: I4a41adcf7536468beaa8a73f551b1d608aabd161
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1659728
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Updated RPC to support copyback & callback | Mahantesh Kumbar | 2018-03-13

  - Updated & added the new parameter "bool is_copy_back" to
    nvgpu_pmu_rpc_execute() to support copying back the processed RPC
    request from the PMU to the caller when the parameter value is true;
    this blocks the method till it receives an ACK from the PMU for the
    requested RPC.
  - Added "struct rpc_handler_payload" to hold info required by the RPC
    handler, like the RPC buff address, and to clear memory if copy back
    is not requested.
  - Added define PMU_RPC_EXECUTE_CPB to support copying back the
    processed RPC request from the PMU to the caller.
  - Updated RPC callback handler support: created memory & assigned a
    default handler if a callback is not requested, else use the callback
    parameters data for the request to the PMU.
  - Added define PMU_RPC_EXECUTE_CB to support callback.
  - Updated pmu_wait_message_cond(); restricted the condition check to an
    8-bit instead of a 32-bit condition check.

  Change-Id: Ic05289b074954979fd0102daf5ab806bf1f07b62
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1664962
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add fault_ch to record_sm_error_state | Shashank Singh | 2018-03-13

  fault_ch is needed by rm-server to send the notification to the guest
  VM. rm-server is going to use the gr sources from Linux.

  Jira VQRM-2982

  Change-Id: Ifb6e8a9630a471d07b89ffaa7f2ceb309220fd21
  Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1661665
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: decouple sema and hw sema | Konsta Holtta | 2018-03-13

  struct nvgpu_semaphore represents (mainly) a threshold value that a
  sema at some index will get, and struct nvgpu_semaphore_int (aka
  "hw_sema") represents the allocation (and write access) of a semaphore
  index and the next value that the sema at that index can have.

  The threshold object doesn't need a pointer to the sema allocation that
  is not even guaranteed to exist for the whole threshold lifetime, so
  replace the pointer by the position of the sema in the sema pool. This
  requires some modifications to pass a hw sema around explicitly because
  it now represents write access more explicitly.

  Delete also the index field of semaphore_int because it can be directly
  derived from the offset in the sema location and is thus unnecessary.

  Jira NVGPU-512

  Change-Id: I40be523fd68327e2f9928f10de4f771fe24d49ee
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1658102
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add placeholder for IPA to PA | Thomas Fleury | 2018-03-13

  Add __nvgpu_sgl_phys function that can be used to implement IPA to PA
  translation in a subsequent change. Adapt existing function prototypes
  to add a pointer to the gpu context, as we will need to check if IPA to
  PA translation is needed.

  JIRA EVLR-2442
  Bug 200392719

  Change-Id: I5a734c958c8277d1bf673c020dafb31263f142d6
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1673142
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: hal for syncpt_incr_per_release | seshendra Gadagottu | 2018-03-12

  Create a hal to indicate syncpt increments per release. Legacy chips
  use 2 syncpt increments per release, and gv1xx onwards uses 1 syncpt
  increment per release.

  Bug 2066025

  Change-Id: I5d6d0a5368ef561f8150fbb7120181f49f6e338b
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1669817
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
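A minimal sketch of such a HAL under assumed struct and function names; only the 2-vs-1 increments-per-release split between legacy chips and gv11x onwards follows the commit text:

    struct gpu_syncpt_ops {
        unsigned int (*syncpt_incr_per_release)(void);
    };

    static unsigned int gk20a_syncpt_incr_per_release(void)
    {
        return 2;   /* legacy chips: 2 syncpt increments per release */
    }

    static unsigned int gv11b_syncpt_incr_per_release(void)
    {
        return 1;   /* gv11x onwards: 1 syncpt increment per release */
    }

    /* Chip init code would then pick one: */
    static const struct gpu_syncpt_ops gk20a_ops = {
        .syncpt_incr_per_release = gk20a_syncpt_incr_per_release,
    };
    static const struct gpu_syncpt_ops gv11b_ops = {
        .syncpt_incr_per_release = gv11b_syncpt_incr_per_release,
    };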
* gpu: nvgpu: add refcounting for ctxsw disable/enable | Shashank Singh | 2018-03-12

  ctxsw disable could be called recursively for RM server. Suspend
  contexts disables ctxsw at the beginning, then calls tsg disable and
  preempt. If a preempt timeout happens, it goes to the recovery path,
  which will try to disable ctxsw again. More details on Bug 200331110.

  Jira VQRM-2982

  Change-Id: I4659c842ae73ed59be51ae65b25366f24abcaf22
  Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1671716
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
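A sketch of the refcounting idea under assumed names: only the outermost enable re-enables context switching, so a nested disable from the recovery path is harmless.

    struct gr_ctxsw_state {
        int disable_count;   /* protected by a lock in real code */
    };

    static void gr_disable_ctxsw(struct gr_ctxsw_state *gr)
    {
        if (gr->disable_count++ == 0) {
            /* ... program HW to stop context switching ... */
        }
    }

    static void gr_enable_ctxsw(struct gr_ctxsw_state *gr)
    {
        if (--gr->disable_count == 0) {
            /* ... program HW to resume context switching ... */
        }
    }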
* gpu: nvgpu: support per-channel wdt timeouts | Konsta Holtta | 2018-03-09

  Replace the padding in nvgpu_channel_wdt_args with a timeout value in
  milliseconds, and add NVGPU_IOCTL_CHANNEL_WDT_FLAG_SET_TIMEOUT to
  signify the existence of this new field. When the new flag is included
  in the value of wdt_status, the field is used to set a per-channel
  timeout to override the per-GPU default.

  Add NVGPU_IOCTL_CHANNEL_WDT_FLAG_DISABLE_DUMP to disable the long debug
  dump when a timed out channel gets recovered by the watchdog. Printing
  the dump to serial console takes easily several seconds. (Note that
  there is NVGPU_TIMEOUT_FLAG_DISABLE_DUMP about ctxsw timeout separately
  for NVGPU_IOCTL_CHANNEL_SET_TIMEOUT_EX as well.)

  The behaviour of NVGPU_IOCTL_CHANNEL_WDT is changed so that either
  NVGPU_IOCTL_CHANNEL_ENABLE_WDT or NVGPU_IOCTL_CHANNEL_DISABLE_WDT has
  to be set. The old behaviour was that other values were silently
  ignored.

  The usage of the global default debugfs-controlled ch_wdt_timeout_ms is
  changed so that its value takes effect only for newly opened channels
  instead of in realtime. Also, zero value no longer means that the
  watchdog is disabled; there is a separate flag for that after all.

  gk20a_fifo_recover_tsg used to ignore the value of "verbose" when no
  engines were found. Correct this.

  Bug 1982826
  Bug 1985845
  Jira NVGPU-73

  Change-Id: Iea6213a646a66cb7c631ed7d7c91d8c2ba8a92a4
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1510898
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
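A hedged userspace sketch of how the reworked watchdog ioctl could be driven. The flag and ioctl names come from the commit text; the struct layout (the timeout_ms field name), the flag values and the header are assumptions for illustration only:

    #include <stdint.h>
    #include <sys/ioctl.h>

    struct nvgpu_channel_wdt_args {       /* assumed layout */
        uint32_t wdt_status;
        uint32_t timeout_ms;              /* replaces the old padding */
    };

    /* Values are illustrative; the real ones live in the nvgpu uapi header. */
    #define NVGPU_IOCTL_CHANNEL_ENABLE_WDT            (1u << 0)
    #define NVGPU_IOCTL_CHANNEL_DISABLE_WDT           (1u << 1)
    #define NVGPU_IOCTL_CHANNEL_WDT_FLAG_SET_TIMEOUT  (1u << 2)
    #define NVGPU_IOCTL_CHANNEL_WDT_FLAG_DISABLE_DUMP (1u << 3)

    static int set_channel_wdt(int ch_fd, unsigned long wdt_ioctl)
    {
        struct nvgpu_channel_wdt_args args = {
            /* Enable the watchdog with a 5 s per-channel timeout and
             * skip the long debug dump if the watchdog fires. */
            .wdt_status = NVGPU_IOCTL_CHANNEL_ENABLE_WDT |
                          NVGPU_IOCTL_CHANNEL_WDT_FLAG_SET_TIMEOUT |
                          NVGPU_IOCTL_CHANNEL_WDT_FLAG_DISABLE_DUMP,
            .timeout_ms = 5000,
        };

        return ioctl(ch_fd, wdt_ioctl, &args);
    }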
* gpu: nvgpu: boardobj update for gv10x branch | Mahantesh Kumbar | 2018-03-08

  - Created ops for the below boardobj methods to support gp10x & gv10x
    branch boardobj changes, and defined methods for gv10x with postfix
    _v1 with the below names:
      boardobjgrp_pmucmd_construct_impl
      boardobjgrp_pmuset_impl
      boardobjgrp_pmugetstatus_impl
      is_boardobjgrp_pmucmd_id_valid
  - These ops are assigned based on PMU version to the respective chip.
  - Modified BOARDOBJGRP_PMU_CMD_GRP_SET_CONSTRUCT &
    BOARDOBJGRP_PMU_CMD_GRP_GET_STATUS_CONSTRUCT to support gp10x & gv10x
    branch changes.
  - Updated struct boardobjgrp_pmu_cmd to include members needed for
    gv10x boardobj changes.
  - Created "struct nv_pmu_rpc_struct_board_obj_grp_cmd" to execute
    BOARD_OBJ_GRP_CMD using RPC.
  - Defined method boardobjgrp_pmucmdsend_rpc() to send BOARD_OBJ_GRP_CMD
    to the PMU.

  Change-Id: If2551bdda80e897e7b21d2966881586f3bbc7a9b
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1656511
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: PMU super surface support | Mahantesh Kumbar | 2018-03-08

  - Added ops "pmu.alloc_super_surface" to create memory space for the
    pmu super surface.
  - Defined method nvgpu_pmu_sysmem_surface_alloc() to allocate pmu super
    surface memory & assigned it to "pmu.alloc_super_surface" for gv100.
  - "pmu.alloc_super_surface" is set to NULL for gp106.
  - Memory space of size "struct nv_pmu_super_surface" is allocated
    during pmu sw init setup if "pmu.alloc_super_surface" is not NULL, &
    freed if an error occurs.
  - Added ops "pmu_ver.config_pmu_cmdline_args_super_surface" to describe
    PMU super surface details to the PMU ucode as part of the pmu command
    line args command if "pmu.alloc_super_surface" is not NULL.
  - Updated pmu_cmdline_args_v6 to include member
    "struct flcn_mem_desc_v0 super_surface".
  - Free allocated memory for the PMU super surface in the
    nvgpu_remove_pmu_support() method.
  - Added "struct nvgpu_mem super_surface_buf" to the "nvgpu_pmu" struct.
  - Created header file "gpmu_super_surf_if.h" to describe the pmu super
    surface interface; added "struct nv_pmu_super_surface" to hold super
    surface members along with rsvd[x] dummy space to sync member offsets
    with the PMU super surface members.

  Change-Id: I2b28912bf4d86a8cc72884e3b023f21c73fb3503
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1656571
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Enable IO coherency on GV100 | Alex Waterman | 2018-03-07

  This reverts commit 848af2ce6de6140323a6ffe3075bf8021e119434.

  This is a revert of a revert, etc, etc. It re-enables IO coherence
  again.

  JIRA EVLR-2333

  Change-Id: Ibf97dce2f892e48a1200a06cd38a1c5d9603be04
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1669722
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: BUG_ON for sema increment, not value | Konsta Holtta | 2018-03-07

  When adding a sema wait to a pushbuf, verify that the sema threshold
  has been incremented from the original value by reading the incremented
  field instead of value (which is set to nonzero by
  nvgpu_semaphore_incr()). Value could be 0 even after an increment if
  new semas weren't reset to 0.

  Jira NVGPU-514

  Change-Id: I295451fbc7eb9e597aea12d73074e99f74a6a899
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1658100
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
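A sketch of the corrected check under an assumed, simplified semaphore layout; the point is to test the explicit incremented flag rather than the threshold value, which can legitimately be 0:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    struct sema_threshold {     /* assumed, simplified layout */
        uint32_t value;         /* threshold the HW will reach     */
        bool incremented;       /* set once the threshold is taken */
    };

    static void add_sema_wait_cmd(const struct sema_threshold *s)
    {
        /*
         * Before: asserting on s->value being nonzero is wrong, since a
         * threshold of 0 is legal when semas are not reset to 0.
         * After: assert on the explicit incremented flag instead.
         */
        assert(s->incremented);

        /* ... emit the pushbuffer wait for s->value ... */
    }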
* gpu: nvgpu: add hal op to handle semaphore pending | Aparna Das | 2018-03-06

  The vserver variant for gr handle semaphore pending needs different
  functionality to send interrupt to VM. Add HAL operation to allow
  overriding vserver usecase.

  Jira VQRM-2982

  Change-Id: I5fee5a491c6e54344f9da477eaf5881c50335bbc
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1658298
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gops.semaphore_wakeup HAL | Richard Zhao | 2018-03-06

  vserver handles semaphore differently from native, so it needs a
  callback to differentiate from native. Also created common function
  mc_gk20a_handle_intr_nonstall to handle all nonstall interrupts.

  Jira VQRM-2982

  Change-Id: I1b3821717a4005ca4bf2a4dac5dcd335872f48f1
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1656753
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add debugger.post_events HAL op | Aparna Das | 2018-03-06

  RM Server will need to set specific HAL op and notify vgpu client.

  Jira VQRM-2982

  Change-Id: I679565831635ff3fadf0bdc1af5fd7a8679b6fdd
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1660226
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: make gr functions that are used by vsrv global | Richard Zhao | 2018-03-06

  Fixed vsrv link errors for gr unification.

  Jira VQRM-2982

  Change-Id: Icd46792191f1a9aaefbf86d2f3c0b4d5bce2384e
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1664706
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add hal op to handle post event id | Aparna Das | 2018-03-06

  The vserver variant for gr post event id needs different functionality
  to send interrupt to VM. Add HAL operation to allow overriding vserver
  usecase.

  Jira VQRM-2982

  Change-Id: I915d089ef751023968c1e8ab181c21afeec997a5
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1658382
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add hal op to handle notify pending | Aparna Das | 2018-03-06

  The vserver variant for gr handle notify pending needs different
  functionality to send interrupt to VM. Add HAL operation to allow
  overriding vserver usecase.

  Jira VQRM-2982

  Change-Id: I4cb88d4d769a5d5cb98a4ee6ac3fbb74245cb5f2
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1658255
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add hal op for gr set error notifier | Aparna Das | 2018-03-06

  The vserver variant for gr set error notifier needs different
  functionality to send interrupt to VM. Add HAL operation to allow
  overriding vserver usecase.

  Jira VQRM-2982

  Change-Id: Ia445a27112bb6c5587dbb81100a9dafe5875b338
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1657830
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: move chip detect in os specific probe code | Aparna Das | 2018-03-06

  This allows moving HAL overrides for vserver out of common chip
  specific HAL files into os specific probe code.

  Jira VQRM-3070

  Change-Id: Icc61aacc03ac7db7a0ea1f6a2dd2b76185c74757
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1656752
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>
  Tested-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: implement gfxp wfi controls | Kirill Artamonov | 2018-03-06

  /sys/devices/gpu.0/gfxp_wfi_timeout_unit
    usec   - microseconds
    sysclk - gpu clock count

  Treat gr_fe_gfxp_wfi_timeout_r as context-switched register on gv11b.

  Set default gfxp_wfi_timeout to 100 usec to match gp10b at 1GHz.

  bug 1888344

  Signed-off-by: Kirill Artamonov <kartamonov@nvidia.com>
  Change-Id: I7fa64ce6912ae861244856807543b17bd7a26bed
  Reviewed-on: https://git-master.nvidia.com/r/1651517
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: modify logic for fecs arb idle check | seshendra Gadagottu | 2018-03-06

  Check these conditions for fecs arbiter idle:
  1. Wait for gr_fecs_arb_ctx_cmd to idle
  2. Wait for gr_fecs_ctxsw_status_1:arb_busy to idle

  Just waiting for condition 1 only guarantees dispatching of the
  command, but not completion of the work.

  Bug 2056266

  Change-Id: I3fcbdb9cda91fee0ff345a24bf23ba7dabf268d0
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1667550
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
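A sketch of the two-stage wait described above; register offsets, bit masks and the register-read helper are placeholders, only the two conditions (arb_ctx_cmd drained, then arb_busy clear) follow the commit text:

    #include <stdint.h>

    #define GR_FECS_ARB_CTX_CMD     0x0u            /* placeholder offsets */
    #define GR_FECS_CTXSW_STATUS_1  0x4u
    #define CTXSW_STATUS_1_ARB_BUSY (1u << 12)      /* placeholder bit */

    /* Stand-in register read so the sketch is self-contained. */
    static uint32_t gpu_readl(uint32_t reg) { (void)reg; return 0; }

    static int wait_fecs_arb_idle(void)
    {
        int retries = 1000;

        /* 1. Wait for the arbiter command register to drain. */
        while (gpu_readl(GR_FECS_ARB_CTX_CMD) != 0) {
            if (--retries == 0)
                return -1;
        }

        /* 2. Then wait for arb_busy to clear: only this guarantees the
         *    dispatched work has actually completed. */
        retries = 1000;
        while (gpu_readl(GR_FECS_CTXSW_STATUS_1) & CTXSW_STATUS_1_ARB_BUSY) {
            if (--retries == 0)
                return -1;
        }

        return 0;
    }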
* gpu: nvgpu: clean up memory leaks in gr init | Seema Khowala | 2018-03-06

  This is to resolve a memory leak after modifying the tpc_fs_mask sysfs
  node, which sets gr.sw_ready to false and forces gr to re-initialize:

    echo 1 > /sys/devices/gpu.0/force_idle
    echo 5 > /sys/devices/gpu.0/tpc_fs_mask
    echo 0 > /sys/devices/gpu.0/force_idle

  Bug 200393029

  Change-Id: I76299f53fc87823071c672ec682c3eb51f72f513
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1666018
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>