Commit message (Author, Date)

* gpu: nvgpu: delete unused job->pre_fence (Konsta Holtta, 2018-03-19)

  The pre_fence member in channel_gk20a_job is no longer used for anything. Delete it.
  Only the post fence needs to be tracked.

  Jira NVGPU-527 Jira NVGPU-528 Bug 200390539 Change-Id: Ia1a556728dabf9a8e305ed76020ac1aa0b4d6b88 Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1676735 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: fix boardobjgrp getstatus error on gv10x (Mahantesh Kumbar, 2018-03-19)

  Requirement: some boardobjgrps do not need getstatus support, so the boardobjgrp
  PMU command is not constructed for them. In that case no memory should be
  requested, and the code should exit cleanly without allocating anything.

  Fix: skip the memory allocation when the boardobjgrp PMU command has not been
  constructed, by checking the "fbsize" member of "struct boardobjgrp_pmu_cmd".

  Change-Id: I610d6812ec1d1bcf7ea38645236601b3e5672650 Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1674191 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

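  A minimal sketch of the guard described in the fix above, with simplified struct
  and function names (not the actual nvgpu definitions):

      #include <stdint.h>
      #include <stddef.h>

      /* Simplified stand-in for the real boardobjgrp PMU command descriptor. */
      struct boardobjgrp_pmu_cmd {
              uint32_t fbsize;  /* stays 0 when the GETSTATUS cmd was never constructed */
      };

      /* Skip the surface allocation when the command was not constructed. */
      static int boardobjgrp_pmu_surface_alloc_if_needed(struct boardobjgrp_pmu_cmd *cmd)
      {
              if (cmd == NULL || cmd->fbsize == 0U)
                      return 0;  /* nothing requested: exit cleanly, no allocation */

              /* ... allocate a surface of cmd->fbsize bytes here ... */
              return 0;
      }
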
* gpu: nvgpu: Use correct PD for determining next aperture (Alex Waterman, 2018-03-17)

  When generating the aperture field for the PDE being programmed we must use the
  next PD, not the current PD. This is important for cases on the dGPU where VIDMEM
  runs out. In such cases the page table may reside in both VIDMEM and SYSMEM.
  Thus, if a PDE points to a PDE in a different type of memory (VIDMEM -> SYSMEM or
  SYSMEM -> VIDMEM) then the aperture will not be programmed correctly if the code
  uses the current PD for picking the next PD aperture.

  Bug 2082475 Change-Id: Ic1a8d1e2c2237712039dc298b97095d3bbc6c844 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1676831 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

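  A rough illustration of the rule in the change above: the aperture bits written
  into a PDE must describe where the pointed-to (next) directory lives, not where
  the current one lives. Types, encodings and helper names below are made up for
  the sketch:

      #include <stdint.h>

      enum aperture { APERTURE_SYSMEM, APERTURE_VIDMEM };   /* illustrative */

      struct gmmu_pd {
              enum aperture aperture;   /* memory type backing this directory */
              uint64_t      base;       /* base address of this directory */
      };

      /* The PDE lives in 'pd' but points at 'next_pd'; its aperture field must be
       * derived from next_pd, otherwise a VIDMEM->SYSMEM (or SYSMEM->VIDMEM) link
       * in a mixed page table gets programmed with the wrong aperture. */
      static uint32_t pde_aperture_bits(const struct gmmu_pd *pd,
                                        const struct gmmu_pd *next_pd)
      {
              (void)pd;  /* intentionally unused: the current PD does not matter */
              return next_pd->aperture == APERTURE_VIDMEM ? 1U : 2U;  /* made-up encodings */
      }
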
* gpu: nvgpu: remove fence param from channel_sync (Konsta Holtta, 2018-03-16)

  The fence parameter that gets output from gk20a_channel_sync's wait() and
  wait_fd() APIs is no longer used for anything. Delete it.

  Jira NVGPU-527 Jira NVGPU-528 Bug 200390539 Change-Id: I659504062dc6aee83a0a0d9f5625372b4ae8c0e2 Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1676734 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Alex Waterman <alexw@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add coherent case in gp10b_get_pde0_pgsz (Thomas Fleury, 2018-03-16)

  gp10b_get_pde0_pgsz computes pgsz depending on aperture and address, but it was
  not handling the sysmem coherent case.

  Bug 2082475 Change-Id: I095acb05e3f917518368b879f5839f8e9dbcd8ea Signed-off-by: Thomas Fleury <tfleury@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1676255 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Alex Waterman <alexw@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: remove support for foreign sema syncfds (Konsta Holtta, 2018-03-16)

  Delete the proxy waiter for non-semaphore-backed syncfds in the sema wait path to
  simplify the code, to remove dependencies on the sync framework (and thus Linux),
  and to support upcoming refactorings. This feature has never been used for actual
  foreign fences.

  Jira NVGPU-43 Jira NVGPU-66 Change-Id: I2b539aefd2d096a7bf5f40e61d48de7a9b3dccae Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1665119 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Alex Waterman <alexw@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* Revert "gpu: nvgpu: remove aggressive_sync_destroy_thresh check for user ↵Deepak Nibade2018-03-16
| | | | | | | | | | | | | | | | | | | | | | syncpoint" This reverts commit fb40f2a80739985abac273bc493e07341aa003af. aggressive_sync_destroy_thresh was inadvertently set for gv11b vGPU, and that is now being removed hence restore original check Bug 200397265 Bug 200326065 Change-Id: If56e1c462adb2db7d9186fbb6038169aa7ea33dc Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1676556 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: remove aggressive_sync_destroy_thresh for gv11b (Deepak Nibade, 2018-03-16)

  aggressive_sync_destroy_thresh was inadvertently set for gv11b on vGPU, and that
  caused issues while allocating a user-managed syncpoint. Remove that threshold,
  as it is no longer needed.

  Bug 200397265 Bug 200326065 Change-Id: I63dfdcae1fd7b99068d07807c84775b9a9f9f95d Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1676555 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Richard Zhao <rizhao@nvidia.com> Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Don't ioremap() regs when using POSIX (Alex Waterman, 2018-03-16)

  When __NVGPU_POSIX__ is defined, do not use ioremap(). This operation probably
  doesn't make much sense there. Currently we have no plans to run the driver in
  userspace against a real GPU, hence programming the nvlink credits registers is
  simply not necessary. Also fix an unused variable by returning it as an error.

  JIRA NVGPU-525 Change-Id: Ic94d332551f6e25c1836331bf92188e7651546cb Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1673815 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: convert debug function to use nvgpu_info() (Alex Waterman, 2018-03-16)

  An RPFB debug function was still using pr_info() instead of nvgpu_info(), so make
  that conversion.

  JIRA NVGPU-525 Change-Id: Ib157dfd2f743374215bc16230c7f422601133d2f Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1673814 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Cleanup macro in clk_gm20b.c (Alex Waterman, 2018-03-16)

  Clean up a macro in clk_gm20b.c to not use pr_info(); use nvgpu_info() instead.
  Also add the necessary includes.

  JIRA NVGPU-525 Change-Id: I2dcaf41c1e31131acf63b24b33b5a24795128024 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1673813 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Use asid only under CONFIG_SYNC in channel_sync_gk20a.c (Alex Waterman, 2018-03-16)

  This variable is only ever used under the CONFIG_SYNC config option, so make sure
  that we only define and assign it when CONFIG_SYNC is enabled.

  JIRA NVGPU-525 Change-Id: I27160adbd6a46f58e21f24ab19d37966ded5e7de Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1673812 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

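  The shape of the fix, as a non-compilable sketch (variable and field names are
  simplified assumptions): the declaration and assignment move inside the same
  CONFIG_SYNC guard as the only consumer, so other builds carry no unused variable:

      #ifdef CONFIG_SYNC
              int asid = -1;                  /* only exists when CONFIG_SYNC is set */

              if (c->vm->as_share)
                      asid = c->vm->as_share->id;

              /* ... asid is consumed only by CONFIG_SYNC-gated trace/debug code ... */
      #endif
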
* gpu: nvgpu: Abstract get_cycles() (Alex Waterman, 2018-03-16)

  get_cycles() is a Linux-specific API used in common code. It is being used, it
  seems, as a way to generate time stamps, so add an API to generate 'high
  resolution' time stamps. The new API returns an opaque time stamp: it is not
  something one may use directly as a time, since the Linux implementation just
  returns this cycle counter. Other implementations are, of course, free to
  implement it as a real time stamp.

  JIRA NVGPU-525 Change-Id: I237aac9bd6c795d000459025bdb4fce92e8aaa3d Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1673811 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

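  A sketch of what such an abstraction can look like; the wrapper name and its
  placement are assumptions, only get_cycles() is a real Linux API:

      /* os/linux backend: the "timestamp" is just the raw cycle counter. */
      u64 nvgpu_hr_timestamp(void)
      {
              return get_cycles();
      }

      /* Common-code usage: treat the value as opaque; only differences between two
       * stamps from the same backend are meaningful (they are not nanoseconds). */
      u64 start = nvgpu_hr_timestamp();
      /* ... work being profiled ... */
      u64 delta = nvgpu_hr_timestamp() - start;
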
* gpu: nvgpu: add gpu_va to update_hwpm_ctxsw_mode() parameters (Aparna Das, 2018-03-16)

  This allows the function to use a fixed mapping.

  Jira VQRM-2982 Change-Id: I98159c5b199ce1854b1b40704392237cadb71ef2 Signed-off-by: Aparna Das <aparnad@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1660225 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Tested-by: Richard Zhao <rizhao@nvidia.com> Reviewed-by: Nirav Patel <nipatel@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: wait for all prefence semas on gpu (Konsta Holtta, 2018-03-16)

  The pre-fence wait for semaphores in the submit path has supported a fast path
  for fences that have only one underlying semaphore. The fast path just inserts
  the wait on this sema into the pushbuffer directly. For other fences, the path
  has been using a CPU wait indirection, signaling another semaphore when we get
  the CPU-side callback.

  Instead of only supporting prefences with one sema, unroll all the individual
  semaphores and insert waits for each into the pushbuffer, like we've already been
  doing with syncpoints. Now all sema-backed syncs get the fast path. This
  simplifies the logic and makes it more explicit that only foreign fences need the
  CPU wait.

  There is no need to hold references to the sync fence or the semas inside: this
  submitted job only needs the global read-only sema mapping that is guaranteed to
  stay alive while the VM of this channel stays alive, and the job does not outlive
  this channel.

  Jira NVGPU-43 Jira NVGPU-66 Jira NVGPU-513 Change-Id: I7cfbb510001d998a864aed8d6afd1582b9adb80d Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1636345 Reviewed-by: Alex Waterman <alexw@nvidia.com> Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: remove aggressive_sync_destroy_thresh check for user syncpoint (Deepak Nibade, 2018-03-15)

  VGPU has aggressive_sync_destroy_thresh set even for GV11B, and that breaks
  allocation of a user-managed syncpoint on VGPU. Remove this check for now, until
  a solution is finalized.

  Bug 200397265 Bug 200326065 Change-Id: Idd765cfdd40b9055d9e083d59c85c84d8b213ee9 Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1675678 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Konsta Holtta <kholtta@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>

* nvgpu: Remove ASYNC PROBE for vgpu (Nagaraj P N, 2018-03-15)

  Async probe of the vgpu driver results in a race condition where GICD registers
  are programmed incorrectly. Remove ASYNC_PROBE for the vgpu driver as a WAR to
  prevent this. This change will be reverted once GICD register programming is
  serialized.

  Bug 200385192 Change-Id: I7279152867470ece93c5efbd72ac24db28878024 Signed-off-by: Nagaraj P N <nagarajp@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1674898 Reviewed-by: Sreenivasulu Velpula <svelpula@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Richard Zhao <rizhao@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Vipin Kumar <vipink@nvidia.com> Tested-by: Vipin Kumar <vipink@nvidia.com> Reviewed-by: Sandeep Trasi <strasi@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gp10x PMU f/w version update (Mahantesh Kumbar, 2018-03-15)

  - Update the gp10x PMU firmware version for ucode.
    Git CL: https://git-master.nvidia.com/r/#/c/1674816/
    P4 CL#: 23732390

  Change-Id: I4426f7fc96b52f342ac885199e7dd3e413af4a8e Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1674857 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv10x volt policy boardobj changes (Mahantesh Kumbar, 2018-03-15)

  - Added support for the single rail multi step volt policy; the defines and
    structs below were added/updated to support it:
    CTRL_VOLT_POLICY_TYPE_SINGLE_RAIL_MULTI_STEP 0x04,
    NV_VBIOS_VOLTAGE_POLICY_1X_ENTRY_TYPE_SINGLE_RAIL_MULTI_STEP 0x04,
    updated struct vbios_voltage_policy_table_1x_entry and
    struct nv_pmu_volt_volt_policy_sr_multi_step_boardobj_set, which holds members
    that configure single rail multi step, like the delay between switch steps and
    the ramp up and ramp down step sizes in uV.
  - Added a case to support SINGLE_RAIL_MULTI_STEP in volt_volt_policy_construct()
    based on the boardobj type.
  - Added a case to support SINGLE_RAIL_MULTI_STEP in volt_get_volt_policy_table()
    to read data from the VBIOS table vbios_voltage_policy_table_1x_entry and
    extract it to voltage_policy_single_rail_multi_step.
  - Added methods to forward single rail multi step data to the PMU by assigning
    data read from the VBIOS voltage_policy_single_rail_multi_step to the
    nv_pmu_volt_volt_policy_sr_multi_step_boardobj_set interface:
    volt_construct_volt_policy_single_rail_multi_step()
    volt_policy_pmu_data_init_sr_multi_step()
    volt_policy_pmu_data_init_single_rail()
    construct_volt_policy_single_rail()

  Change-Id: I17bc8c320777191611365ee63274c38ffe5ecbf7 Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1660687 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv10x volt rail boardobj changes (Mahantesh Kumbar, 2018-03-15)

  - Created volt ops under pmu_ver to support volt_set_voltage, volt_get_voltage &
    volt_send_load_cmd_to_pmu.
  - Renamed volt load, set_voltage & get_voltage gp10x method names.
  - Added new volt load, set_voltage & get_voltage methods for gv10x using RPC &
    added code to handle ack in pmu_rpc_handler() along with struct rail_list
    changes.
  - Updated volt ops of gp106 & gv100 to point to respective methods.
  - Added member volt_dev_idx_ipc_vmin & volt_scale_exp_pwr_equ_idx to
    "struct nv_pmu_volt_volt_rail_boardobj_set" & "struct voltage_rail"; made
    changes to update members as needed.
  - Added member volt_scale_exp_pwr_equ_idx to
    "struct vbios_voltage_rail_table_1x_entry" to read the value from the VBIOS
    table & update the rail boardobj set interface.
  - Defines for volt RPC "NV_PMU_RPC_ID_VOLT_*"
  - Defined structs for volt load, set_voltage & get_voltage to execute volt RPC.

  Change-Id: I4a41adcf7536468beaa8a73f551b1d608aabd161 Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1659728 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: init soc vars from nvgpu_probe (Thomas Fleury, 2018-03-14)

  Invoke nvgpu_init_soc_vars from common nvgpu_probe instead of pci specific
  nvgpu_pci_tegra_probe.

  Bug 200392719 Change-Id: Ibb0474f2497234ba2e393790020af89a0266f5df Signed-off-by: Thomas Fleury <tfleury@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1674016 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Deepak Bhosale <dbhosale@nvidia.com> Reviewed-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Pass correct va_allocated field in .gmmu_unmap() (Alex Waterman, 2018-03-14)

  When nvgpu maps an nvgpu_mem struct the nvgpu driver has a choice of either using
  a fixed or non-fixed mapping. For non-fixed mappings the GMMU APIs allocate a VA
  space for the caller. In that case the GMMU APIs must also free that VA range
  when nvgpu unmaps the nvgpu_mem. For fixed mappings the GMMU APIs must instead
  not manage the lifetime of the VA space.

  To support these two possibilities add a field to nvgpu_mem that specifies
  whether the GMMU APIs must or must not free the GPU VA range during the GMMU
  unmap operation.

  Also fix a case in the nvgpu vm_area code that would double free a VA allocation
  in some cases (sparse allocs).

  Change-Id: Idc32dbb8208fa7c1c05823e67b54707fea51c6b7 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1669920 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

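  A simplified sketch of the ownership rule described above; the field name and the
  VA-free helper shown here are illustrative, not the exact nvgpu code:

      /* Non-fixed mappings: the GMMU code allocated the VA, so it must free it on
       * unmap. Fixed mappings: the caller owns the VA and it must be left alone. */
      static void gmmu_unmap_sketch(struct vm_gk20a *vm,
                                    struct nvgpu_mem *mem, u64 gpu_va)
      {
              /* ... remove the page table entries covering the mapping ... */

              if (mem->free_gpu_va)               /* set only for non-fixed maps */
                      vm_free_va_sketch(vm, gpu_va);  /* hypothetical helper */
      }
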
* gpu: nvgpu: Updated RPC to support copyback & callback (Mahantesh Kumbar, 2018-03-13)

  - Updated & added a new parameter "bool is_copy_back" to nvgpu_pmu_rpc_execute()
    to support copying the processed RPC request back from the PMU to the caller by
    passing the parameter value true; this blocks the method until it receives an
    ACK from the PMU for the requested RPC.
  - Added "struct rpc_handler_payload" to hold info required for the RPC handler,
    like the RPC buffer address, & clear memory if copy back is not requested.
  - Added define PMU_RPC_EXECUTE_CPB to support copying the processed RPC request
    back from the PMU to the caller.
  - Updated RPC callback handler support: created memory & assigned the default
    handler if a callback is not requested, else use the callback parameters data
    in the request to the PMU.
  - Added define PMU_RPC_EXECUTE_CB to support callbacks.
  - Updated pmu_wait_message_cond(), restricted the condition check to an 8-bit
    instead of a 32-bit condition check.

  Change-Id: Ic05289b074954979fd0102daf5ab806bf1f07b62 Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1664962 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add tsg_id to vgpu_gr_ctx struct (Shashank Singh, 2018-03-13)

  To reuse the Linux gr code for QNX, tsg_id is required during alloc_gr_ctx.
  rm-server will reuse the gr_ctx from the TSG and will not allocate it.

  Jira VQRM-2982 Change-Id: I236deb181b89a38e70dedca4190a4275be9f0b28 Signed-off-by: Shashank Singh <shashsingh@nvidia.com> Signed-off-by: Richard Zhao <rizhao@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1659907 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Thomas Fleury <tfleury@nvidia.com> Reviewed-by: Sourab Gupta <sourabg@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: vgpu: change commit_inst sequence in gr_alloc (Shashank Singh, 2018-03-13)

  Since rm-server is going to use the gr sources from Linux, including
  subctx_gv11b.c, commit_inst should be done after the global_ctx_buffer map and
  commit. gv11b_update_subctx_header is called from rm-server for
  alloc_subctx_header, which uses global_ctx_buffer_va[PRIV_ACCESS_MAP_VA].

  Jira VQRM-2982 Change-Id: Iff953bf0a12db2c6d69d35094969ab9485858025 Signed-off-by: Shashank Singh <shashsingh@nvidia.com> Signed-off-by: Richard Zhao <rizhao@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1661187 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Sourab Gupta <sourabg@nvidia.com> Reviewed-by: Thomas Fleury <tfleury@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add fault_ch to record_sm_error_state (Shashank Singh, 2018-03-13)

  fault_ch is needed by rm-server to send the notification to the guest VM.
  rm-server is going to use the gr sources from Linux.

  Jira VQRM-2982 Change-Id: Ifb6e8a9630a471d07b89ffaa7f2ceb309220fd21 Signed-off-by: Shashank Singh <shashsingh@nvidia.com> Signed-off-by: Richard Zhao <rizhao@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1661665 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: PMU nv_pmu_boardobj & queue update (Mahantesh Kumbar, 2018-03-13)

  - Updated "struct nv_pmu_boardobj, nv_pmu_boardobj_query & nv_pmu_boardobjgrp_super"
    by adding new members as per the gv10x PMU ucode boardobj interface.
  - Created "PMU_QUEUE_COUNT_FOR_V5 4" for gv10x PMU ucode
  - Created "PMU_QUEUE_MSG_IDX_FOR_V5 3" for gv10x PMU ucode
  - Deleted unused "PMU_QUEUE_MSG_IDX_FOR_4"
  - Updating "APP_VERSION_GV10X 23616379" for ucode
    git CL: https://git-master.nvidia.com/r/#/c/1662993/ P4 CL#: 23647491
  - Updating "APP_VERSION_GP10X 22099494" for ucode
    git CL: https://git-master.nvidia.com/r/#/c/1662995/ P4 CL#: 23647537

  Change-Id: I6e8e2b30e81422f8b529a2fad6d926f93bd73d3e Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1656643 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv11b: pmu: add dma coherent support (seshendra Gadagottu, 2018-03-13)

  Set up PMU apertures based on the DMA coherent property.

  Bug 200394053 Change-Id: I45beff671e4b8741f2b1ffbc811618b074772ea0 Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1641609 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Thomas Fleury <tfleury@nvidia.com> Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: use also normal logging with TRACE_PRINTK (Konsta Holtta, 2018-03-13)

  When CONFIG_GK20A_TRACE_PRINTK is set to support printing to the ftrace log
  instead of the normal kernel log, but log_trace from debugfs is not set, fall
  back to normal kernel logging instead of not logging anything.

  Change-Id: I553baed20a52108229dbcc5c63e8af4e1bcd1b30 Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1674250 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

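  Roughly the decision being changed, as a sketch with simplified macro and
  function names (only trace_printk() and the config symbol are real):

      #ifdef CONFIG_GK20A_TRACE_PRINTK
              if (log_trace_enabled)                   /* debugfs "log to ftrace" knob */
                      trace_printk("%s\n", msg);
              else
                      nvgpu_log_fallback(g, "%s", msg); /* was: message silently dropped */
      #else
              nvgpu_log_fallback(g, "%s", msg);
      #endif
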
* gpu: nvgpu: depend on TRACING for TRACE_PRINTK (Konsta Holtta, 2018-03-13)

  Modify the GK20A_TRACE_PRINTK config such that it depends on TRACING instead of
  FTRACE_PRINTK. The latter is not in upstream Linux nor in our downstream 4.9, and
  this option defaults to n anyway, so this is a pretty safe change.

  Change-Id: If4ce5a041c8392d0bc54a60730c6ab3115b0062a Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1674114 GVS: Gerrit_Virtual_Submit Reviewed-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: vgpu: add user API to get a syncpoint (Lakshmanan M, 2018-03-13)

  Add the new characteristics flag NVGPU_GPU_FLAGS_SUPPORT_USER_SYNCPOINT to
  indicate support for this new API. Add the new flag NVGPU_SUPPORT_USER_SYNCPOINT
  for use by the core driver. Set this flag for VGPU-GV11B.

  Bug 200326065 Jira NVGPU-179 Change-Id: I6c992b13268b688a2bbc93a3331e987ea2f7dd0c Signed-off-by: Lakshmanan M <lm@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1670452 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Deepak Nibade <dnibade@nvidia.com> Tested-by: Jitendra Pratap Singh Chauhan <jchauhan@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: decouple sema and hw sema (Konsta Holtta, 2018-03-13)

  struct nvgpu_semaphore represents (mainly) a threshold value that a sema at some
  index will get, and struct nvgpu_semaphore_int (aka "hw_sema") represents the
  allocation (and write access) of a semaphore index and the next value that the
  sema at that index can have. The threshold object doesn't need a pointer to the
  sema allocation, which is not even guaranteed to exist for the whole threshold
  lifetime, so replace the pointer by the position of the sema in the sema pool.

  This requires some modifications to pass a hw sema around explicitly, because it
  now represents write access more explicitly.

  Also delete the index field of semaphore_int, because it can be derived directly
  from the offset in the sema location and is thus unnecessary.

  Jira NVGPU-512 Change-Id: I40be523fd68327e2f9928f10de4f771fe24d49ee Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1658102 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv100: add IPA to PA translation (Thomas Fleury, 2018-03-13)

  Add IPA to PA translation for GV100 nvlink / pass-through mode:
  - define the platform->phys_addr(g, ipa) method
  - call nvgpu_init_soc_vars from nvgpu_tegra_pci_probe
  - in nvgpu_init_soc_vars, set platform->phys_addr to nvgpu_tegra_hv_ipa_pa if a
    hypervisor is present
  - in __nvgpu_sgl_phys, use sg_phys, then apply platform->phys_addr if defined
  - implement the IPA to PA translation in nvgpu_tegra_hv_ipa_pa

  Bug 200392719 Change-Id: I622049ddc62c2a57a665dd259c1bb4ed3843a537 Signed-off-by: Thomas Fleury <tfleury@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1673582 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Richard Zhao <rizhao@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add placeholder for IPA to PA (Thomas Fleury, 2018-03-13)

  Add an __nvgpu_sgl_phys function that can be used to implement IPA to PA
  translation in a subsequent change. Adapt existing function prototypes to add a
  pointer to the GPU context, as we will need to check whether IPA to PA
  translation is needed.

  JIRA EVLR-2442 Bug 200392719 Change-Id: I5a734c958c8277d1bf673c020dafb31263f142d6 Signed-off-by: Thomas Fleury <tfleury@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1673142 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

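  Taking the two related changes above together, the hook looks roughly like this;
  the callback signature comes from the commit text, the surrounding details are
  simplified assumptions:

      static u64 __nvgpu_sgl_phys(struct gk20a *g, struct scatterlist *sgl)
      {
              struct gk20a_platform *platform = gk20a_get_platform(dev_from_gk20a(g));
              u64 ipa = sg_phys(sgl);              /* intermediate physical address */

              if (platform->phys_addr)
                      return platform->phys_addr(g, ipa);  /* IPA -> PA under a hypervisor */

              return ipa;                          /* bare metal: IPA == PA */
      }
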
* gpu: nvgpu: gp10b: enhance priv error reporting (Seema Khowala, 2018-03-13)

  - Append 0x for info dumped in hex format
  - Dump subid and priv_level for ERROR_INFO
  - Decode ERROR_CODE for supported error types

  Bug 2072157 Bug 200392445 Bug 2055510 Bug 200379815 Change-Id: I78df8ca15421ee37631157082648e9b545367c95 Signed-off-by: Seema Khowala <seemaj@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1672292 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gp106: fix freq scale in debugfs nodes (Thomas Fleury, 2018-03-12)

  For better precision, the dramdiv4 (MCLK/4) counter is used to measure the MCLK
  frequency. But the scaling factor of 2 must be taken into account when reporting
  dramdiv2_rec_clk1. The issue was not affecting other counters, which use scale=1.

  Bug 200386061 Change-Id: Ib3891f3f2dd4206ac36aa3e3290810144f4aa339 Signed-off-by: Thomas Fleury <tfleury@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1654536 (cherry picked from commit 6a68207c90feab1caee737013ab7cd5bb3863fb6) Reviewed-on: https://git-master.nvidia.com/r/1657209 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

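  A small worked example of the scaling being fixed (the numbers are illustrative):
  the counter runs at MCLK/4, while the debugfs node reports dramdiv2_rec_clk1,
  i.e. MCLK/2, so the measured value must be multiplied by 2:

      #include <stdio.h>
      #include <stdint.h>

      int main(void)
      {
              uint64_t dramdiv4_hz = 433000000ULL;   /* measured: MCLK/4 */
              uint64_t scale = 2;                    /* dramdiv4 -> dramdiv2 */

              /* Reported dramdiv2_rec_clk1 = 866 MHz (MCLK/2), not 433 MHz. */
              printf("dramdiv2_rec_clk1 = %llu Hz\n",
                     (unsigned long long)(dramdiv4_hz * scale));
              return 0;
      }
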
* gpu: nvgpu: hal for syncpt_incr_per_release (seshendra Gadagottu, 2018-03-12)

  Create a HAL to indicate syncpt increments per release. Legacy chips use 2 syncpt
  increments per release; gv1xx onwards uses 1 syncpt increment per release.

  Bug 2066025 Change-Id: I5d6d0a5368ef561f8150fbb7120181f49f6e338b Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1669817 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

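  A sketch of what such a HAL hook might look like; function and ops names here are
  illustrative, only the per-chip values (2 vs. 1) come from the commit text:

      /* Legacy chips: each release posts two syncpoint increments. */
      static u32 gk20a_fifo_syncpt_incrs_per_release(void)
      {
              return 2;
      }

      /* gv11x and later: one increment per release. */
      static u32 gv11b_fifo_syncpt_incrs_per_release(void)
      {
              return 1;
      }

      /* Common code then asks the HAL instead of hard-coding the value, e.g. when
       * sizing increment commands or computing expected syncpoint thresholds. */
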
* gpu: nvgpu: gv11b: set 4byte payload size for sema (seshendra Gadagottu, 2018-03-12)

  The default semaphore payload size is 16 bytes. Set it to 4 bytes to avoid a
  double increment of the associated syncpoint on semaphore release. Also remove
  the extra 0-op function from the syncpoint increment command.

  Bug 2066025 Change-Id: Ia282cc5625827d356b5ba963adb7b1b3c703a931 Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1669714 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add refcounting for ctxsw disable/enable (Shashank Singh, 2018-03-12)

  ctxsw disable could be called recursively for the RM server. Suspend contexts
  disables ctxsw at the beginning, then calls TSG disable and preempt. If a preempt
  timeout happens, it goes to the recovery path, which will try to disable ctxsw
  again. More details in Bug 200331110.

  Jira VQRM-2982 Change-Id: I4659c842ae73ed59be51ae65b25366f24abcaf22 Signed-off-by: Shashank Singh <shashsingh@nvidia.com> Signed-off-by: Richard Zhao <rizhao@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1671716 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Sourab Gupta <sourabg@nvidia.com> Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: check for syncpt enable (seshendra Gadagottu, 2018-03-12)

  Check that syncpoints are enabled before querying for the syncpt RO map.
  Otherwise this results in a kernel crash when syncpt support is disabled.

  Change-Id: Iaa13d802ec66a368f2bedd2dd1061bae29b4aaa2 Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1671652 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: support per-channel wdt timeouts (Konsta Holtta, 2018-03-09)

  Replace the padding in nvgpu_channel_wdt_args with a timeout value in
  milliseconds, and add NVGPU_IOCTL_CHANNEL_WDT_FLAG_SET_TIMEOUT to signify the
  existence of this new field. When the new flag is included in the value of
  wdt_status, the field is used to set a per-channel timeout to override the
  per-GPU default.

  Add NVGPU_IOCTL_CHANNEL_WDT_FLAG_DISABLE_DUMP to disable the long debug dump when
  a timed-out channel gets recovered by the watchdog. Printing the dump to the
  serial console easily takes several seconds. (Note that there is
  NVGPU_TIMEOUT_FLAG_DISABLE_DUMP for the ctxsw timeout, separately, for
  NVGPU_IOCTL_CHANNEL_SET_TIMEOUT_EX as well.)

  The behaviour of NVGPU_IOCTL_CHANNEL_WDT is changed so that either
  NVGPU_IOCTL_CHANNEL_ENABLE_WDT or NVGPU_IOCTL_CHANNEL_DISABLE_WDT has to be set.
  The old behaviour was that other values were silently ignored.

  The usage of the global default debugfs-controlled ch_wdt_timeout_ms is changed
  so that its value takes effect only for newly opened channels instead of in
  realtime. Also, a zero value no longer means that the watchdog is disabled;
  there is a separate flag for that after all.

  gk20a_fifo_recover_tsg used to ignore the value of "verbose" when no engines were
  found. Correct this.

  Bug 1982826 Bug 1985845 Jira NVGPU-73 Change-Id: Iea6213a646a66cb7c631ed7d7c91d8c2ba8a92a4 Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1510898 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

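  Hypothetical userspace usage of the extended ioctl described above; the header
  path and the timeout_ms field name are assumptions, the flag names are the ones
  listed in the commit message:

      #include <stdio.h>
      #include <sys/ioctl.h>
      #include <linux/nvgpu.h>        /* assumed UAPI header location */

      static int set_channel_wdt(int channel_fd)
      {
              struct nvgpu_channel_wdt_args args = {
                      .wdt_status = NVGPU_IOCTL_CHANNEL_ENABLE_WDT |
                                    NVGPU_IOCTL_CHANNEL_WDT_FLAG_SET_TIMEOUT |
                                    NVGPU_IOCTL_CHANNEL_WDT_FLAG_DISABLE_DUMP,
                      .timeout_ms = 8000,  /* per-channel override of the global default */
              };

              if (ioctl(channel_fd, NVGPU_IOCTL_CHANNEL_WDT, &args) != 0) {
                      perror("NVGPU_IOCTL_CHANNEL_WDT");
                      return -1;
              }
              return 0;
      }
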
* gpu: nvgpu: don't reset semaphores to 0 on init (Konsta Holtta, 2018-03-09)

  With proper wrap-handling comparisons now supported, it's safe to not reset a
  kernel-managed semaphore to 0 when initializing it to be used by some channel;
  the value can be left unchanged, so that any pending waits on other channels for
  this sema can't get corrupted anymore. This makes semaphore values very similar
  to syncpoints, i.e., just monotonically increasing counters.

  Also clear the semaphore sea to values of 0xfffffff0 when allocating it. This way
  it takes 16 increments on each sema to wrap over the 32-bit integer range; such
  wrapping would eventually happen if the memory was initialized to zeros, so this
  way any bugs caused by wrapping not being taken into account would surface
  quickly after boot.

  Jira NVGPU-514 Change-Id: I93f9b1d32d020a4c23824f5856bc463b1895b99d Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1652087 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Alex Waterman <alexw@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Use gv11b_css_hw_set_handled_snapshots for GV11B (Martin Radev, 2018-03-08)

  The value of NV_PERF_PMASYS_MEM_BUMP is different for Volta, and
  NVGPU_IOCTL_CHANNEL_CYCLE_STATS_SNAPSHOT_CMD_FLUSH did not behave correctly on
  GV11B because of that. The patch adds an instance of css_hw_set_handled_snapshots
  for Volta to fix that.

  Bug 1960846 Bug 2068936 Change-Id: Ic057338d3b1b951a66d070267e69a90f136598b9 Signed-off-by: Martin Radev <mradev@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1668568 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Correctly plumb -EAGAIN from vidmem allocations (Alex Waterman, 2018-03-08)

  Userspace can and should retry vidmem allocations if there are pending clears
  still to be executed by the GPU. But this requires the -EAGAIN to properly
  propagate back to userspace.

  Bug 200378648 Change-Id: Ib930711270439843e043d65c2e87b60612a76239 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1669099 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: handle semaphore wraparound (Konsta Holtta, 2018-03-08)

  Compare GPU semaphores in the kernel in the same way the hardware does: a sema is
  released if its value is past the threshold, but by at most half of the u32
  range. This makes it possible to skip zeroing the sema values when semas are
  allocated, so that they are just monotonically increasing numbers, like
  syncpoints.

  Jira NVGPU-514 Change-Id: I3bae352fbacfe9690666765b9ecdeae6f0813ea1 Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1652086 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Alex Waterman <alexw@nvidia.com> Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

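  A standalone illustration of the comparison rule: the threshold counts as
  released when the current value has passed it by less than half of the u32 range,
  so values are free to wrap like syncpoints do:

      #include <stdint.h>
      #include <stdbool.h>

      static bool sema_is_released(uint32_t current, uint32_t threshold)
      {
              /* Unsigned subtraction handles the wraparound for us. */
              return (uint32_t)(current - threshold) < 0x80000000u;
      }

      /* Examples:
       *   sema_is_released(0x0000000d, 0xfffffff8) -> true  (wrapped past the threshold)
       *   sema_is_released(0x00000005, 0x00000009) -> false (not reached yet)            */
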
* gpu: nvgpu: boardobj update for gv10x branch (Mahantesh Kumbar, 2018-03-08)

  - Created ops for the below boardobj methods to support gp10x & gv10x branch
    boardobj changes, and defined methods for gv10x with postfix _v1 with the below
    names:
    boardobjgrp_pmucmd_construct_impl
    boardobjgrp_pmuset_impl
    boardobjgrp_pmugetstatus_impl
    is_boardobjgrp_pmucmd_id_valid
  - These ops are assigned based on PMU version to the respective chip.
  - Modified BOARDOBJGRP_PMU_CMD_GRP_SET_CONSTRUCT &
    BOARDOBJGRP_PMU_CMD_GRP_GET_STATUS_CONSTRUCT to support gp10x & gv10x branch
    changes
  - Updated struct boardobjgrp_pmu_cmd to include members needed for gv10x boardobj
    changes
  - Created "struct nv_pmu_rpc_struct_board_obj_grp_cmd" to execute
    BOARD_OBJ_GRP_CMD using RPC.
  - Defined method boardobjgrp_pmucmdsend_rpc() to send BOARD_OBJ_GRP_CMD to PMU.

  Change-Id: If2551bdda80e897e7b21d2966881586f3bbc7a9b Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1656511 GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: PMU super surface support (Mahantesh Kumbar, 2018-03-08)

  - Added ops "pmu.alloc_super_surface" to create memory space for the PMU super
    surface
  - Defined method nvgpu_pmu_sysmem_surface_alloc() to allocate PMU super surface
    memory & assigned it to "pmu.alloc_super_surface" for gv100
  - "pmu.alloc_super_surface" set to NULL for gp106
  - Memory space of size "struct nv_pmu_super_surface" is allocated during PMU sw
    init setup if "pmu.alloc_super_surface" is not NULL, & freed if an error occurs.
  - Added ops "pmu_ver.config_pmu_cmdline_args_super_surface" to describe PMU super
    surface details to the PMU ucode as part of the PMU command line args command
    if "pmu.alloc_super_surface" is not NULL.
  - Updated pmu_cmdline_args_v6 to include member
    "struct flcn_mem_desc_v0 super_surface"
  - Free allocated memory for the PMU super surface in nvgpu_remove_pmu_support()
  - Added "struct nvgpu_mem super_surface_buf" to the "nvgpu_pmu" struct
  - Created header file "gpmu_super_surf_if.h" for the PMU super surface interface;
    added "struct nv_pmu_super_surface" to hold super surface members along with
    rsvd[x] dummy space to keep member offsets in sync with the PMU super surface.

  Change-Id: I2b28912bf4d86a8cc72884e3b023f21c73fb3503 Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1656571 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Enable IO coherency on GV100 (Alex Waterman, 2018-03-07)

  This reverts commit 848af2ce6de6140323a6ffe3075bf8021e119434. This is a revert of
  a revert, etc, etc. It re-enables IO coherence again.

  JIRA EVLR-2333 Change-Id: Ibf97dce2f892e48a1200a06cd38a1c5d9603be04 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1669722 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv11b: use gv11b update_ctxsw_preemption_mode (Kirill Artamonov, 2018-03-07)

  Use gr_gv11b_update_ctxsw_preemption_mode instead of
  gr_gp10b_update_ctxsw_preemption_mode.

  Bug 1888344 Signed-off-by: Kirill Artamonov <kartamonov@nvidia.com> Change-Id: I2420b64bac6393b1bb59a84235850a6cbc68be93 Reviewed-on: https://git-master.nvidia.com/r/1665663 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: BUG_ON for sema increment, not value (Konsta Holtta, 2018-03-07)

  When adding a sema wait to a pushbuf, verify that the sema threshold has been
  incremented from the original value by reading the incremented field (which is
  set to nonzero by nvgpu_semaphore_incr()) instead of the value. The value could
  be 0 even after an increment if new semas aren't reset to 0.

  Jira NVGPU-514 Change-Id: I295451fbc7eb9e597aea12d73074e99f74a6a899 Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1658100 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

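  The assertion change, sketched; field names are taken from the commit text and
  the surrounding code is simplified:

      /* Before (problematic): asserting on the numeric value breaks once semas are
       * no longer reset to 0, because a legitimately incremented threshold can
       * still read back as 0 after wrapping. */
      /* BUG_ON(s->value == 0); */

      /* After: assert on the bookkeeping flag that nvgpu_semaphore_incr() sets,
       * which is independent of the numeric value. */
      BUG_ON(!s->incremented);
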