path: root/drivers/gpu/nvgpu/gk20a/gr_gk20a.c
* gpu: nvgpu: gv100: consider floorswept FBPA for getting unicast list (Deepak Nibade, 2018-04-16)

  In gr_gv11b/gk20a_create_priv_addr_table() we do not consider floorswept
  FBPAs and just calculate the unicast list assuming all FBPAs are present.
  This generates an incorrect list of unicast addresses.

  Fix this by introducing a new HAL op, ops.gr.split_fbpa_broadcast_addr.
  Set gr_gv100_get_active_fpba_mask() for GV100 and
  gr_gk20a_split_fbpa_broadcast_addr() for the rest of the chips.
  gr_gv100_get_active_fpba_mask() first reads the active FBPA mask and
  generates the unicast list only for the active FBPAs.

  Bug 200398811
  Jira NVGPU-556
  Change-Id: Idd11d6e7ad7b6836525fe41509aeccf52038321f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1694444
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
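  A minimal sketch of what such a split might look like; the function name,
  argument list and stride handling here are illustrative assumptions, not
  the actual nvgpu implementation:

      /* Sketch: expand one FBPA broadcast address into unicast addresses,
       * skipping floorswept FBPAs.
       */
      static void split_fbpa_broadcast_addr_sketch(u32 broadcast_addr,
                                                   u32 active_fbpa_mask,
                                                   u32 num_fbpas,
                                                   u32 fbpa_stride,
                                                   u32 *priv_addr_table,
                                                   u32 *num_entries)
      {
              u32 fbpa, count = 0;

              for (fbpa = 0; fbpa < num_fbpas; fbpa++) {
                      /* only emit entries for FBPAs that are not floorswept */
                      if (active_fbpa_mask & (1U << fbpa))
                              priv_addr_table[count++] =
                                      broadcast_addr + fbpa * fbpa_stride;
              }

              *num_entries = count;
      }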
* gpu: nvgpu: fix gpc/tpc index for SMPC broadcast conversion (Deepak Nibade, 2018-04-10)

  In gv11b_gr_egpc_etpc_priv_addr_table() we call
  gv11b_gr_update_priv_addr_table_smpc() to convert an SMPC broadcast
  address into a list of unicast addresses. But before calling
  gv11b_gr_update_priv_addr_table_smpc() we sometimes incorrectly set
  gpc_num/tpc_num to zero, which leads to an incorrect list of unicast
  addresses.

  Remove this incorrect initialization of gpc_num/tpc_num. Also update
  gv11b_gr_egpc_etpc_priv_addr_table() to receive tpc_num along with
  gpc_num.

  Bug 2099717
  Jira NVGPU-580
  Change-Id: Idd4e5f78dbe6ca1800efae93c66355d06417d1f2
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1691373
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use HAL for chiplet offset (Deepak Nibade, 2018-04-10)

  We currently use hard-coded values of NV_PERF_PMMGPC_CHIPLET_OFFSET and
  NV_PMM_FBP_STRIDE, which are incorrect for Volta.

  Add a new GR HAL get_pmm_per_chiplet_offset() to get the correct value
  per chip. Set gr_gm20b_get_pmm_per_chiplet_offset() for older chips and
  gr_gv11b_get_pmm_per_chiplet_offset() for Volta. Use the HAL instead of
  the hard-coded values wherever required.

  Bug 200398811
  Jira NVGPU-556
  Change-Id: I947e7febd4f84fae740a1bc74f99d72e1df523aa
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1690028
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
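  The pattern, sketched with placeholder offsets and names (the real
  per-chip values come from the generated hw headers, and the exact HAL
  signature may differ):

      /* per-chip ops replacing the hard-coded chiplet offset constant */
      static u32 get_pmm_per_chiplet_offset_gm20b_sketch(void)
      {
              return 0x1000;  /* placeholder pre-Volta chiplet stride */
      }

      static u32 get_pmm_per_chiplet_offset_gv11b_sketch(void)
      {
              return 0x0800;  /* placeholder Volta chiplet stride */
      }

      /* callers compute per-chiplet register addresses via the op */
      static u32 pmm_gpc_reg_sketch(u32 base, u32 gpc,
                                    u32 (*get_offset)(void))
      {
              return base + gpc * get_offset();
      }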
* gpu: nvgpu: add support to get unicast addresses on volta (Deepak Nibade, 2018-04-10)

  We have new broadcast registers on Volta, and we need to generate the
  correct unicast addresses for them so that we can write those registers
  to the context image.

  Add a new GR HAL create_priv_addr_table() to do this conversion. Set
  gr_gk20a_create_priv_addr_table() for older chips and
  gr_gv11b_create_priv_addr_table() for Volta.
  gr_gv11b_create_priv_addr_table() will use the broadcast flags and then
  generate the appropriate list of unicast registers for each broadcast
  register.

  Bug 200398811
  Jira NVGPU-556
  Change-Id: Id53a9e56106d200fe560ffc93394cc0e976f455f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1690027
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add broadcast address decode support for volta (Deepak Nibade, 2018-04-10)

  With Volta we have more broadcast registers than on previous chips, and
  we do not decode them right now in gr_gk20a_decode_priv_addr().

  Add a new GR HAL decode_priv_addr() and set gr_gk20a_decode_priv_addr()
  for all previous chips. Add and use gr_gv11b_decode_priv_addr() for
  Volta. gr_gv11b_decode_priv_addr() will decode all the broadcast
  registers and set the broadcast flags appropriately.

  Define the following new broadcast types:
  PRI_BROADCAST_FLAGS_PMMGPC
  PRI_BROADCAST_FLAGS_PMM_GPCS
  PRI_BROADCAST_FLAGS_PMM_GPCGS_GPCTPCA
  PRI_BROADCAST_FLAGS_PMM_GPCGS_GPCTPCB
  PRI_BROADCAST_FLAGS_PMMFBP
  PRI_BROADCAST_FLAGS_PMM_FBPS
  PRI_BROADCAST_FLAGS_PMM_FBPGS_LTC
  PRI_BROADCAST_FLAGS_PMM_FBPGS_ROP

  Bug 200398811
  Jira NVGPU-556
  Change-Id: Ic673b357a75b6af3d24a4c16bb5b6bc15974d5b7
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1690026
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
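  A sketch of the decode shape, with the range-check predicates assumed for
  illustration (the real decode compares the address against the generated
  hw register ranges):

      static bool pri_is_pmm_gpcs_addr_sketch(u32 addr);  /* assumed helpers */
      static bool pri_is_pmm_fbps_addr_sketch(u32 addr);

      /* Sketch: map a broadcast priv address to broadcast flags. */
      static int decode_priv_addr_sketch(u32 addr, u32 *broadcast_flags)
      {
              *broadcast_flags = 0;

              if (pri_is_pmm_gpcs_addr_sketch(addr)) {
                      *broadcast_flags |= PRI_BROADCAST_FLAGS_PMM_GPCS;
                      return 0;
              }
              if (pri_is_pmm_fbps_addr_sketch(addr)) {
                      *broadcast_flags |= PRI_BROADCAST_FLAGS_PMM_FBPS;
                      return 0;
              }
              /* ... remaining PMM GPC/FBP broadcast ranges handled alike ... */
              return 0;
      }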
* gpu: nvgpu: gk20a: Use ENOSPC instead of ENOMEM (mpurohit, 2018-04-06)

  Modify gr_gk20a_add_zbc() to return -ENOSPC instead of -ENOMEM when all
  slots are already used. ENOMEM is meant for memory allocation failures.

  This is required for porting the nvgpu_gpu_zbc test case to use
  libnvrm_gpu APIs.

  Bug 1967537
  JIRA VQRM-3348
  Change-Id: Ib7bcb6ba94d2ca168bcad517a8a7260fbf278a91
  Signed-off-by: Mahesh Purohit <mpurohit@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1688302
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
  Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
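  The shape of the change, as an illustrative sketch rather than the actual
  gr_gk20a_add_zbc() body:

      /* Sketch: a full ZBC table is "no space left", not an allocation
       * failure.
       */
      static int add_zbc_entry_sketch(unsigned int used_slots,
                                      unsigned int max_slots)
      {
              if (used_slots >= max_slots)
                      return -ENOSPC;  /* previously returned -ENOMEM */

              /* ... program the new color/depth entry into the table ... */
              return 0;
      }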
* gpu: nvgpu: handle misaligned_addr SM exception (Deepak Nibade, 2018-04-04)

  We currently do not handle the misaligned_addr SM exception explicitly,
  and hence we incorrectly initiate CILP on this exception.

  Handle this exception explicitly with this sequence:
  - set the error notifier first
  - clear the interrupt
  - return an error from gr_gv11b_handle_warp_esr_error_misaligned_addr()
    so that RC recovery is triggered by gk20a_gr_isr()

  Ensure that the error value is propagated back to gk20a_gr_isr()
  correctly. Use nvgpu_set_error_notifier_if_empty() to set the error
  notifier, since this prevents overwriting the error notifier value in
  case gk20a_gr_isr() also tries to write one.

  Bug 200388475
  Jira NVGPU-554
  Change-Id: I84c4d202a8068e738567ccd344e05d9d5f6ad2f0
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1686781
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: initialize ctxsw state for golden context creation (seshendra Gadagottu, 2018-04-03)

  If golden context creation happens before any GPU railgate, then channel
  creation is always fine. If a GPU railgate happens after GPU finalize
  poweron but before golden context creation, then golden context creation
  fails during the first channel creation with a watchdog timeout from
  ctxsw, because of invalid ctxsw state.

  To fix this issue, if the golden context is not yet created, always query
  the ctxsw image sizes during finalize poweron; this puts the ctxsw
  hardware in the correct state before golden context creation.

  Bug 2051863
  Change-Id: I81d221100a099b12bad3adc2d252de4621c335a5
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1682265
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
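  A minimal sketch of the intended control flow (helper and parameter names
  are assumptions for illustration):

      static int query_ctxsw_image_sizes_sketch(void);  /* assumed helper */

      /* Sketch: until the golden image exists, re-query the ctxsw image
       * sizes on every finalize-poweron so FECS is in a valid state when
       * the first channel triggers golden context creation.
       */
      static int gr_finalize_poweron_ctxsw_sketch(bool golden_image_created)
      {
              if (!golden_image_created)
                      return query_ctxsw_image_sizes_sketch();
              return 0;
      }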
* gpu: nvgpu: fix address table for GPCS_TPC6 broadcast conversion (Deepak Nibade, 2018-04-03)

  In gr_gk20a_create_priv_addr_table() and
  gv11b_gr_egpc_etpc_priv_addr_table() we create a table of unicast
  addresses from broadcast addresses.

  For GPC broadcast addresses like NV_PGRAPH_PRI_EGPCS_ETPC6_SM_*, we
  generate the table assuming there are 7 TPCs in all the GPCs. But this is
  incorrect in some cases, such as GV100 where GPC0/1 have only 6 TPCs, and
  hence we end up generating registers which do not exist.

  Fix this by explicitly checking the number of TPCs and ensuring that each
  generated address belongs to a valid TPC.

  Bug 200400376
  Jira NVGPU-564
  Change-Id: I65d7d6cd7f0bf16171eb54ed71f1f3840ade3495
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1686806
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
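  The guard, sketched with assumed stride parameters (not the actual table
  builder):

      /* Sketch: expand a GPCS/TPCS broadcast address, bounding the TPC
       * loop by the per-GPC TPC count rather than the chip-wide maximum.
       */
      static void expand_gpcs_tpcs_broadcast_sketch(u32 base,
                                                    u32 gpc_stride,
                                                    u32 tpc_stride,
                                                    const u32 *gpc_tpc_count,
                                                    u32 gpc_count,
                                                    u32 *table, u32 *entries)
      {
              u32 gpc, tpc, n = 0;

              for (gpc = 0; gpc < gpc_count; gpc++) {
                      /* per-GPC limit: no entries for TPCs that don't exist */
                      for (tpc = 0; tpc < gpc_tpc_count[gpc]; tpc++)
                              table[n++] = base + gpc * gpc_stride +
                                           tpc * tpc_stride;
              }

              *entries = n;
      }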
* gpu: nvgpu: add gops.fifo.set_error_notifier (Richard Zhao, 2018-03-29)

  RM Server overrides it for handling stall interrupts.

  Jira VQRM-3058
  Change-Id: I8b14f073e952d19c808cb693958626b8d8aee8ca
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1679709
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use gpc_tpc_count[gpc] for number of tpc in a gpc (Seema Khowala, 2018-03-29)

  Using tpc_count instead of gpc_tpc_count indexed by gpc will result in a
  pbus error with decode-error or client-floorswept error codes. tpc_count
  represents the total number of TPCs, while gpc_tpc_count[gpc] represents
  the number of TPCs in the indexed GPC.

  Bug 1998067
  Change-Id: I9adfb98a6c3e209cbb02a8cd5090f6b6adc1ec4b
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1682469
  Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
  Tested-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv100: fix PMA list alignment in ctxsw buffer (Deepak Nibade, 2018-03-21)

  The GV100 ucode has changed so that it expects the
  LIST_nv_perf_pma_ctx_reg list in the ctxsw buffer to be 256-byte aligned,
  but the same change is not applied to the ucodes of other chips.

  Add a new HAL (*add_ctxsw_reg_perf_pma) to configure the PMA register
  list, and define a common HAL gr_gk20a_add_ctxsw_reg_perf_pma() for all
  chips except GV100. Define a separate HAL for GV100,
  gr_gv100_add_ctxsw_reg_perf_pma(), and fix the required alignment in this
  function.

  Bug 1998067
  Change-Id: Ie172fe90e2cdbac2509f2ece953cd8552e66fc56
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1676655
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
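  The alignment itself is the interesting part; a sketch of how the GV100
  variant might round the running ctxsw-buffer offset (name assumed):

      /* Sketch: GV100 expects the perf PMA register list to start on a
       * 256-byte boundary inside the ctxsw buffer.
       */
      static u32 align_perf_pma_offset_gv100_sketch(u32 offset)
      {
              return (offset + 255U) & ~255U;
      }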
* gpu: nvgpu: gv100: fix num_fbpas while adding ctxsw buffer entries (Deepak Nibade, 2018-03-21)

  For LIST_nv_pm_fbpa_ctx_regs we currently call
  add_ctxsw_buffer_map_entries_subunits() to add registers corresponding to
  all the FBPAs. But while configuring the total number of registers, we do
  not consider floorswept FBPAs, and that causes misalignment in subsequent
  lists for GV100.

  Fix this by reading the disabled/floorswept FBPAs from the fuse and
  considering only those FBPAs which are active on GV100.

  Add a new HAL (*add_ctxsw_reg_pm_fbpa) to support this setting and define
  a common HAL gr_gk20a_add_ctxsw_reg_pm_fbpa() for all chips except GV100.
  Define a GV100-specific gr_gv100_add_ctxsw_reg_pm_fbpa() with the above
  implementation to account for floorsweeping.

  Bug 1998067
  Change-Id: Id560551bb0b8142791c117b6d27864566c90b489
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1676654
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
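  The counting step, as an illustrative sketch (the fuse read is assumed to
  yield an enable mask of FBPAs):

      /* Sketch: count only active (non-floorswept) FBPAs when sizing the
       * per-FBPA register list in the ctxsw buffer.
       */
      static u32 active_fbpa_count_sketch(u32 fbpa_en_mask, u32 max_fbpas)
      {
              u32 i, count = 0;

              for (i = 0; i < max_fbpas; i++) {
                      if (fbpa_en_mask & (1U << i))
                              count++;
              }

              return count;
      }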
* gpu: nvgpu: add gpu_va to update_hwpm_ctxsw_mode() parameters (Aparna Das, 2018-03-16)

  It'll allow the function to use a fixed mapping.

  Jira VQRM-2982
  Change-Id: I98159c5b199ce1854b1b40704392237cadb71ef2
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1660225
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Tested-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-by: Nirav Patel <nipatel@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add fault_ch to record_sm_error_state (Shashank Singh, 2018-03-13)

  fault_ch is needed by rm-server to send the notification to guest VM.
  rm-server is going to use gr sources from linux.

  Jira VQRM-2982
  Change-Id: Ifb6e8a9630a471d07b89ffaa7f2ceb309220fd21
  Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1661665
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add refcounting for ctxsw disable/enable (Shashank Singh, 2018-03-12)

  ctxsw disable could be called recursively for the RM server. Suspend
  contexts disables ctxsw at the beginning, then calls tsg disable and
  preempt. If a preempt timeout happens, it goes down the recovery path,
  which will try to disable ctxsw again. More details in Bug 200331110.

  Jira VQRM-2982
  Change-Id: I4659c842ae73ed59be51ae65b25366f24abcaf22
  Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1671716
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
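  A minimal sketch of the refcounting idea (locking elided; helper names
  are assumptions):

      static int fecs_stop_ctxsw_sketch(void);   /* assumed helpers */
      static int fecs_start_ctxsw_sketch(void);

      /* Sketch: only the outermost disable stops ctxsw, and only the
       * matching outermost enable restarts it, so a nested disable from
       * the recovery path is harmless.
       */
      static int gr_disable_ctxsw_sketch(int *disable_count)
      {
              if ((*disable_count)++ == 0)
                      return fecs_stop_ctxsw_sketch();
              return 0;
      }

      static int gr_enable_ctxsw_sketch(int *disable_count)
      {
              if (--(*disable_count) == 0)
                      return fecs_start_ctxsw_sketch();
              return 0;
      }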
* gpu: nvgpu: Enable IO coherency on GV100 (Alex Waterman, 2018-03-07)

  This reverts commit 848af2ce6de6140323a6ffe3075bf8021e119434.

  This is a revert of a revert, etc, etc. It re-enables IO coherence again.

  JIRA EVLR-2333
  Change-Id: Ibf97dce2f892e48a1200a06cd38a1c5d9603be04
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1669722
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add hal op to handle semaphore pending (Aparna Das, 2018-03-06)

  The vserver variant for gr handle semaphore pending needs different
  functionality to send interrupt to VM. Add HAL operation to allow
  overriding vserver usecase.

  Jira VQRM-2982
  Change-Id: I5fee5a491c6e54344f9da477eaf5881c50335bbc
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1658298
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add debugger.post_events HAL op (Aparna Das, 2018-03-06)

  RM Server will need to set specific HAL op and notify vgpu client.

  Jira VQRM-2982
  Change-Id: I679565831635ff3fadf0bdc1af5fd7a8679b6fdd
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1660226
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: make gr functions that are used by vsrv global (Richard Zhao, 2018-03-06)

  Fixed vsrv link errors for gr unification.

  Jira VQRM-2982
  Change-Id: Icd46792191f1a9aaefbf86d2f3c0b4d5bce2384e
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1664706
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add hal op to handle post event id (Aparna Das, 2018-03-06)

  The vserver variant for gr post event id needs different functionality
  to send interrupt to VM. Add HAL operation to allow overriding vserver
  usecase.

  Jira VQRM-2982
  Change-Id: I915d089ef751023968c1e8ab181c21afeec997a5
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1658382
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add hal op to handle notify pending (Aparna Das, 2018-03-06)

  The vserver variant for gr handle notify pending needs different
  functionality to send interrupt to VM. Add HAL operation to allow
  overriding vserver usecase.

  Jira VQRM-2982
  Change-Id: I4cb88d4d769a5d5cb98a4ee6ac3fbb74245cb5f2
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1658255
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add hal op for gr set error notifier (Aparna Das, 2018-03-06)

  The vserver variant for gr set error notifier needs different
  functionality to send interrupt to VM. Add HAL operation to allow
  overriding vserver usecase.

  Jira VQRM-2982
  Change-Id: Ia445a27112bb6c5587dbb81100a9dafe5875b338
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1657830
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: modify logic for fecs arb idle check (seshendra Gadagottu, 2018-03-06)

  Check these conditions for fecs arbiter idle:
  1. Wait for gr_fecs_arb_ctx_cmd to idle.
  2. Wait for gr_fecs_ctxsw_status_1:arb_busy to idle.

  Waiting only for condition 1 guarantees dispatching of the command but
  not completion of the work.

  Bug 2056266
  Change-Id: I3fcbdb9cda91fee0ff345a24bf23ba7dabf268d0
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1667550
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
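  The combined check, as a small sketch (the per-register polling helpers
  are assumed):

      static bool fecs_arb_ctx_cmd_idle_sketch(void);          /* assumed */
      static bool fecs_ctxsw_status_1_arb_idle_sketch(void);   /* assumed */

      /* Sketch: the arbiter is idle only when the command register has
       * drained (dispatch done) and arb_busy has cleared (work done).
       */
      static bool fecs_arb_idle_sketch(void)
      {
              return fecs_arb_ctx_cmd_idle_sketch() &&
                     fecs_ctxsw_status_1_arb_idle_sketch();
      }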
* gpu: nvgpu: clean up memory leaks in gr init (Seema Khowala, 2018-03-06)

  This resolves a memory leak after modifying the tpc_fs_mask sysfs node,
  which sets gr.sw_ready to false and forces gr to re-initialize:

  echo 1 > /sys/devices/gpu.0/force_idle
  echo 5 > /sys/devices/gpu.0/tpc_fs_mask
  echo 0 > /sys/devices/gpu.0/force_idle

  Bug 200393029
  Change-Id: I76299f53fc87823071c672ec682c3eb51f72f513
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1666018
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* Revert "Revert "Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working"""Timo Alho2018-03-05
| | | | | | | | | | | This reverts commit 89fbf39a05483917c0a9f3453fd94c724bc37375. Bug 2075315 Change-Id: Id34a0376be5160b164931926ec600f77edf69667 Signed-off-by: Timo Alho <talho@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1668487 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
* Revert "Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working""Alex Waterman2018-03-03
| | | | | | | | | | | | | | | This reverts commit 5a35a95654d561fce09a3b9abf6b82bb7a29d74b. JIRA EVLR-2333 Change-Id: I923c32496c343d39d34f6d406c38a9f6ce7dc6e0 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1667167 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: do not alloc ctx buffers if already allocated (Seema Khowala, 2018-03-01)

  Bug 200393029
  Change-Id: Ic2946958e34bcb9247179fcf2e8735c822155cce
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1665338
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Rajkumar Kasirajan <rkasirajan@nvidia.com>
  Reviewed-by: Shreshtha Sahu <ssahu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working"Alex Waterman2018-02-28
| | | | | | | | | | | | | | Also revert other changes related to IO coherence. This may be the culprit in a recent dev-kernel lockdown. Bug 2070609 Change-Id: Ida178aef161fadbc6db9512521ea51c702c1564b Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1665914 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Srikar Srimath Tirumala <srikars@nvidia.com>
* gpu: nvgpu: Use coherent aperture flag (Alex Waterman, 2018-02-27)

  When using a coherent DMA API we must make sure to program any aperture
  fields with the coherent aperture setting. To do this the
  nvgpu_aperture_mask() function was modified to take a third aperture mask
  argument, a coherent setting, so that code can use this function to
  generate coherent aperture settings.

  The aperture choice is somewhat tricky: the default version of this
  function uses the state of the DMA API to determine which aperture to use
  for SYSMEM, either coherent or non-coherent, internally. Thus a kernel
  user need only specify the normal nvgpu_mem struct and the correct mask
  should be chosen. Due to many uses of nvgpu_mem structs not created
  directly from the DMA API wrapper, it's easier to translate SYSMEM to
  SYSMEM_COH after creation.

  However, the GMMU mapping code will encounter buffers from userspace with
  different coherency attributes than the DMA API. Thus
  __nvgpu_aperture_mask() really respects the aperture setting passed in,
  regardless of the DMA API state. This aperture setting is pulled from
  NVGPU_VM_MAP_IO_COHERENT, since this is either passed in from userspace
  or set by the kernel when using coherent DMA. The aperture field in attrs
  is upgraded to coherent if this flag is set.

  This change also adds a coherent sysmem mask everywhere that it can.
  There are a couple of places that do not have a coherent register field
  defined yet. These need to eventually be defined and added.

  Lastly, the aperture mask code has been moved from the Linux vm.c code to
  the general vm.c code, since this function has no Linux dependencies.

  Note: depends on https://git-master.nvidia.com/r/1664536 for new register
  fields.

  JIRA EVLR-2333
  Change-Id: I4b347911ecb7c511738563fe6c34d0e6aa380d71
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1655220
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
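  The selection logic, reduced to a sketch (enum and parameter names are
  illustrative, not the driver's actual types):

      enum aperture_sketch {
              APERTURE_SYSMEM_SKETCH,
              APERTURE_SYSMEM_COH_SKETCH,
              APERTURE_VIDMEM_SKETCH,
      };

      /* Sketch: pick the register field value for a buffer's aperture,
       * with a separate mask for coherent sysmem.
       */
      static u32 aperture_mask_sketch(enum aperture_sketch ap,
                                      u32 sysmem_mask,
                                      u32 sysmem_coh_mask,
                                      u32 vidmem_mask)
      {
              switch (ap) {
              case APERTURE_SYSMEM_SKETCH:
                      return sysmem_mask;
              case APERTURE_SYSMEM_COH_SKETCH:
                      /* coherent DMA or NVGPU_VM_MAP_IO_COHERENT mappings */
                      return sysmem_coh_mask;
              case APERTURE_VIDMEM_SKETCH:
                      return vidmem_mask;
              }
              return 0;
      }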
* gpu: nvgpu: FIFO sched fixes (Alex Waterman, 2018-02-27)

  Miscellaneous fixes for the sched code:

  1. Make sure get_addr() on an SGL respects the use phys flag since the
     runlist needs physical addresses when NVLINK is in use.
  2. Ensure the runlist is contiguous. Since the runlist memory is not
     virtually addressed the buffer must be physically contiguous.
  3. Use all 64 bits of the runlist address in the runlist base addr
     register (and related fields).

  JIRA EVLR-2333
  Change-Id: Id4fd5ba4665d3e35ff1d6ca78dea6b58894a9a9a
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1654667
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
  Tested-by: Thomas Fleury <tfleury@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Cleanup unused variables (Alex Waterman, 2018-02-24)

  There are numerous places where variables are assigned to but then never
  used. This patch cleans up all these unused variables and in some cases
  simplifies surrounding logic. Also delete unused header includes and add
  necessary header includes.

  JIRA NVGPU-525
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Change-Id: Ice9ec2a0e97f262d0dcfebe22f83208dbea569d9
  Reviewed-on: https://git-master.nvidia.com/r/1662548
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Don't alloc comptags on GV100 (Alex Waterman, 2018-02-23)

  We don't have compression enabled on GV100 so it does not make sense to
  allocate a huge CBC for this device.

  Change-Id: I0ff908571f28c2ba6f439b0989398bf68dce16f9
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1655279
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Use preallocated VPR buffer (Terje Bergstrom, 2018-02-15)

  To prevent deadlock while allocating VPR in nvgpu, allocate all the
  needed VPR memory at probe time and use an internal allocator to hand out
  space for VPR buffers.

  Change-Id: I584b9a0f746d5d1dec021cdfbd6f26b4b92e4412
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1655324
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: enable more gr exceptions (Seema Khowala, 2018-01-31)

  - pd, scc, ds, ssync, mme and sked exceptions are enabled. This will be
    useful for debugging.
  - Handle the enabled interrupts.
  - Add a gr op to handle ssync hww. For legacy chips, the ssync hww_esr
    register is gpcs_ppcs_ssync_hww_esr. Since ssync hww is not enabled on
    legacy chips, ssync hww exception handling is added for Volta only.

  Change-Id: I63ba2eb51fa82e74832df26ee4cf3546458e5669
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1644751
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add fecs_host_int_enable hal (Seema Khowala, 2018-01-31)

  This will be used to enable fecs interrupts per chip.

  Change-Id: Id99412ca1a9c4caad999c3458b0e9701515db4b9
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1642554
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add speculative load barrier (ctrl IOCTLs) (Alex Waterman, 2018-01-25)

  Data can be speculatively loaded from memory and stay in the cache even
  when the bounds check fails. This can lead to unintended information
  disclosure via side-channel analysis. To mitigate this problem, insert a
  speculation barrier.

  Bug 2039126
  CVE-2017-5753
  Change-Id: Ib6c4b2f99b85af3119cce3882fe35ab47509c76f
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1640500
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
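  The mitigation pattern the change applies, sketched with an assumed
  barrier helper name:

      static void speculation_barrier_sketch(void);  /* assumed helper */

      /* Sketch: barrier between the bounds check on a user-controlled
       * index and the use of that index, so a speculative out-of-bounds
       * load cannot leave secrets in the cache.
       */
      static int read_table_entry_sketch(const u32 *table, u32 nr_entries,
                                         u32 idx, u32 *out)
      {
              if (idx >= nr_entries)
                      return -EINVAL;

              speculation_barrier_sketch();
              *out = table[idx];
              return 0;
      }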
* gpu: nvgpu: Report mailbox id and value on ucode timeout (Terje Bergstrom, 2018-01-23)

  When we detect a timeout waiting for a ctxsw ucode method to complete, we
  print an error. The error does not detail the event we are waiting for,
  which makes debugging difficult. Add the missing mailbox id and value to
  the error.

  Bug 2049965
  Change-Id: I45204a2d6f1f39919a0133b1e0867213e1a5b671
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1643709
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-by: Seema Khowala <seemaj@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Make graphics context property of TSG (Terje Bergstrom, 2018-01-17)

  Move graphics context ownership to TSG instead of channel. Combine
  channel_ctx_gk20a and gr_ctx_desc into one structure, because the split
  between them was arbitrary. Move the context header to be a property of
  the channel.

  Bug 1842197
  Change-Id: I410e3262f80b318d8528bcbec270b63a2d8d2ff9
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1639532
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Combine gk20a and gp10b free_gr_ctx (Terje Bergstrom, 2018-01-12)

  The gp10b version of free_gr_ctx was created to keep gp10b source code
  changes out of the mainline. gp10b was merged back to mainline a while
  ago, so this separation is no longer needed. Merge the two variants.

  Change-Id: I954b3b677e98e4248f95641ea22e0def4e583c66
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1635127
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: get virtual SMs mapping (Thomas Fleury, 2018-01-10)

  On gv11b we can have multiple SMs per TPC. Add sm_per_tpc in vgpu
  constants to properly dimension the virtual SM to TPC/GPC mapping in the
  virtualization case. Use TEGRA_VGPU_CMD_GET_SMS_MAPPING to query the
  current mapping.

  Bug 2039676
  Change-Id: I817be18f9a28cfb9bd8af207d7d6341a2ec3994b
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1631203
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Do not disable ELPG when committing buffers (Terje Bergstrom, 2018-01-04)

  Committing buffer addresses only writes to the memory. There's no need to
  disable ELPG for the duration, so drop the ELPG protection.

  Change-Id: I8d8d08506387197e4737e0311df4a20085496056
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1631149
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove gk20a specific optimization (Terje Bergstrom, 2018-01-04)

  Remove compute optimization specific to gk20a. We do not support gk20a
  anymore.

  Change-Id: Ibd548eee8d891a667f28a451d586fcfaac7f026a
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1631144
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove support for channel events (Terje Bergstrom, 2017-12-28)

  Remove support for events for bare channels. All users have already moved
  to TSGs and TSG events.

  Bug 1842197
  Change-Id: Ib3ff68134ad9515ee761d0f0e19a3150a0b744ab
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1618906
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: implement ecc scrubber (Deepak Goyal, 2017-12-14)

  Check the availability of ECC units by checking the relevant ECC fuse and
  fuse overrides. During GPU boot, initialize the ECC units by scrubbing
  each available ECC unit. ECC initialization should be done before gr
  initialization.

  The following ECC units are scrubbed:
  - SM LRF
  - SM L1 DATA
  - SM L1 TAG
  - SM CBU
  - SM ICACHE

  Bug 200339497
  Change-Id: I54bf8cc1fce639a9993bf80984dafc28dca0dba3
  Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1612734
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
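  A rough sketch of the boot-time scrub loop (unit identifiers taken from
  the list above; the presence check and per-unit scrub are assumed
  helpers):

      enum ecc_unit_sketch {
              ECC_UNIT_SM_LRF,
              ECC_UNIT_SM_L1_DATA,
              ECC_UNIT_SM_L1_TAG,
              ECC_UNIT_SM_CBU,
              ECC_UNIT_SM_ICACHE,
              ECC_UNIT_COUNT,
      };

      static bool ecc_unit_present_sketch(enum ecc_unit_sketch unit);
      static int ecc_scrub_unit_sketch(enum ecc_unit_sketch unit);

      /* Sketch: scrub each ECC unit reported present by the fuses before
       * gr init runs.
       */
      static int ecc_scrub_all_sketch(void)
      {
              int unit, err;

              for (unit = 0; unit < ECC_UNIT_COUNT; unit++) {
                      if (!ecc_unit_present_sketch(unit))
                              continue;
                      err = ecc_scrub_unit_sketch(unit);
                      if (err)
                              return err;
              }
              return 0;
      }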
* gpu: nvgpu: use nvgpu API to check for allocated memory (Deepak Nibade, 2017-12-08)

  In __gr_gk20a_exec_ctx_ops() we directly access linux specific pages to
  check if memory is allocated or not. Since we need to remove this linux
  specific dependency from common code, use common API nvgpu_mem_is_valid()
  to check if memory is allocated or not.

  Jira NVGPU-448
  Change-Id: Iad62482ad1c0dfad3b96c6c125c2641bbe6ea596
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1612445
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: fix unsigned int declaration (Deepak Nibade, 2017-12-08)

  In gr_gk20a_init_access_map() we declare num_entries as "unsigned int",
  but this variable is implicitly cast to "int" when calling subsequent
  functions. Hence explicitly declare it as type "int".

  Also declare the variable "w" as "int" too, since we use it to compare
  against num_entries.

  Jira NVGPU-446
  Change-Id: I289da6951db0a9ed6b8d6bcb3ee4f6071a4ddaf0
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1612444
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use nvgpu list APIs instead of linux APIs (Deepak Nibade, 2017-12-08)

  Use the nvgpu specific list API nvgpu_list_for_each_entry() instead of
  calling the Linux specific list API list_for_each_entry().

  Jira NVGPU-444
  Change-Id: I3c1fd495ed9e8bebab1f23b6769944373b46059b
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1612442
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: remove linux specific include from gr_*.c files (Deepak Nibade, 2017-11-30)

  Remove linux specific #include "common/linux/os_linux.h" from common
  source files gr_gk20a.c/gr_gm20b.c/gr_gp10b.c.

  Remove use of ZERO_OR_NULL_PTR() and simply check if the pointer is NULL
  or not.

  Jira NVGPU-405
  Change-Id: I663fe298cc720f0b0e22beaa05697b18b375a204
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1607233
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: gfxp wfi timeout (seshendra Gadagottu, 2017-11-29)

  For gv11b, configure the gfx preemption wfi timeout in usec. Set the
  timeout unit to usec in gr_gv11b_init_preemption_state. Use a default
  timeout of 1 msec; this timeout value can be modified through the sysfs
  node:

  /sys/devices/gpu.0/gfxp_wfi_timeout_count

  For gp10b: gfxp_wfi_timeout_count is in sysclk cycles
  For gv11b: gfxp_wfi_timeout_count is in usec

  Bug 2003668
  Change-Id: I68d52ce996a83df90b8b3a8164debb07e5cb370f
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1599658
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>