path: root/drivers/gpu/nvgpu/gm20b/hal_gm20b.c
...
* gpu: nvgpu: add HAL to insert semaphore commands (Deepak Nibade, 2018-05-16)

    Add the following new HALs:
      gops.fifo.add_sema_cmd() to insert HOST semaphore acquire/release methods
      gops.fifo.get_sema_wait_cmd_size() to get the size of the acquire command buffer
      gops.fifo.get_sema_incr_cmd_size() to get the size of the release command buffer

    Separate out a new API gk20a_fifo_add_sema_cmd() to implement the semaphore
    acquire/release sequence and set it to gops.fifo.add_sema_cmd().

    Add gk20a_fifo_get_sema_wait_cmd_size() and gk20a_fifo_get_sema_incr_cmd_size()
    to return the respective command buffer sizes.

    Jira NVGPUT-16

    Change-Id: Ia81a50921a6a56ebc237f2f90b137268aaa2d749
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1704490
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
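  A minimal C sketch of the split described above: a per-chip fifo
  function-pointer table carrying the three new entry points, with common code
  dispatching through it. The struct layout, field names, and the dispatch
  helper are illustrative assumptions, not the actual nvgpu definitions.

      #include <stdbool.h>
      #include <stdint.h>

      struct priv_cmd_entry;   /* opaque: a command buffer slot */
      struct nvgpu_semaphore;  /* opaque: a semaphore object */

      /* simplified per-chip fifo HAL table with the three new entry points */
      struct fifo_hal {
          void     (*add_sema_cmd)(struct priv_cmd_entry *cmd,
                                   struct nvgpu_semaphore *sema, bool acquire);
          uint32_t (*get_sema_wait_cmd_size)(void);
          uint32_t (*get_sema_incr_cmd_size)(void);
      };

      /* common code no longer open-codes the sequence; it just dispatches */
      static void channel_add_sema_cmd(const struct fifo_hal *hal,
                                       struct priv_cmd_entry *cmd,
                                       struct nvgpu_semaphore *sema,
                                       bool acquire)
      {
          hal->add_sema_cmd(cmd, sema, acquire);
      }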
* gpu: nvgpu: add HAL to handle nonstall interrupts (Deepak Nibade, 2018-05-07)

    Add a new HAL gops.mc.isr_nonstall() to handle nonstall interrupts.

    We already handle nonstall interrupts in nvgpu_intr_nonstall(), but this API
    lives entirely in Linux-specific code.

    Separate out the OS-independent code that handles nonstall interrupts into a
    new API mc_gk20a_isr_nonstall() and set it to the HAL gops.mc.isr_nonstall()
    for all existing chips. Call this HAL from nvgpu_intr_nonstall().

    Jira NVGPUT-8

    Change-Id: Iec6a56db03158a72a256f7eee8989a0a8a42ae2f
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1706589
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add HALs to mmu fault descriptors. (Vinod G, 2018-05-04)

    The MMU fault information for client and GPC differs across chips. Add a
    separate table for each chip accordingly, and add HAL functions to access
    those descriptors.

    Bug 2050564

    Change-Id: If15a4757762569d60d4ce1a6a47b8c9a93c11cb0
    Signed-off-by: Vinod G <vinodg@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1704105
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gr hal for fecs_ctxsw_mailbox size (Seema Khowala, 2018-05-01)

    fecs_ctxsw_mailbox_size varies per chip, so use a HAL to get the size.
    Also dump fecs_ctxsw_status_1 to help debugging.

    Bug 2093809

    Change-Id: I5a50281e9d78fe0e4a75d03971169e3e9679967a
    Signed-off-by: Seema Khowala <seemaj@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1698026
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add HALs to submit and wait for runlist (Deepak Nibade, 2018-04-24)

    Add the following two new HALs:
      gops.fifo.runlist_hw_submit() to submit a new runlist to hardware
      gops.fifo.runlist_wait_pending() to wait until the runlist write completes

    Set the existing API gk20a_fifo_runlist_wait_pending() to the
    gops.fifo.runlist_wait_pending HAL.

    Add a new API gk20a_fifo_runlist_hw_submit() which submits the runlist to
    h/w, and set it to the gops.fifo.runlist_hw_submit HAL.

    Jira NVGPUT-20

    Change-Id: Ic23f7d947e30883aca0b536de818e79e14733195
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1700548
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add HAL to set ppriv timeouts (Deepak Nibade, 2018-04-22)

    Add a new HAL gops.bus.set_ppriv_timeout_settings() to set platform-specific
    ppriv timeouts. Set this HAL for all supported GPUs for now.

    Jira NVGPUT-35

    Change-Id: I88b438a7bf381d0216e0947a16cd267461d0e8d7
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1699314
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Richard Zhao <rizhao@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* Revert "gpu: nvgpu: add hal op for gr set error notifier"Richard Zhao2018-04-19
| | | | | | | | | | | | | | | This reverts commit d6c6c6c483478654b34685b9e13ed160bad49a1c. RM server has moved to gops.fifo.set_error_notifier. gops.gr.set_error_notifier is not needed anymore. Jira VQRM-3058 Change-Id: I0fe7f914778ce66701a699aece2b36a5cd8079da Signed-off-by: Richard Zhao <rizhao@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1679708 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv100: consider floorswept FBPA for getting unicast list (Deepak Nibade, 2018-04-16)

    In gr_gv11b/gk20a_create_priv_addr_table() we do not consider floorswept
    FBPAs and just calculate the unicast list assuming all FBPAs are present.
    This generates an incorrect list of unicast addresses.

    Fix this by introducing a new HAL ops.gr.split_fbpa_broadcast_addr:
      Set gr_gv100_get_active_fpba_mask() for GV100
      Set gr_gk20a_split_fbpa_broadcast_addr() for the rest of the chips

    gr_gv100_get_active_fpba_mask() will first get the active FBPA mask and
    generate the unicast list only for active FBPAs.

    Bug 200398811
    Jira NVGPU-556

    Change-Id: Idd11d6e7ad7b6836525fe41509aeccf52038321f
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1694444
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use HAL for chiplet offset (Deepak Nibade, 2018-04-10)

    We currently use hard-coded values of NV_PERF_PMMGPC_CHIPLET_OFFSET and
    NV_PMM_FBP_STRIDE, which are incorrect for Volta.

    Add a new GR HAL get_pmm_per_chiplet_offset() to get the correct value per
    chip:
      Set gr_gm20b_get_pmm_per_chiplet_offset() for older chips
      Set gr_gv11b_get_pmm_per_chiplet_offset() for Volta

    Use the HAL instead of the hard-coded values wherever required.

    Bug 200398811
    Jira NVGPU-556

    Change-Id: I947e7febd4f84fae740a1bc74f99d72e1df523aa
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1690028
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
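  A minimal sketch, assuming the per-chiplet offset is used as a stride when
  computing per-chiplet perfmon register addresses: the hard-coded constant is
  replaced by whatever the per-chip HAL reports. The gr_hal struct and the
  pmm_gpc_register_addr() helper are illustrative, not the nvgpu API.

      #include <stdint.h>

      struct gr_hal {
          uint32_t (*get_pmm_per_chiplet_offset)(void); /* per-chip value */
      };

      static uint32_t pmm_gpc_register_addr(const struct gr_hal *hal,
                                            uint32_t pmm_base,
                                            uint32_t chiplet_index)
      {
          /* previously: pmm_base + chiplet_index * NV_PERF_PMMGPC_CHIPLET_OFFSET,
           * which is wrong on Volta; now the stride comes from the HAL */
          return pmm_base + chiplet_index * hal->get_pmm_per_chiplet_offset();
      }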
* gpu: nvgpu: add support to get unicast addresses on volta (Deepak Nibade, 2018-04-10)

    We have new broadcast registers on Volta, and we need to generate correct
    unicast addresses for them so that we can write those registers to the
    context image.

    Add a new GR HAL create_priv_addr_table() to do this conversion:
      Set gr_gk20a_create_priv_addr_table() for older chips
      Set gr_gv11b_create_priv_addr_table() for Volta

    gr_gv11b_create_priv_addr_table() will use the broadcast flags and then
    generate the appropriate list of unicast registers for each broadcast
    register.

    Bug 200398811
    Jira NVGPU-556

    Change-Id: Id53a9e56106d200fe560ffc93394cc0e976f455f
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1690027
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add broadcast address decode support for volta (Deepak Nibade, 2018-04-10)

    Volta has more broadcast registers than previous chips, and we do not decode
    them right now in gr_gk20a_decode_priv_addr().

    Add a new GR HAL decode_priv_addr() and set gr_gk20a_decode_priv_addr() for
    all previous chips. Add and use gr_gv11b_decode_priv_addr() for Volta.

    gr_gv11b_decode_priv_addr() will decode all the broadcast registers and set
    the broadcast flags appropriately.

    Define the following new broadcast types:
      PRI_BROADCAST_FLAGS_PMMGPC
      PRI_BROADCAST_FLAGS_PMM_GPCS
      PRI_BROADCAST_FLAGS_PMM_GPCGS_GPCTPCA
      PRI_BROADCAST_FLAGS_PMM_GPCGS_GPCTPCB
      PRI_BROADCAST_FLAGS_PMMFBP
      PRI_BROADCAST_FLAGS_PMM_FBPS
      PRI_BROADCAST_FLAGS_PMM_FBPGS_LTC
      PRI_BROADCAST_FLAGS_PMM_FBPGS_ROP

    Bug 200398811
    Jira NVGPU-556

    Change-Id: Ic673b357a75b6af3d24a4c16bb5b6bc15974d5b7
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1690026
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gops.fifo.setup_sw (Sourab Gupta, 2018-03-29)

    bar1/userd setup is different for the RM server. Created a common function
    gk20a_init_fifo_setup_sw_common.

    Jira VQRM-3058

    Change-Id: I655b54e21ed5f15dcb8e7b01bd9cd129b35ae7a3
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1665691
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gops.fifo.set_error_notifier (Richard Zhao, 2018-03-29)

    RM Server overrides it for handling stall interrupts.

    Jira VQRM-3058

    Change-Id: I8b14f073e952d19c808cb693958626b8d8aee8ca
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1679709
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gops.fifo.channel_suspend/channel_resume (Richard Zhao, 2018-03-29)

    RM Server acts differently for channel suspend/resume.

    Jira VQRM-3058

    Change-Id: If41e3099164654db448d1157fd7f51dd00c5e201
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1679707
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gops.fifo.check_tsg_ctxsw_timeout/check_ch_ctxsw_timeout (Richard Zhao, 2018-03-29)

    RM Server acts differently for the ctxsw timeout check: it won't check
    GP_GET or accumulated timeouts, but will notify the guest and go to
    recovery.

    Jira VQRM-3058

    Change-Id: I428aea34dc517311eb7e73feb556145e916309fb
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1679706
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gops.fifo.ch_abort_clean_up (Richard Zhao, 2018-03-29)

    Channel abort clean-up is only needed by the native and vgpu drivers, not by
    the RM server. The RM server expects the guest to clean up itself, so the RM
    server should not set the callback.

    Jira VQRM-3058

    Change-Id: I11b49b6f2d51c871e31de16955d487dca82609cb
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1679705
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: fix priv error register reads (Thomas Fleury, 2018-03-28)

    The current code does not compute priv error register offsets properly. This
    leads to invalid decoding of priv errors, and can also trigger additional
    priv errors.

    - add the GPU_LIT_GPC_PRIV_STRIDE define
    - return proj_gpc_priv_stride for GPU_LIT_GPC_PRIV_STRIDE in the HALs
    - use GPU_LIT_GPC_PRIV_STRIDE instead of GPU_LIT_GPC_STRIDE in
      g->ops.priv_ring.isr() to compute priv error register offsets

    Bug 2093058

    Change-Id: Ia7c36ccba0441126784bb0e00452f2cf1196ef71
    Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1682118
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Seema Khowala <seemaj@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Tested-by: Seema Khowala <seemaj@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
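  A minimal sketch of the offset computation this fix changes: the per-GPC priv
  error register is found by stepping from a base with the GPC priv stride
  rather than the general GPC stride. The helper name and parameters are
  illustrative; real base addresses and strides come from the hardware headers
  and the GPU_LIT_* litter values.

      #include <stdint.h>

      /* offset of the priv error register for a given GPC index,
       * using the priv stride (GPU_LIT_GPC_PRIV_STRIDE) as the step */
      static uint32_t gpc_priv_err_reg_offset(uint32_t err_reg_base,
                                              uint32_t gpc_index,
                                              uint32_t gpc_priv_stride)
      {
          return err_reg_base + gpc_index * gpc_priv_stride;
      }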
* gpu: nvgpu: gv100: fix PMA list alignment in ctxsw buffer (Deepak Nibade, 2018-03-21)

    The GV100 ucode is changed so that it expects the LIST_nv_perf_pma_ctx_reg
    list in the ctxsw buffer to be 256 byte aligned, but the same change is not
    applied to other chip ucodes.

    Add a new HAL (*add_ctxsw_reg_perf_pma) to configure the PMA register list,
    and define a common HAL gr_gk20a_add_ctxsw_reg_perf_pma() for all other
    chips except GV100.

    Define a separate HAL for GV100, gr_gv100_add_ctxsw_reg_perf_pma(), and fix
    the required alignment in this function.

    Bug 1998067

    Change-Id: Ie172fe90e2cdbac2509f2ece953cd8552e66fc56
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1676655
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv100: fix num_fbpas while adding ctxsw buffer entries (Deepak Nibade, 2018-03-21)

    For LIST_nv_pm_fbpa_ctx_regs, we currently call
    add_ctxsw_buffer_map_entries_subunits() to add registers corresponding to
    all the FBPAs. But while configuring the total number of registers, we do
    not consider floorswept FBPAs, and that causes misalignment in subsequent
    lists for GV100.

    Fix this by reading disabled/floorswept FBPAs from the fuse and considering
    only those FBPAs which are active for GV100.

    Add a new HAL (*add_ctxsw_reg_pm_fbpa) to support this setting and define a
    common HAL gr_gk20a_add_ctxsw_reg_pm_fbpa() for all chips except GV100.
    Define a GV100-specific gr_gv100_add_ctxsw_reg_pm_fbpa() with the
    above-mentioned implementation to account for floorsweeping.

    Bug 1998067

    Change-Id: Id560551bb0b8142791c117b6d27864566c90b489
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1676654
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
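  A minimal sketch of the counting step, assuming the fuse exposes a bitmask of
  floorswept FBPAs: only the active FBPAs are counted when sizing the register
  list. The function name and the mask convention are assumptions for
  illustration, not the nvgpu fuse API.

      #include <stdint.h>

      /* count active FBPAs, given a fuse bitmask of disabled/floorswept FBPAs
       * (assumes num_fbpas < 32) */
      static uint32_t active_fbpa_count(uint32_t num_fbpas,
                                        uint32_t floorswept_mask)
      {
          uint32_t active_mask = ~floorswept_mask & ((1u << num_fbpas) - 1u);
          uint32_t count = 0;

          while (active_mask) {
              count += active_mask & 1u;
              active_mask >>= 1u;
          }
          return count;
      }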
* gpu: nvgpu: hal for syncpt_incr_per_release (seshendra Gadagottu, 2018-03-12)

    Create a HAL to indicate syncpoint increments per release. Legacy chips use
    2 syncpoint increments per release, and gv1xx onwards uses 1 syncpoint
    increment per release.

    Bug 2066025

    Change-Id: I5d6d0a5368ef561f8150fbb7120181f49f6e338b
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1669817
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
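  A minimal sketch of the two per-chip HAL implementations this describes; the
  function names are illustrative, not the actual nvgpu symbols. Legacy chips
  report two syncpoint increments per release, gv1xx onwards report one.

      #include <stdint.h>

      static uint32_t legacy_syncpt_incr_per_release(void)
      {
          return 2; /* pre-Volta release sequence bumps the syncpoint twice */
      }

      static uint32_t gv1xx_syncpt_incr_per_release(void)
      {
          return 1; /* gv1xx onwards: single increment per release */
      }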
* gpu: nvgpu: add hal op to handle semaphore pending (Aparna Das, 2018-03-06)

    The vserver variant for gr handle semaphore pending needs different
    functionality to send interrupt to VM. Add HAL operation to allow
    overriding vserver usecase.

    Jira VQRM-2982

    Change-Id: I5fee5a491c6e54344f9da477eaf5881c50335bbc
    Signed-off-by: Aparna Das <aparnad@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1658298
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gops.semaphore_wakeup HAL (Richard Zhao, 2018-03-06)

    vserver handles semaphores differently from native, so it needs a callback
    to differentiate from native. Also created a common function
    mc_gk20a_handle_intr_nonstall to handle all nonstall interrupts.

    Jira VQRM-2982

    Change-Id: I1b3821717a4005ca4bf2a4dac5dcd335872f48f1
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1656753
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add debugger.post_events HAL op (Aparna Das, 2018-03-06)

    RM Server will need to set specific HAL op and notify vgpu client.

    Jira VQRM-2982

    Change-Id: I679565831635ff3fadf0bdc1af5fd7a8679b6fdd
    Signed-off-by: Aparna Das <aparnad@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1660226
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add hal op to handle post event id (Aparna Das, 2018-03-06)

    The vserver variant for gr post event id needs different functionality to
    send interrupt to VM. Add HAL operation to allow overriding vserver usecase.

    Jira VQRM-2982

    Change-Id: I915d089ef751023968c1e8ab181c21afeec997a5
    Signed-off-by: Aparna Das <aparnad@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1658382
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add hal op to handle notify pending (Aparna Das, 2018-03-06)

    The vserver variant for gr handle notify pending needs different
    functionality to send interrupt to VM. Add HAL operation to allow
    overriding vserver usecase.

    Jira VQRM-2982

    Change-Id: I4cb88d4d769a5d5cb98a4ee6ac3fbb74245cb5f2
    Signed-off-by: Aparna Das <aparnad@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1658255
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add hal op for gr set error notifier (Aparna Das, 2018-03-06)

    The vserver variant for gr set error notifier needs different functionality
    to send interrupt to VM. Add HAL operation to allow overriding vserver
    usecase.

    Jira VQRM-2982

    Change-Id: Ia445a27112bb6c5587dbb81100a9dafe5875b338
    Signed-off-by: Aparna Das <aparnad@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1657830
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* Revert "gpu: nvgpu: Use gv11b_css_hw_set_handled_snapshots for GV11B"Srikar Srimath Tirumala2018-02-26
| | | | | | | | | | | | | This reverts commit 2f2e51bbae39009d0305f6aaf01596571a8f5d5c. Bug 2068936 Change-Id: I539cdc12a3bd0d9d7fe0ce7dbe9cb7a274eeaa57 Signed-off-by: Srikar Srimath Tirumala <srikars@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1664647 Reviewed-by: Alex Waterman <alexw@nvidia.com> Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Use gv11b_css_hw_set_handled_snapshots for GV11B (Martin Radev, 2018-02-26)

    The value of NV_PERF_PMASYS_MEM_BUMP is different for Volta, and
    NVGPU_IOCTL_CHANNEL_CYCLE_STATS_SNAPSHOT_CMD_FLUSH did not behave correctly
    on GV11B because of that. This patch adds an instance of
    css_hw_set_handled_snapshots for Volta to fix that.

    The patch also renames css_hw_set_handled_snapshots to
    gk20a_css_hw_set_handled_snapshots to make it clearer that the function is
    arch dependent.

    Bug 1960846

    Change-Id: I92c35a862ecd7f918dd1458c086fc7ae42ca8fc5
    Signed-off-by: Martin Radev <mradev@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1662427
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add user API to get read-only syncpoint address map (Deepak Nibade, 2018-02-07)

    Add the user space API NVGPU_AS_IOCTL_GET_SYNC_RO_MAP to get a read-only
    syncpoint address map in user space.

    We already map the whole syncpoint shim to each address space, with the base
    address being vm->syncpt_ro_map_gpu_va. This new API exposes this base
    GPU_VA address of the syncpoint map, and the unit size of each syncpoint, to
    user space. User space can then calculate the address of each syncpoint as

      syncpoint_address = base_gpu_va + (syncpoint_id * syncpoint_unit_size)

    Note that this syncpoint address is read-only and should only be used for
    inserting semaphore acquires. Adding a semaphore release with this address
    would result in an MMU_FAULT.

    Define a new HAL g->ops.fifo.get_sync_ro_map and set this for all GPUs
    supported on the Xavier SoC.

    Bug 200327559

    Change-Id: Ica0db48fc28fdd0ff2a5eb09574dac843dc5e4fd
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1649365
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
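  A minimal user-space sketch of the address calculation spelled out in the
  commit message, assuming base_gpu_va and syncpt_unit_size were obtained from
  the new IOCTL; the helper name is illustrative.

      #include <stdint.h>

      /* read-only GPU VA of a syncpoint:
       * syncpoint_address = base_gpu_va + (syncpoint_id * syncpoint_unit_size) */
      static uint64_t syncpt_ro_gpu_va(uint64_t base_gpu_va,
                                       uint32_t syncpt_id,
                                       uint32_t syncpt_unit_size)
      {
          return base_gpu_va + (uint64_t)syncpt_id * syncpt_unit_size;
      }

  The resulting address is only valid for semaphore acquires; a release through
  this mapping would fault, as noted above.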
* gpu: nvgpu: add fecs_host_int_enable hal (Seema Khowala, 2018-01-31)

    This will be used to enable fecs interrupts per chip.

    Change-Id: Id99412ca1a9c4caad999c3458b0e9701515db4b9
    Signed-off-by: Seema Khowala <seemaj@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1642554
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: make .tsg_unbind_channel one layer lower (Richard Zhao, 2018-01-31)

    The message telling the RM server to unbind a channel has to be sent after
    the client unbinds the channel and before the client calls tsg release. The
    channel has to belong to a TSG on the RM server before the client submits a
    runlist to remove the channel; otherwise there is a bare-channel problem.

    By moving .tsg_unbind_channel one layer lower, gk20a_tsg_unbind_channel()
    becomes a common function for all chips, and it calls tsg release after
    calling .tsg_unbind_channel. So vgpu won't need to worry about the TSG
    being released before sending the message to the RM server.

    Bug 200382695
    Bug 200382785

    Change-Id: I32acc122f3f9d5d0628049ccf673225f9e90c87a
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1645383
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
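  A minimal sketch of the ordering described above, with simplified types (the
  struct layout and names are illustrative assumptions): the chip/vgpu-specific
  unbind hook runs first, and the TSG reference is released only afterwards, so
  a vgpu implementation can still message the RM server while the channel is
  known to belong to the TSG.

      struct channel;

      struct tsg {
          /* per-chip/vgpu unbind hook (.tsg_unbind_channel in the HAL) */
          int  (*unbind_channel)(struct tsg *tsg, struct channel *ch);
          /* drop the TSG reference */
          void (*release)(struct tsg *tsg);
      };

      static int tsg_unbind_channel_common(struct tsg *tsg, struct channel *ch)
      {
          int err = tsg->unbind_channel(tsg, ch);

          /* release strictly after the unbind hook has run */
          tsg->release(tsg);
          return err;
      }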
* gpu: nvgpu: gv11b: Enable perfmon. (Deepak Goyal, 2018-01-19)

    The t19x PMU ucode uses the RPC mechanism for PERFMON commands.

    - Declared "pmu_init_perfmon", "pmu_perfmon_start_sampling",
      "pmu_perfmon_stop_sampling" and "pmu_perfmon_get_samples" in pmu ops to
      differentiate between chips using RPC and the legacy cmd/msg mechanism.
    - Defined and used PERFMON RPC commands for t19x:
      - INIT
      - START
      - STOP
      - QUERY
    - Adds an RPC handler for PERFMON RPC commands.
    - For querying GPU utilization/load, we need to send the PERFMON_QUERY RPC
      command for gv11b.
    - Enables perfmon for gv11b.

    Bug 2039013

    Change-Id: Ic32326f81d48f11bc772afb8fee2dee6e427a699
    Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1614114
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Make graphics context property of TSG (Terje Bergstrom, 2018-01-17)

    Move graphics context ownership to the TSG instead of the channel. Combine
    channel_ctx_gk20a and gr_ctx_desc into one structure, because the split
    between them was arbitrary. Move the context header to be a property of the
    channel.

    Bug 1842197

    Change-Id: I410e3262f80b318d8528bcbec270b63a2d8d2ff9
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1639532
    Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove bare channel scheduling (Terje Bergstrom, 2018-01-02)

    Remove the scheduling IOCTL implementations for bare channels. Also remove
    the code that constructs bare channels in the runlist.

    Bug 1842197

    Change-Id: I6e833b38e24a2f2c45c7993edf939d365eaf41f0
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1627326
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: PMU parity HWW ECC support (David Nieto, 2017-12-11)

    Add support for ISR handling of ECC parity errors for the PMU unit, and set
    the initial IRQDST mask to deliver ECC interrupts to the host in the
    non-stall PMU irq path.

    JIRA: GPUT19X-83

    Change-Id: I8efae6777811893ecce79d0e32ba81b62c27b1ef
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1611625
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Alignment check for compressible fixed-address mappings (Sami Kiminki, 2017-11-30)

    Add an alignment check for compressible-kind fixed-address mappings. If
    we're using a page size smaller than the comptag line coverage window, the
    GPU VA and the physical buffer offset must be aligned with respect to that
    window.

    Bug 1995897
    Bug 2011640
    Bug 2011668

    Change-Id: If68043ee2828d54b9398d77553d10d35cc319236
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1606439
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
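  A minimal sketch of the rule stated above, assuming the comptag coverage
  window is a power of two; the function name and parameters are illustrative,
  not the nvgpu mapping API.

      #include <stdbool.h>
      #include <stdint.h>

      /* valid when the page size covers the window, or when both the GPU VA
       * and the physical buffer offset are aligned to the window */
      static bool compr_fixed_map_aligned(uint64_t gpu_va, uint64_t phys_offset,
                                          uint64_t page_size,
                                          uint64_t comptag_window)
      {
          if (page_size >= comptag_window)
              return true;

          return ((gpu_va | phys_offset) & (comptag_window - 1u)) == 0u;
      }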
* gpu: nvgpu: split init_falcon_setup_hw (Supriya, 2017-11-27)

    This CL is part of phased changes to support NO LSPMU. The changes add the
    following new pmu ops:
      - setup_apertures
      - update_lspmu_cmdline_args
    These are called from the pmu op init_falcon_setup_hw.

    JIRA NVGPU-296

    Change-Id: Idbcec5c93ca3150df5c9fb81d65b9fce778cecb8
    Signed-off-by: Supriya <ssharatkumar@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1589004
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add check_priv_security fuse ops (Seema Khowala, 2017-11-22)

    - A new fuse ops is added to set the NVGPU_SEC_PRIVSECURITY and
      NVGPU_SEC_SECUREGPCCS bits in g->enabled_flags during HAL initialization.
    - For igpu non-simulation platforms, fuses are read to decide if the gpu
      should be allowed to boot or not.
      -- Do not boot the gpu if priv_sec_en is set but wpr_enabled is not set
         to 1 or vpr_auto_fetch_disable is not set to 0.
      -- With priv_sec_en set, all falcons have to boot in LS mode, and this
         needs wpr_enabled set to 1 AND vpr_auto_fetch_disable set to 0. In
         this case the gmmu tries to pull wpr and vpr settings from tegra mc.

    Bug 2018223

    Change-Id: Iceaa1b0b3214e9a3d6cef5d77a82e034302f748b
    Signed-off-by: Seema Khowala <seemaj@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1595454
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
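  A minimal sketch of the boot decision spelled out above (names are
  illustrative): with priv security fused on, booting is only allowed when WPR
  is enabled and VPR auto-fetch is not disabled.

      #include <stdbool.h>

      static bool gpu_boot_allowed(bool priv_sec_en, bool wpr_enabled,
                                   bool vpr_auto_fetch_disable)
      {
          if (!priv_sec_en)
              return true; /* non-secure parts are not gated by these fuses */

          /* LS boot of all falcons requires wpr_enabled == 1 AND
           * vpr_auto_fetch_disable == 0 */
          return wpr_enabled && !vpr_auto_fetch_disable;
      }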
* gpu: nvgpu: CONFIG_TEGRA_ACR is supported by default (Seema Khowala, 2017-11-22)

    The TEGRA_ACR config is supposed to be enabled from Maxwell onwards. Since
    gk20a is no longer supported, delete the code that is not under the
    TEGRA_ACR config.

    Change-Id: Id52485680bca1ceaadcb94f9603c0898c2002e02
    Signed-off-by: Seema Khowala <seemaj@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1595437
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: deprecate TSG/CHANNEL_SET_PRIORITY IOCTLs (Deepak Nibade, 2017-11-15)

    The TSG/CHANNEL_SET_PRIORITY IOCTLs are deprecated, and user space should
    use a combination of timeslice and interleave levels to decide the
    priority. Hence remove the IOCTLs and all corresponding APIs.

    Jira NVGPU-393

    Change-Id: I7cf0785689269536eca0c278c774b0e9e74f8c2f
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1598581
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove PTE kind logic (Sami Kiminki, 2017-11-10)

    Since NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL was made mandatory, kernel
    does not need to know the details about the PTE kinds anymore. Thus, we can
    remove the kind_gk20a.h header and the code related to kind table setup, as
    well as simplify buffer mapping code a bit.

    Bug 1902982

    Change-Id: Iaf798023c219a64fb0a84da09431c5ce4bc046eb
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1560933
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Return GPU classes in get_litter_value (Terje Bergstrom, 2017-11-09)

    Return GPU classes in HAL get_litter_value() instead of assigning them to
    GPU characteristics at HAL initialization time.

    JIRA NVGPU-259

    Change-Id: Ife7a5cb38df3d33ce98a1caa43d3873fb1431234
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1593683
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Move fuse override DT handling (Terje Bergstrom, 2017-11-09)

    Move fuse override DT handling to Linux code. All the chip-specific fuse
    override functions did the same thing, so delete the HAL and call the same
    function to read the DT overrides on all chips.

    Also remove the fuse override functionality from the dGPU. There are no DT
    entries for PCIe devices, so it would have failed anyway.

    JIRA NVGPU-259

    Change-Id: Iba64a5d53bf4eb94198c0408a462620efc2ddde4
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1593687
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: halify size of patch buffer (David Nieto, 2017-10-26)

    Allow per-chip calculation of the gr patch buffer size, and set the default
    to match the hw default of 512 data-address pair entries (4K).

    Bug 200350539

    Change-Id: I6010c9e0304332825cb02612d3f10523ef27d128
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1584033
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Cleanup generic MM code in gk20a/mm_gk20a.c (Alex Waterman, 2017-10-24)

    Move much of the remaining generic MM code to a new common location:
    common/mm/mm.c. Also add a corresponding <nvgpu/mm.h> header. This mostly
    consists of init and cleanup code to handle the common MM data structures
    like the VIDMEM code, address spaces for various engines, etc.

    A few more in-depth changes were made as well.

    1. alloc_inst_block() has been added to the MM HAL. This used to be defined
       directly in the gk20a code, but it used a register. As a result, if this
       register hypothetically changes in the future, it would need to become a
       HAL anyway. This patch preempts that and for now just defines all HALs to
       use the gk20a version.

    2. Rename as much as possible: global functions are, for the most part,
       prepended with nvgpu (there are a few exceptions which I have yet to
       decide what to do with). Functions that are static are renamed to be as
       consistent with their functionality as possible, since in some cases
       function effect and function name have diverged.

    JIRA NVGPU-30

    Change-Id: Ic948f1ecc2f7976eba4bb7169a44b7226bb7c0b5
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1574499
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: memory unlock HAL support (Mahantesh Kumbar, 2017-10-21)

    - Created a "mem_unlock" HAL under fb to support memory unlock.
    - Called as part of gk20a_finalize_poweron() if memory unlock support is
      needed, by checking the HAL.
    - Assigned the "mem_unlock" HAL to NULL for chips which don't need memory
      unlocks.

    Change-Id: I68d0910f15d293feaacfcbf6bd17ecccd3b5219d
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    (cherry picked from commit 586894eb84860bbbe4c75dae4715bdf27432a480)
    Reviewed-on: https://git-master.nvidia.com/r/1564703
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Refactoring nvgpu_vm functions (Alex Waterman, 2017-10-18)

    Refactor the last nvgpu_vm functions from the mm_gk20a.c code. This removes
    some usages of dma_buf from the mm_gk20a.c code, too, which helps make
    mm_gk20a.c less Linux specific.

    Also delete some header files that are no longer necessary in
    gk20a/mm_gk20a.c which are Linux specific. The mm_gk20a.c code is now quite
    close to being Linux free.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: I72b370bd85a7b029768b0fb4827d6abba42007c3
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1566629
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Move rest of CDE structures to Linux (Terje Bergstrom, 2017-10-17)

    Move the rest of the CDE structures to common/linux. This includes moving
    the per-chip firmware file interpretation functions, removing CDE ops from
    the HAL, and adding them to nvgpu_os_linux.

    JIRA NVGPU-259

    Change-Id: I59d8f44bddadecef81ad3c455b363a14034c5e13
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1570403
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: flatten out vgpu hal (Peter Daifuku, 2017-10-13)

    Instead of calling the native HAL init function and then adding multiple
    layers of modification for VGPU, flatten out the sequence so that all entry
    points are set statically and are visible in a single file.

    JIRA ESRM-30

    Change-Id: Ie424abb48bce5038874851d399baac5e4bb7d27c
    Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1574616
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: verify channel status while closing per-platform (Deepak Nibade, 2017-10-04)

    We currently call gk20a_fifo_tsg_unbind_channel_verify_status() to verify
    channel status while unbinding a channel from a TSG during close.

    Add support to do this verification per-platform, and keep it disabled for
    vgpu platforms.

    Bug 200327095

    Change-Id: I19fab41c74d10d528d22bd9b3982a4ed73c3b4ca
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1572368
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>