path: root/drivers/gpu/nvgpu
Commit log (message, author, date):
* gpu: nvgpu: Correctly plumb -EAGAIN from vidmem allocations (Alex Waterman, 2018-03-08)
  Userspace can and should retry vidmem allocations if there are pending clears still to be executed by the GPU. But this requires the -EAGAIN to properly propagate back to userspace.
  Bug 200378648
  Change-Id: Ib930711270439843e043d65c2e87b60612a76239
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1669099
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
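For context, a userspace allocator can treat -EAGAIN as a transient condition and simply retry. A minimal sketch, assuming a hypothetical alloc_vidmem() wrapper around the relevant ioctl (not the real nvgpu uapi):

    #include <errno.h>
    #include <sched.h>

    /* Hypothetical wrapper around the nvgpu vidmem allocation ioctl; returns 0
     * on success or a negative errno. Not the actual nvgpu interface. */
    extern int alloc_vidmem(int ctrl_fd, unsigned long size, int *dmabuf_fd);

    /* Retry while the kernel reports -EAGAIN, i.e. while pending clears are
     * still being executed by the GPU. */
    static int alloc_vidmem_retry(int ctrl_fd, unsigned long size, int *dmabuf_fd)
    {
        int err;

        do {
            err = alloc_vidmem(ctrl_fd, size, dmabuf_fd);
            if (err == -EAGAIN)
                sched_yield();  /* give the GPU time to finish the clears */
        } while (err == -EAGAIN);

        return err;
    }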
* gpu: nvgpu: handle semaphore wraparound (Konsta Holtta, 2018-03-08)
  Compare GPU semaphores in the kernel in the same way as the hardware does: released if the value is over the threshold, but by at most half of the u32 range. This makes it possible to skip zeroing the sema values when semas are allocated, so that they'd be just monotonically increasing numbers like syncpoints are.
  Jira NVGPU-514
  Change-Id: I3bae352fbacfe9690666765b9ecdeae6f0813ea1
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1652086
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
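The wraparound-safe comparison described above can be expressed with unsigned subtraction. A minimal standalone sketch (not the exact nvgpu helper):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * A semaphore is considered released when its current value has reached or
     * passed the threshold, by at most half of the u32 range. Unsigned
     * subtraction makes this robust against wraparound, so sema values can keep
     * monotonically increasing (like syncpoints) without being zeroed at
     * allocation time.
     */
    static bool sema_is_released(uint32_t value, uint32_t threshold)
    {
        return (uint32_t)(value - threshold) < 0x80000000U;
    }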
* gpu: nvgpu: boardobj update for gv10x branch (Mahantesh Kumbar, 2018-03-08)
  - Created ops for the below boardobj methods to support gp10x & gv10x branch boardobj changes, and defined methods for gv10x with postfix _v1 with the below names:
      boardobjgrp_pmucmd_construct_impl
      boardobjgrp_pmuset_impl
      boardobjgrp_pmugetstatus_impl
      is_boardobjgrp_pmucmd_id_valid
  - These ops are assigned based on PMU version to the respective chip.
  - Modified BOARDOBJGRP_PMU_CMD_GRP_SET_CONSTRUCT & BOARDOBJGRP_PMU_CMD_GRP_GET_STATUS_CONSTRUCT to support gp10x & gv10x branch changes.
  - Updated struct boardobjgrp_pmu_cmd to include members needed for gv10x boardobj changes.
  - Created "struct nv_pmu_rpc_struct_board_obj_grp_cmd" to execute BOARD_OBJ_GRP_CMD using RPC.
  - Defined method boardobjgrp_pmucmdsend_rpc() to send BOARD_OBJ_GRP_CMD to the PMU.
  Change-Id: If2551bdda80e897e7b21d2966881586f3bbc7a9b
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1656511
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: PMU super surface support (Mahantesh Kumbar, 2018-03-08)
  - Added ops "pmu.alloc_super_surface" to create memory space for the PMU super surface.
  - Defined method nvgpu_pmu_sysmem_surface_alloc() to allocate PMU super surface memory and assigned it to "pmu.alloc_super_surface" for gv100.
  - "pmu.alloc_super_surface" is set to NULL for gp106.
  - Memory space of size "struct nv_pmu_super_surface" is allocated during PMU sw init setup if "pmu.alloc_super_surface" is not NULL, and freed if an error occurs.
  - Added ops "pmu_ver.config_pmu_cmdline_args_super_surface" to describe PMU super surface details to PMU ucode as part of the PMU command line args command if "pmu.alloc_super_surface" is not NULL.
  - Updated pmu_cmdline_args_v6 to include member "struct flcn_mem_desc_v0 super_surface".
  - Free the allocated memory for the PMU super surface in the nvgpu_remove_pmu_support() method.
  - Added "struct nvgpu_mem super_surface_buf" to the "nvgpu_pmu" struct.
  - Created header file "gpmu_super_surf_if.h" for the PMU super surface interface; added "struct nv_pmu_super_surface" to hold super surface members along with rsvd[x] dummy space to keep member offsets in sync with the PMU super surface members.
  Change-Id: I2b28912bf4d86a8cc72884e3b023f21c73fb3503
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1656571
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Enable IO coherency on GV100 (Alex Waterman, 2018-03-08)
  This reverts commit 848af2ce6de6140323a6ffe3075bf8021e119434. This is a revert of a revert, etc, etc. It re-enables IO coherence.
  JIRA EVLR-2333
  Change-Id: Ibf97dce2f892e48a1200a06cd38a1c5d9603be04
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1669722
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: use gv11b update_ctxsw_preemption_mode (Kirill Artamonov, 2018-03-07)
  Use gr_gv11b_update_ctxsw_preemption_mode instead of gr_gp10b_update_ctxsw_preemption_mode.
  Bug 1888344
  Signed-off-by: Kirill Artamonov <kartamonov@nvidia.com>
  Change-Id: I2420b64bac6393b1bb59a84235850a6cbc68be93
  Reviewed-on: https://git-master.nvidia.com/r/1665663
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: BUG_ON for sema increment, not value (Konsta Holtta, 2018-03-07)
  When adding a sema wait to a pushbuf, verify that the sema threshold has been incremented from the original value by reading the incremented field instead of value (which is set to nonzero by nvgpu_semaphore_incr()). Value could be 0 even after an increment if new semas weren't reset to 0.
  Jira NVGPU-514
  Change-Id: I295451fbc7eb9e597aea12d73074e99f74a6a899
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1658100
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
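A simplified illustration of the check described above, with assumed field names (incremented, value) rather than the real nvgpu structures:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Simplified stand-in for the semaphore bookkeeping; field names assumed. */
    struct sema_threshold {
        uint32_t value;    /* threshold value captured for the wait */
        bool incremented;  /* set by the increment path */
    };

    static void add_sema_wait_cmd(const struct sema_threshold *s)
    {
        /*
         * The threshold must have been produced by an increment. Checking
         * s->value != 0 is wrong now that semaphores are not zeroed at
         * allocation: 0 can be a perfectly valid running value.
         */
        assert(s->incremented);

        /* ... emit the wait method into the pushbuffer ... */
    }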
* gpu: nvgpu: add hal op to handle semaphore pending (Aparna Das, 2018-03-06)
  The vserver variant of gr handle semaphore pending needs different functionality to send an interrupt to the VM. Add a HAL operation so the vserver use case can override it.
  Jira VQRM-2982
  Change-Id: I5fee5a491c6e54344f9da477eaf5881c50335bbc
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1658298
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gops.semaphore_wakeup HAL (Richard Zhao, 2018-03-06)
  vserver handles semaphores differently from native, so it needs its own callback. Also created a common function, mc_gk20a_handle_intr_nonstall, to handle all nonstall interrupts.
  Jira VQRM-2982
  Change-Id: I1b3821717a4005ca4bf2a4dac5dcd335872f48f1
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1656753
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
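The pattern behind this and the neighboring HAL-op changes is a per-variant function pointer table. A minimal sketch of how a common nonstall interrupt handler might dispatch through such an op (structure and names simplified, not the actual nvgpu types):

    #include <stdbool.h>

    /* Simplified HAL table: the vserver build installs a different callback. */
    struct gpu_ops_sketch {
        void (*semaphore_wakeup)(void *g, bool post_events);
    };

    struct gpu_sketch {
        struct gpu_ops_sketch ops;
    };

    /* Common nonstall handling: decode the pending work, then call the HAL op
     * so native and vserver can react differently (e.g. forward to a VM). */
    static void handle_intr_nonstall(struct gpu_sketch *g, unsigned int ops_mask)
    {
        const unsigned int WAKEUP_SEMAPHORE = 0x1u; /* illustrative bit */

        if ((ops_mask & WAKEUP_SEMAPHORE) && g->ops.semaphore_wakeup)
            g->ops.semaphore_wakeup(g, false);
    }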
* gpu: nvgpu: add debugger.post_events HAL op (Aparna Das, 2018-03-06)
  RM Server will need to set a specific HAL op and notify the vgpu client.
  Jira VQRM-2982
  Change-Id: I679565831635ff3fadf0bdc1af5fd7a8679b6fdd
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1660226
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: make gr functions that are used by vsrv global (Richard Zhao, 2018-03-06)
  Fixed vsrv link errors for gr unification.
  Jira VQRM-2982
  Change-Id: Icd46792191f1a9aaefbf86d2f3c0b4d5bce2384e
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1664706
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add hal op to handle post event id (Aparna Das, 2018-03-06)
  The vserver variant of gr post event id needs different functionality to send an interrupt to the VM. Add a HAL operation so the vserver use case can override it.
  Jira VQRM-2982
  Change-Id: I915d089ef751023968c1e8ab181c21afeec997a5
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1658382
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add hal op to handle notify pending (Aparna Das, 2018-03-06)
  The vserver variant of gr handle notify pending needs different functionality to send an interrupt to the VM. Add a HAL operation so the vserver use case can override it.
  Jira VQRM-2982
  Change-Id: I4cb88d4d769a5d5cb98a4ee6ac3fbb74245cb5f2
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1658255
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add hal op for gr set error notifier (Aparna Das, 2018-03-06)
  The vserver variant of gr set error notifier needs different functionality to send an interrupt to the VM. Add a HAL operation so the vserver use case can override it.
  Jira VQRM-2982
  Change-Id: Ia445a27112bb6c5587dbb81100a9dafe5875b338
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1657830
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: move chip detect in os specific probe code (Aparna Das, 2018-03-06)
  This allows moving HAL overrides for vserver out of the common chip-specific HAL files into os-specific probe code.
  Jira VQRM-3070
  Change-Id: Icc61aacc03ac7db7a0ea1f6a2dd2b76185c74757
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1656752
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>
  Tested-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: implement gfxp wfi controls (Kirill Artamonov, 2018-03-06)
  /sys/devices/gpu.0/gfxp_wfi_timeout_unit
    usec   - microseconds
    sysclk - gpu clock count
  Treat gr_fe_gfxp_wfi_timeout_r as a context-switched register on gv11b.
  Set the default gfxp_wfi_timeout to 100 usec to match gp10b at 1 GHz.
  Bug 1888344
  Signed-off-by: Kirill Artamonov <kartamonov@nvidia.com>
  Change-Id: I7fa64ce6912ae861244856807543b17bd7a26bed
  Reviewed-on: https://git-master.nvidia.com/r/1651517
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* Revert "gpu: nvgpu: gv11b: limit min freq to 216.75Mhz" (seshendra Gadagottu, 2018-03-06)
  The actual issue with low frequency has been root-caused, so revert this hack.
  Bug 2056266
  This reverts commit 9afb74dada5e318ec6b40ff4745e4d4adf8ee8b2.
  Change-Id: Iab4f05b4e78f681298b9bf732289de9e2026d6b3
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1667549
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: Enable aELPG. (Deepak Goyal, 2018-03-06)
  Bug 2046561
  Change-Id: I625db1797d699f6e74374535a836ab1c1b0a19ce
  Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1657214
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: modify logic for fecs arb idle check (seshendra Gadagottu, 2018-03-06)
  Check these conditions for FECS arbiter idle:
  1. Wait for gr_fecs_arb_ctx_cmd to idle
  2. Wait for gr_fecs_ctxsw_status_1:arb_busy to idle
  Waiting only for condition 1 guarantees dispatch of the command but not completion of the work.
  Bug 2056266
  Change-Id: I3fcbdb9cda91fee0ff345a24bf23ba7dabf268d0
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1667550
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
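A sketch of the two-stage wait described above, with hypothetical register accessors and a simple retry budget (the real register offsets, bit positions, and timeout handling differ):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical accessors for the two FECS status sources. */
    extern uint32_t read_fecs_arb_ctx_cmd(void);
    extern uint32_t read_fecs_ctxsw_status_1(void);

    #define CTXSW_STATUS_1_ARB_BUSY (1u << 12)  /* illustrative bit position */

    /* Wait for command dispatch (arb_ctx_cmd idle) and for the work itself to
     * finish (ctxsw_status_1.arb_busy clear). */
    static int wait_fecs_arb_idle(void)
    {
        unsigned int retries = 1000;

        while (read_fecs_arb_ctx_cmd() != 0 ||
               (read_fecs_ctxsw_status_1() & CTXSW_STATUS_1_ARB_BUSY)) {
            if (retries-- == 0)
                return -1;  /* timed out */
        }
        return 0;
    }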
* gpu: nvgpu: clean up memory leaks in gr init (Seema Khowala, 2018-03-06)
  This resolves a memory leak after modifying the tpc_fs_mask sysfs node, which sets gr.sw_ready to false and forces gr to re-initialize:
    echo 1 > /sys/devices/gpu.0/force_idle
    echo 5 > /sys/devices/gpu.0/tpc_fs_mask
    echo 0 > /sys/devices/gpu.0/force_idle
  Bug 200393029
  Change-Id: I76299f53fc87823071c672ec682c3eb51f72f513
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1666018
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: WARN_ON dma_alloc if mem is already valid (Seema Khowala, 2018-03-06)
  Trying to allocate memory for an already valid nvgpu_mem now dumps a WARN stack along with an nvgpu warning message about the memory leak.
  Bug 200393029
  Change-Id: I9b5becf898deb47eecd6369c2a97e688caa4660e
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1665377
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
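A simplified sketch of this kind of guard, assuming a validity flag on the memory descriptor (the real nvgpu_mem fields and warning helpers differ):

    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified memory descriptor; only the validity flag matters here. */
    struct mem_desc_sketch {
        bool valid;  /* set once backing storage has been allocated */
    };

    static int dma_alloc_sketch(struct mem_desc_sketch *mem, unsigned long size)
    {
        if (mem->valid) {
            /* In the kernel this would be a WARN_ON() plus an nvgpu warning so
             * the leaked allocation shows up with a stack trace. */
            fprintf(stderr, "dma alloc on already valid mem (%lu bytes): leak?\n",
                    size);
        }

        /* ... perform the allocation ... */
        mem->valid = true;
        return 0;
    }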
* gpu: nvgpu: initialize max_subctx_count before use (Deepak Goyal, 2018-03-06)
  The PMU instance block layout was not getting populated with subctx PDB info because max_subctx_count was 0x0; it was only initialized after PMU instance block initialization had completed. Therefore initialize max_subctx_count before using it. For Volta, FECS can bind itself to the PMU instance only if the SC PDB info is populated.
  Bug 2051863
  Bug 200392620
  Change-Id: Id4fc26502e189c15cb57cb36cc09387dad773dc5
  Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1666585
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: Correct PMU PG enabled masks. (Deepak Goyal, 2018-03-06)
  PMU ucode records the supported feature list for a particular chip as a support mask sent via PMU_PG_PARAM_CMD_GR_INIT_PARAM. It then enables a selective feature list through an enable mask sent via the PMU_PG_PARAM_CMD_SUB_FEATURE_MASK_UPDATE cmd. Previously only the ELPG state machine mask was enabled, so only the ELPG state machine executed while other crucial steps in the ELPG entry/exit sequence were skipped.
  Bug 200392620.
  Bug 200296076.
  Change-Id: I5e1800980990c146c731537290cb7d4c07e937c3
  Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1665767
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* Revert "Revert "Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working""" (Timo Alho, 2018-03-05)
  This reverts commit 89fbf39a05483917c0a9f3453fd94c724bc37375.
  Bug 2075315
  Change-Id: Id34a0376be5160b164931926ec600f77edf69667
  Signed-off-by: Timo Alho <talho@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1668487
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
* Revert "Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working"" (Alex Waterman, 2018-03-03)
  This reverts commit 5a35a95654d561fce09a3b9abf6b82bb7a29d74b.
  JIRA EVLR-2333
  Change-Id: I923c32496c343d39d34f6d406c38a9f6ce7dc6e0
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1667167
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add pid field to gk20a_event_id_data (Sourab Gupta, 2018-03-03)
  QNX is going to reuse the 'struct gk20a_event_id_data' data structure for its event handling. Unlike Linux, which works on fd polling, QNX needs the pid to find out which process to deliver the event to. The patch adds the requisite field.
  Change-Id: I466e9cfaacb921c6e1c41bd92c66194b81fc6774
  Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1657994
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: skip channel abort for deferred reset (Deepak Nibade, 2018-03-01)
  If deferred_reset_pending is set in gk20a_fifo_handle_mmu_fault() and in gv11b_fifo_teardown_ch_tsg(), we skip resetting the engines and skip setting the error notifier. We then call gk20a_channel_abort()/gk20a_fifo_abort_tsg(), which aborts the channels and resets the syncpoint values to release all the waiters. But since we don't set the error notifier, this could lead userspace to assume a successful submission without any error. To fix this, disable the channel/TSG when deferred_reset_pending is set and skip the calls to gk20a_channel_abort()/gk20a_fifo_abort_tsg(). Note that we finally abort the channel when the channel is being closed.
  Bug 200363077
  Change-Id: Ia48ca369701c14d1913d8f7b66ed466b7b840224
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1664319
  (cherry picked from commit ac40d082e85397f100c3d8377b1f28811485def4)
  Reviewed-on: https://git-master.nvidia.com/r/1666445
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
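The control flow described above, as a hedged sketch with assumed helper names (the real nvgpu functions take different arguments):

    #include <stdbool.h>

    /* Hypothetical helpers standing in for the nvgpu channel/TSG routines. */
    extern void channel_set_error_notifier(void *ch, int error);
    extern void channel_abort(void *ch);
    extern void channel_disable(void *ch);

    /*
     * If a reset is deferred, do not abort the channel yet: aborting would
     * release syncpoint waiters without an error notifier, so userspace could
     * mistake the job for a successful submission. Just disable the channel;
     * the abort happens later when the channel is closed.
     */
    static void teardown_faulted_channel(void *ch, bool deferred_reset_pending,
                                         int error)
    {
        if (deferred_reset_pending) {
            channel_disable(ch);
            return;
        }

        channel_set_error_notifier(ch, error);
        channel_abort(ch);
    }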
* gpu: nvgpu: allocate separate client managed syncpoint for User (Deepak Nibade, 2018-03-01)
  We currently allocate an nvgpu-managed syncpoint in c->sync and share that with user space. But to avoid conflicts between user space and kernel space increments, allocate a separate "client managed" syncpoint for user space in c->user_sync. Add a new API, nvgpu_nvhost_get_syncpt_client_managed(), to request a client-managed syncpoint from nvhost. Note that nvhost/nvgpu do not keep track of the MAX/threshold value of this syncpoint. Update gk20a_channel_syncpt_create() to receive a flag indicating whether a user space syncpoint is required or not. Unset NVGPU_SUPPORT_USER_SYNCPOINT for gp10b since we don't want to allocate double syncpoints per channel on that platform. For gv11b, once we move to user space submits, support for c->sync will be dropped, so we keep using only one syncpoint per channel.
  Bug 200326065
  Jira NVGPU-179
  Change-Id: I78d94de4276db1c897ea2a4fe4c2db8b2a179722
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1665828
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
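A sketch of the split between the kernel-managed and client-managed syncpoints; nvgpu_nvhost_get_syncpt_client_managed() is named in the change above, but the surrounding structure and allocation helpers here are simplified stand-ins:

    /* Simplified channel; the real channel struct holds much more state. */
    struct channel_sketch {
        unsigned int sync_id;       /* kernel-managed: nvgpu tracks MAX */
        unsigned int user_sync_id;  /* client-managed: no MAX tracking */
    };

    /* Hypothetical allocation helpers standing in for the nvhost calls. */
    extern unsigned int get_syncpt_host_managed(const char *name);
    extern unsigned int get_syncpt_client_managed(const char *name);

    static int channel_setup_syncpts(struct channel_sketch *ch, int user_syncpt)
    {
        ch->sync_id = get_syncpt_host_managed("nvgpu-channel");

        /* Only platforms with NVGPU_SUPPORT_USER_SYNCPOINT get the second,
         * client-managed syncpoint, so kernel and user increments never
         * collide on a single ID. */
        if (user_syncpt)
            ch->user_sync_id = get_syncpt_client_managed("nvgpu-user");

        return 0;
    }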
* Revert "gpu: nvgpu: support user fence updates" (Deepak Nibade, 2018-03-01)
  This reverts commit 0c46f8a5e112c08c172ee2c692832e1753ffbcce. We should not support tracking of the MAX/threshold value for a syncpoint allocated by user space, hence revert this patch.
  Bug 200326065
  Jira NVGPU-179
  Change-Id: I2df8f8c13fdac91c0814b11a2b7dee30153409d4
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1665827
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: do not alloc ctx buffers if already allocated (Seema Khowala, 2018-03-01)
  Bug 200393029
  Change-Id: Ic2946958e34bcb9247179fcf2e8735c822155cce
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1665338
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Rajkumar Kasirajan <rkasirajan@nvidia.com>
  Reviewed-by: Shreshtha Sahu <ssahu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: introduce explicit nvgpu_sgl type (Konsta Holtta, 2018-03-01)
  The operations in struct nvgpu_sgt_ops have a scatter-gather list (sgl) argument which is a void pointer. Change the type signatures to take struct nvgpu_sgl *, which is an opaque marker type that makes it more difficult to pass around wrong arguments, as anything goes for void *. Explicit types also add self-documentation to the code. For some added safety, explicit type casts are now required in implementors of the nvgpu_sgt_ops interface when converting between the general nvgpu_sgl type and implementation-specific types. This is not purely a bad thing because the casts show clearly where type conversions are happening.
  Jira NVGPU-30
  Jira NVGPU-52
  Jira NVGPU-305
  Change-Id: Ic64eed6d2d39ca5786e62b172ddb7133af16817a
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1643555
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
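The opaque-marker-type pattern described above, reduced to a generic C illustration (not the exact nvgpu definitions or signatures):

    #include <stdint.h>

    /* Declared but never defined: can only be passed around as a pointer. */
    struct nvgpu_sgl;

    /* The ops table takes the opaque type, so a stray void * cannot slip in
     * unnoticed the way it could before. */
    struct sgt_ops_sketch {
        uint64_t (*sgl_phys)(struct nvgpu_sgl *sgl);
        struct nvgpu_sgl *(*sgl_next)(struct nvgpu_sgl *sgl);
    };

    /* An implementation's concrete sgl type. */
    struct impl_sgl {
        uint64_t phys;
        struct impl_sgl *next;
    };

    /* Implementations cast explicitly, which documents the conversion points. */
    static uint64_t impl_sgl_phys(struct nvgpu_sgl *sgl)
    {
        return ((struct impl_sgl *)sgl)->phys;
    }

    static struct nvgpu_sgl *impl_sgl_next(struct nvgpu_sgl *sgl)
    {
        return (struct nvgpu_sgl *)((struct impl_sgl *)sgl)->next;
    }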
* Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working" (Alex Waterman, 2018-02-28)
  Also revert other changes related to IO coherence. This may be the culprit in a recent dev-kernel lockdown.
  Bug 2070609
  Change-Id: Ida178aef161fadbc6db9512521ea51c702c1564b
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1665914
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Srikar Srimath Tirumala <srikars@nvidia.com>
* gpu: nvgpu: Use our own vmap() for coherent DMA buffers (Alex Waterman, 2018-02-27)
  For some reason the GPU does not like the mappings created by the DMA API for coherent sysmem buffers. But a plain vmap() does seem to work. To work around this, when we are using coherent sysmem, force the NO_KERNEL_MAPPING flag on and then make a vmap() in the nvgpu DMA API wrapper. The rest of the driver will be none the wiser but will work as expected.
  This problem is not understood yet but it is being tracked in bug 2040115. Once this bug is understood, this WAR should either be determined as necessary or reverted with an appropriate fix.
  Bug 2040115
  JIRA EVLR-2333
  Change-Id: Idae7a0c92441f0309df572ac18697af49bb6ff2b
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1657568
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Use coherent aperture flag (Alex Waterman, 2018-02-27)
  When using a coherent DMA API we must make sure to program any aperture fields with the coherent aperture setting. To do this the nvgpu_aperture_mask() function was modified to take a third aperture mask argument, a coherent setting, so that code can use this function to generate coherent aperture settings.
  The aperture choice is somewhat tricky: the default version of this function uses the state of the DMA API to determine what aperture to use for SYSMEM: either coherent or non-coherent internally. Thus a kernel user need only specify the normal nvgpu_mem struct and the correct mask should be chosen. Due to many uses of nvgpu_mem structs not created directly from the DMA API wrapper, it's easier to translate SYSMEM to SYSMEM_COH after creation.
  However, the GMMU mapping code will encounter buffers from userspace with different coherency attributes than the DMA API. Thus __nvgpu_aperture_mask() really respects the aperture setting passed in regardless of the DMA API state. This aperture setting is pulled from NVGPU_VM_MAP_IO_COHERENT since this is either passed in from userspace or set by the kernel when using coherent DMA. The aperture field in attrs is upgraded to coh if this flag is set.
  This change also adds a coherent sysmem mask everywhere that it can. There are a couple of places that do not have a coherent register field defined yet. These need to eventually be defined and added.
  Lastly, the aperture mask code has been moved from the Linux vm.c code to the general vm.c code since this function has no Linux dependencies.
  Note: depends on https://git-master.nvidia.com/r/1664536 for new register fields.
  JIRA EVLR-2333
  Change-Id: I4b347911ecb7c511738563fe6c34d0e6aa380d71
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1655220
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
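The three-way mask selection described above, as a simplified sketch (enum and parameter names are illustrative, not the nvgpu definitions):

    #include <stdint.h>

    enum aperture_sketch {
        APERTURE_INVALID,
        APERTURE_SYSMEM,
        APERTURE_SYSMEM_COH,  /* sysmem that must be accessed coherently */
        APERTURE_VIDMEM,
    };

    /*
     * Pick the register mask for the aperture. The extra sysmem_coh_mask
     * argument mirrors the change above: callers that know the buffer must be
     * mapped IO-coherently resolve to the coherent encoding.
     */
    static uint32_t aperture_mask_sketch(enum aperture_sketch ap,
                                         uint32_t sysmem_mask,
                                         uint32_t sysmem_coh_mask,
                                         uint32_t vidmem_mask)
    {
        switch (ap) {
        case APERTURE_SYSMEM:
            return sysmem_mask;
        case APERTURE_SYSMEM_COH:
            return sysmem_coh_mask;
        case APERTURE_VIDMEM:
            return vidmem_mask;
        default:
            return 0;
        }
    }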
* gpu: nvgpu: FIFO sched fixes (Alex Waterman, 2018-02-27)
  Miscellaneous fixes for the sched code:
  1. Make sure get_addr() on an SGL respects the use phys flag since the runlist needs physical addresses when NVLINK is in use.
  2. Ensure the runlist is contiguous. Since the runlist memory is not virtually addressed, the buffer must be physically contiguous.
  3. Use all 64 bits of the runlist address in the runlist base addr register (and related fields).
  JIRA EVLR-2333
  Change-Id: Id4fd5ba4665d3e35ff1d6ca78dea6b58894a9a9a
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1654667
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
  Tested-by: Thomas Fleury <tfleury@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Get coherency on gv100 + NVLINK working (Alex Waterman, 2018-02-27)
  This patch does a couple of things. First it renames NVGPU_DMA_COHERENT to NVGPU_USE_COHERENT_SYSMEM since the former is somewhat ambiguous in meaning. The latter clearly states what must happen: nvgpu needs to treat sysmem as coherent. This flag does simply follow the state of the DMA API, but there's no reason to expect a casual reader of the code to know that when the DMA API is coherent nvgpu must treat sysmem as coherent. One thing to note though: when the dGPU is using PCIe and the PCIe controller is coherent, it doesn't actually matter what we do. However, we use this flag for determining how to make CPU mappings in nvgpu_mem_begin(), so this flag is still relevant for the CPU side of things.
  Next this patch adds a check in the core kernel GMMU mapping routine to make sure that when the NVGPU_USE_COHERENT_SYSMEM flag is set the IO coherent flag is passed into the mapping code. This is the primary fix that made NVLINK start working.
  Finally, the USE_COHERENT_SYSMEM flag and the NVGPU_SUPPORT_IO_COHERENCE flag were set both for PCIe and for iGPUs. The iGPU also must correctly match its CPU mappings and GPU mappings for proper operation.
  JIRA EVLR-2333
  Change-Id: Icd5f07167c9f48a0a2e8493e34c9cc6238e56907
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1654519
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
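The key check, reduced to a sketch: the flag names NVGPU_USE_COHERENT_SYSMEM and NVGPU_VM_MAP_IO_COHERENT come from the message above, while the helper and macro stand-ins here are assumptions:

    #include <stdbool.h>
    #include <stdint.h>

    #define MAP_FLAG_IO_COHERENT (1u << 0)  /* stand-in for NVGPU_VM_MAP_IO_COHERENT */

    /* Hypothetical query for the NVGPU_USE_COHERENT_SYSMEM state. */
    extern bool use_coherent_sysmem(void);

    /* Before building GMMU PTEs for a sysmem buffer, make sure the mapping
     * flags reflect that sysmem must be treated as coherent. */
    static uint32_t fixup_map_flags(uint32_t flags)
    {
        if (use_coherent_sysmem())
            flags |= MAP_FLAG_IO_COHERENT;
        return flags;
    }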
* gpu: nvgpu: Add coherent aperture target field (Alex Waterman, 2018-02-27)
  Add the coherent aperture target field for FECS_ARB_CTX_PTR_TARGET and FECS_NEW_CTX_TARGET. This is required for completeness with regard to IO coherence.
  JIRA EVLR-2333
  Change-Id: I70576481e6af24bf2494e9cb5480e10062958235
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1664538
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv100: remove gr_gv100_load_tpc_mask() (Richard Zhao, 2018-02-27)
  No one uses it anymore.
  Jira VQRM-2982
  Change-Id: Iac885221b670507241663a851c4a597157233756
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1664653
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: move common files out of linux folder (Richard Zhao, 2018-02-27)
  Most files have been moved out of the Linux folder. More code can become common as the HAL conversion goes on.
  Jira EVLR-2364
  Change-Id: Ia9dbdbc82f45ceefe5c788eac7517000cd455d5e
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649947
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: avoid referring uapi header when set powergate mode (Richard Zhao, 2018-02-27)
  Defined powergate mode in tegra_vgpu.h.
  Jira EVLR-2364
  Change-Id: Id7dcaeffcf0dd8394b4eac6601152720fa382e8c
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649946
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Nirav Patel <nipatel@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: remove unused uapi header includings (Richard Zhao, 2018-02-27)
  It's the first step toward avoiding uapi header includes.
  Jira EVLR-2364
  Change-Id: Iea572d64a7076bcd3fd1e7196c197c952c04595b
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649945
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Nirav Patel <nipatel@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: add function vgpu_is_reduced_bar1() (Richard Zhao, 2018-02-27)
  The implementation is OS specific for now.
  Jira EVLR-2364
  Change-Id: I8ac390b056aa9ca5b5d4ab2ac4dbc06f6689f4a4
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649944
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Nirav Patel <nipatel@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: split vgpu.c into vgpu.c and vgpu_linux.c (Richard Zhao, 2018-02-27)
  vgpu.c will keep common code while vgpu_linux.c is Linux specific.
  Jira EVLR-2364
  Change-Id: Ice9782fa96c256f1b70320886d3720ab0db26244
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649943
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: avoid using sg_table when map bar1 (Richard Zhao, 2018-02-27)
  Move to the OS-agnostic function nvgpu_mem_get_addr().
  Jira EVLR-2364
  Change-Id: I2f38567cae35c5d410f082785213af6052150c27
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649942
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Nirav Patel <nipatel@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: move to nvgpu_err/info from dev_err/info (Richard Zhao, 2018-02-27)
  It helps make the code more portable.
  Jira EVLR-2364
  Change-Id: I0cc1fa739d7884d3c863975f08b3b592acd34613
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649941
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Nirav Patel <nipatel@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: move to nvgpu_msleep() (Richard Zhao, 2018-02-27)
  It's part of the effort to unify vgpu.
  Jira EVLR-2364
  Change-Id: Ieef95fbf1c4bdcec548e4dc741d771a74ebefb9b
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649940
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Nirav Patel <nipatel@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: remove linux/string.h (Richard Zhao, 2018-02-27)
  It's already included indirectly.
  Jira EVLR-2364
  Change-Id: I9ad134b82a24ffd84358d957ee9b56e1ce4558ba
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649939
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Nirav Patel <nipatel@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: remove smmu checkings (Richard Zhao, 2018-02-27)
  Currently vgpu always disables SMMU.
  Jira EVLR-2364
  Change-Id: I54dfa5ff6bfda56975617ec526d80359bf3cf672
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649938
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Nirav Patel <nipatel@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: remove vgpu_locked_gmmu_map() (Richard Zhao, 2018-02-27)
  The function is not used anymore.
  Change-Id: Iad99811e2d356362d16b961464729f5169c36f28
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649937
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Nirav Patel <nipatel@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: move tegra_vgpu.h to include/nvgpu/vgpu/ (Richard Zhao, 2018-02-27)
  tegra_vgpu.h is OS agnostic, so move it out of the linux folder.
  Jira EVLR-2364
  Change-Id: Ibbe8923f7af036b3b6730f682f5243ca73810f7b
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649936
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Nirav Patel <nipatel@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>