path: root/drivers/gpu/nvgpu/gk20a/fifo_gk20a.c
* gpu: nvgpu: runlist info mutex not needed for runlist_state
  Seema Khowala, 2018-01-11

  There is no need to acquire the runlist_info mutex for the runlist being
  enabled or disabled in fifo_sched_disable_r.

  Bug 2043838

  Change-Id: Ia9839ab7effbe7daf353c3a54f25a2b4914af5e8
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1630345
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* DNI: gpu: nvgpu: Increase GV100 ctxsw timeouts
  David Nieto, 2018-01-05

  During bringup, and before nvlink is up, GV100 on the DDPX platform
  operates with a very, very slow sysmem link. In order to get the sysmem
  test to pass it is necessary to significantly increase most timeouts by
  an order of magnitude.

  Bug 2040544

  Change-Id: I26858afde4ae80c70f86b47cfff674b6b00b5bf8
  Signed-off-by: David Nieto <dmartineznie@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1627417
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove bare channel scheduling
  Terje Bergstrom, 2018-01-02

  Remove scheduling IOCTL implementations for bare channels. Also remove
  the code that constructs bare channels in the runlist.

  Bug 1842197

  Change-Id: I6e833b38e24a2f2c45c7993edf939d365eaf41f0
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1627326
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove support for channel events
  Terje Bergstrom, 2017-12-28

  Remove support for events for bare channels. All users have already
  moved to TSGs and TSG events.

  Bug 1842197

  Change-Id: Ib3ff68134ad9515ee761d0f0e19a3150a0b744ab
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1618906
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use nvgpu list APIs instead of linux APIs
  Deepak Nibade, 2017-12-08

  Use the nvgpu-specific list API nvgpu_list_for_each_entry() instead of
  calling the Linux-specific list API list_for_each_entry().

  Jira NVGPU-444

  Change-Id: I3c1fd495ed9e8bebab1f23b6769944373b46059b
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1612442
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
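  A minimal before/after sketch of this conversion; the macro's argument
  order (type name plus list-member name) is assumed from common nvgpu
  usage and is not copied from the header:

      /* Linux-specific, before: */
      list_for_each_entry(ch, &tsg->ch_list, ch_entry) {
              /* per-channel work */
      }

      /* nvgpu list API, after (argument order assumed): */
      nvgpu_list_for_each_entry(ch, &tsg->ch_list, channel_gk20a, ch_entry) {
              /* per-channel work */
      }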
* gpu: nvgpu: cleanup uapi header includes
  Deepak Nibade, 2017-11-28

  With the recent rework in nvgpu, most of the <uapi/linux/nvgpu.h>
  includes are no longer needed, so remove them.

  Remove the use of NVGPU_DBG_GPU_REG_OP_* in gk20a/gr_gk20a.c and use the
  common definitions instead.

  Remove the use of NVGPU_ALLOC_GPFIFO_FLAGS_REPLAYABLE_FAULTS_ENABLE in
  gp10b/fifo_gp10b.c by defining a new common flag
  NVGPU_GPFIFO_FLAGS_REPLAYABLE_FAULTS_ENABLE and parsing it in the API
  nvgpu_gpfifo_user_flags_to_common_flags().

  Jira NVGPU-363

  Change-Id: I8e653275ea3f443f24be7284d54f2115636aba3f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1606108
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
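  A sketch of the kind of translation nvgpu_gpfifo_user_flags_to_common_flags()
  performs; the function and flag names come from this commit message, while
  the body below is an assumption, not the actual implementation:

      /* Sketch only: translate uapi GPFIFO alloc flags into the common
       * NVGPU_GPFIFO_FLAGS_* form. Only the replayable-faults flag is
       * shown; the real function may handle more flags. */
      static u32 nvgpu_gpfifo_user_flags_to_common_flags(u32 user_flags)
      {
              u32 flags = 0;

              if (user_flags & NVGPU_ALLOC_GPFIFO_FLAGS_REPLAYABLE_FAULTS_ENABLE)
                      flags |= NVGPU_GPFIFO_FLAGS_REPLAYABLE_FAULTS_ENABLE;

              return flags;
      }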
* gpu: nvgpu: define error_notifiers in common code
  Deepak Nibade, 2017-11-27

  All the Linux-specific error_notifier codes are defined in the
  Linux-specific header file <uapi/linux/nvgpu.h> but are used throughout
  the common driver. Since they are defined in a Linux-specific file, all
  uses of those error_notifiers need to move into Linux-specific code only.

  Hence define new error_notifiers in include/nvgpu/error_notifier.h and
  use them in the common code.

  Add a new API nvgpu_error_notifier_to_channel_notifier() to convert a
  common error_notifier of the form NVGPU_ERR_NOTIFIER_* to a
  Linux-specific error notifier of the form NVGPU_CHANNEL_*. Any future
  addition to the error notifiers requires updating both forms.

  Move all error-notifier-related metadata from channel_gk20a (common
  code) to the Linux-specific structure nvgpu_channel_linux, and update
  all accesses to this data to use the new structure instead of
  channel_gk20a.

  Move and rename the APIs below to a Linux-specific file and declare them
  in error_notifier.h:
    nvgpu_set_error_notifier_locked()
    nvgpu_set_error_notifier()
    nvgpu_is_error_notifier_set()

  Add the new API nvgpu_set_error_notifier_if_empty() and use it in
  fifo_vgpu.c.

  Include <nvgpu/error_notifier.h> wherever the new error_notifier codes
  are used.

  NVGPU-426

  Change-Id: Iaa5bfc150e6e9ec17d797d445c2d6407afe9f4bd
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1593361
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
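  A sketch of the conversion helper described above; the two notifier names
  shown also appear elsewhere in this log, but the body, the extra case and
  the default value are assumptions:

      /* Sketch: map a common NVGPU_ERR_NOTIFIER_* code to the Linux uapi
       * NVGPU_CHANNEL_* code. Only one mapping is shown; the real table
       * covers every notifier. */
      static u32 nvgpu_error_notifier_to_channel_notifier(u32 error_notifier)
      {
              switch (error_notifier) {
              case NVGPU_ERR_NOTIFIER_GR_SEMAPHORE_TIMEOUT:
                      return NVGPU_CHANNEL_GR_SEMAPHORE_TIMEOUT;
              /* ... one case per NVGPU_ERR_NOTIFIER_* value ... */
              default:
                      return NVGPU_CHANNEL_FIFO_ERROR_IDLE_TIMEOUT;
              }
      }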
* gpu: nvgpu: deprecate TSG/CHANNEL_SET_PRIORITY IOCTLs
  Deepak Nibade, 2017-11-15

  The TSG/CHANNEL_SET_PRIORITY IOCTLs are deprecated; user space should
  use a combination of timeslice and interleave levels to decide priority.
  Hence remove the IOCTLs and all corresponding APIs.

  Jira NVGPU-393

  Change-Id: I7cf0785689269536eca0c278c774b0e9e74f8c2f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1598581
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add rc input param to gk20a_fifo_handle_pbdma_intr
  Seema Khowala, 2017-11-15

  Add a new parameter rc to gk20a_fifo_handle_pbdma_intr so that it can be
  called to handle a PBDMA interrupt without doing teardown. This is
  needed for t19x during polling of PBDMA preempt.

  Bug 200277163
  Bug 1945121

  Change-Id: Ide0d3b6ed8c0862cb5332d112926b6933abd0815
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1584734
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: get intr mask for an active_engine_id
  Seema Khowala, 2017-11-15

  This is needed for t19x during engine preempt-done polling. For example,
  a copy engine (CE) stall interrupt should not prevent GR from finishing
  preemption. In order to check whether the current stall interrupt is
  valid for the engine being polled for preemption completion, a function
  that provides the engine interrupt mask is needed. With this, the
  polling code can make sure there are no stall interrupts pending for the
  engine being polled for preemption done. If stall interrupts are pending
  for an engine, preemption will never finish.

  Bug 200277163
  Bug 1945121

  Change-Id: Ie1ccac52c3e8d453a49084e195f2e7eaafb8f057
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1584065
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Include UAPI explicitly
  Terje Bergstrom, 2017-11-13

  Add explicit #includes for <uapi/linux/nvgpu.h> for source code files
  that depend on it.

  JIRA NVGPU-259

  Change-Id: I717d5f1493423fd3a7a34b6dd3380d33a9307a09
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1596254
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: define runlist level in common code
  Deepak Nibade, 2017-11-08

  All the runlist levels NVGPU_RUNLIST_INTERLEAVE_LEVEL_* are declared in
  the Linux-specific uapi header but used in common code. Since common
  code should be Linux-independent, move these uses out of common code.

  Define new runlist levels NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_* in
  common code and use them wherever required.

  Add a new API nvgpu_get_common_runlist_level() to get the common runlist
  level of the form NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_* from the
  Linux-specific runlist level of the form NVGPU_RUNLIST_INTERLEAVE_LEVEL_*.

  Jira NVGPU-259

  Change-Id: Ic19239f0f8275683d5d1b981df530acd90e6dfbb
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1594327
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* Revert "gpu: nvgpu: WAR: unbind_channel_verify_status ret val"
  Deepak Nibade, 2017-11-02

  This reverts commit a8643b3a99af5011f4e933967f8a09f227ce4949.

  A proper resolution is implemented in user space (nvrm_gpu): idle all
  channels of a TSG before unbinding a channel from it. With that we
  should not see the NEXT bit set on a channel while unbinding.

  Hence revert the WAR and again return an error if we find the NEXT bit
  set on the channel. Also restore the error print.

  Bug 200327095

  Change-Id: Id58e00c4602a4a5c9f65e5ee1329b606f45993d7
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1585991
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Split ctxsw_trace API into non-Linux component
  Alex Waterman, 2017-11-01

  Split the ctxsw trace "core" API code into <nvgpu/ctxsw_trace.h>. This
  is not perfect, though, since there are some Linuxisms present in the
  HAL, and as such that code has to be hidden by the ctxsw tracing CONFIG.
  But this patch should work for QNX in that it will allow the code to
  build as long as CONFIG_GK20A_CTXSW_TRACE is not set.

  Also fix the copyright notice in the ctxsw code present under
  common/linux to be GPL.

  JIRA NVGPU-287

  Change-Id: I94715864caf335b7220185492e4629d713b025e0
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1589429
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: save act_eng_bitmask in runlist_info
  Seema Khowala, 2017-11-01

  Issue:
  Currently a bitmask of engine indices is being saved. This gives wrong
  active engine ids for a given runlist, and s/w ends up checking/polling
  the wrong engine_status registers, as these registers are indexed by
  active engine id. reset_eng_bitmask also ends up with the wrong value
  for the active engine ids to be reset.

  Details (runlists serving engine ids on gv11b):
    runlist id 0: gr = 0, grcopy0 = 2, grcopy1 = 3
    runlist id 1: async ce = 1

  Incorrect values:
    init_runlist:705 [DBG] runlist 0 : eng bitmask 7 (eng 0, 1, 2)
    init_runlist:705 [DBG] runlist 1 : eng bitmask 8 (eng 3)

  Fix:
  Save a bitmask of active engine ids in runlist info.

  Correct values:
    init_runlist:705 [DBG] runlist 0 : eng bitmask d (eng 0, 2, 3)
    init_runlist:705 [DBG] runlist 1 : eng bitmask 2 (eng 1)

  Bug 200277163
  Bug 1945121

  Change-Id: Ia299aa0881c4a258080bb0daa3a542fef0d94e4f
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1584066
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
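  A self-contained illustration of the fix, using simplified stand-in types
  rather than the driver's structs: the mask must be built from active
  engine ids, not from engine-table indices.

      #include <stdint.h>

      struct demo_engine_info {
              uint32_t active_engine_id;
              uint32_t runlist_id;
      };

      static uint32_t runlist_eng_bitmask(const struct demo_engine_info *eng,
                                          unsigned int num_engines,
                                          uint32_t runlist_id)
      {
              uint32_t mask = 0;
              unsigned int i;

              for (i = 0; i < num_engines; i++) {
                      if (eng[i].runlist_id != runlist_id)
                              continue;
                      /* was: mask |= 1U << i;  (engine-table index, wrong) */
                      mask |= 1U << eng[i].active_engine_id;
              }
              return mask;
      }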
* gpu: nvgpu: halt gr pipe & gr_reset do not work on fmodel
  Seema Khowala, 2017-11-01

  Do not halt the GR pipe, as it will hang the simulator. Also, during
  ch/tsg teardown, gr_reset without halting the GR pipe crashes the
  simulator.

  Bug 1958308

  Change-Id: I4036b4a4999932f05a0f292d4fc51de21d3a030a
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1566575
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Move ctxsw_trace_gk20a.c to common/linux
  Alex Waterman, 2017-10-29

  Migrate ctxsw_trace_gk20a.c to common/linux/ctxsw_trace.c. This has been
  done because the ctxsw tracing code is currently too tightly tied to the
  Linux OS due to usage of a couple of system calls:

    - poll()
    - mmap()

  and general Linux driver framework code. As a result, pulling the logic
  out of the FECS tracing code is simply too large a scope for the time
  being. Instead the code was just copied as much as possible.

  The HAL ops for the FECS code were hidden behind the FECS tracing config
  so that vm_area_struct is not used when QNX does not define said config.
  All other non-HAL functions called by the FECS ctxsw tracing code have
  now also been hidden by this config. This is not pretty, but for the
  time being it seems like the way to go.

  JIRA NVGPU-287

  Change-Id: Ib880ab237f4abd330dc66998692c86c4507149c2
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1586547
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: skip channel status verification if TSG has timed out
  Deepak Nibade, 2017-10-29

  In gk20a_fifo_tsg_unbind_channel(), we always verify channel status
  before unbinding a channel from the TSG. But if the TSG has already
  timed out we never re-enable it, so it does not make sense to inspect
  channel status anyway.

  So skip channel status verification in case the TSG has timed out.

  Bug 200327095

  Change-Id: Iccf601271290643c235c3f2656201549210a6886
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1586015
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: don't re-enable TSG if timed out
  Deepak Nibade, 2017-10-26

  In gk20a_fifo_tsg_unbind_channel(), we disable/preempt the TSG, unbind
  one channel from the TSG, and then re-enable the rest of the channels in
  the TSG. But it is possible that the TSG has already timed out due to
  some error and is already disabled. If we re-enable all channels in such
  a case, it can cause random issues right after re-enabling the faulted
  channel.

  Hence do not re-enable the TSG if it has timed out.

  Since we disable all channels of a TSG if one channel encounters a fatal
  error, it is safe to assume that the TSG has timed out if one channel
  has timed out.

  Bug 1958308
  Bug 200327095

  Change-Id: I958ca6a2b408ff1338f2e551a79c072f1e203eda
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1585421
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use full system barrier in BAR1 test
  Thomas Fleury, 2017-10-25

  The BAR1 test could occasionally fail when doing a CPU write through
  userd and then reading it back through BAR1. This is because
  nvgpu_smp_mb() only guarantees ordering between cores. Replace it with
  nvgpu_mb() to ensure the write will be visible to all bus masters in the
  system.

  JIRA EVLR-1959
  Bug 200352099

  Change-Id: Id002e73d135e0805fca2f153a6de77e210a7b226
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1582928
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Cleanup generic MM code in gk20a/mm_gk20a.c
  Alex Waterman, 2017-10-24

  Move much of the remaining generic MM code to a new common location:
  common/mm/mm.c. Also add a corresponding <nvgpu/mm.h> header. This
  mostly consists of init and cleanup code to handle the common MM data
  structures like the VIDMEM code, address spaces for various engines,
  etc.

  A few more in-depth changes were made as well.

  1. alloc_inst_block() has been added to the MM HAL. This used to be
     defined directly in the gk20a code, but it used a register. As a
     result, if this register hypothetically changes in the future, it
     would need to become a HAL anyway. This patch preempts that and for
     now just defines all HALs to use the gk20a version.

  2. Rename as much as possible: global functions are, for the most part,
     prepended with nvgpu (there are a few exceptions which I have yet to
     decide what to do with). Functions that are static are renamed to be
     as consistent with their functionality as possible, since in some
     cases function effect and function name have diverged.

  JIRA NVGPU-30

  Change-Id: Ic948f1ecc2f7976eba4bb7169a44b7226bb7c0b5
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1574499
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Move function to query rl interleave
  Terje Bergstrom, 2017-10-23

  The function to query the interleave name depends on an IOCTL flag
  definition. Move that code to fifo_gk20a.c to remove the Linux
  dependency from the header.

  Change-Id: I6d6a80e550bf30973b2be09febc2347890b77d25
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1577249
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Use nvgpu_rwsem as TSG channel lock
  Terje Bergstrom, 2017-10-17

  Use the abstract nvgpu_rwsem as the TSG channel list lock instead of the
  Linux-specific rw_semaphore.

  JIRA NVGPU-259

  Change-Id: I41a38b29d4651838b1962d69f102af1384e12cb6
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1579935
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: enhance pbdma debug info
  seshendra Gadagottu, 2017-10-13

  Enhanced pbdma error output to print the pbdma interrupt error.

  Generated the following hw definitions to dump relevant data:
    pbdma_gp_shadow_0_r
    pbdma_gp_shadow_1_r

  Updated gk20a_dump_pbdma_status to dump this additional info:
    pbdma_gp_put_r
    pbdma_gp_get_r
    pbdma_gp_shadow_0_r
    pbdma_gp_shadow_1_r

  Bug 2003671

  Change-Id: Iaa75d936e00470a2b8d1151f60dbeb741b3f9bce
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1572182
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
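  A sketch of the kind of dump this adds; the register accessors are the
  ones named in the commit, while the logging call and exact format string
  are assumptions:

      /* Sketch: dump GP pointers plus the new shadow registers for one PBDMA. */
      nvgpu_err(g, "pbdma %u: gp_put 0x%08x gp_get 0x%08x "
                   "gp_shadow_0 0x%08x gp_shadow_1 0x%08x",
                pbdma_id,
                gk20a_readl(g, pbdma_gp_put_r(pbdma_id)),
                gk20a_readl(g, pbdma_gp_get_r(pbdma_id)),
                gk20a_readl(g, pbdma_gp_shadow_0_r(pbdma_id)),
                gk20a_readl(g, pbdma_gp_shadow_1_r(pbdma_id)));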
* gpu: nvgpu: ctxsw_trace_tsg/channel_reset call order
  Seema Khowala, 2017-10-13

  For a non-fake MMU fault, both the tsg and ch pointers could be valid.
  If the tsg pointer is non-NULL, issue ctxsw_trace for the TSG instead of
  the channel only.

  Change-Id: I161c40e8d43c7ae4d953ef4768ad75d4e993c87e
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1577915
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: WAR: unbind_channel_verify_status ret val
  Seema Khowala, 2017-10-13

  Do not return an error if the channel to be removed has NEXT set. This
  is a WAR until a proper fix is identified and implemented.

  Bug 200327095

  Change-Id: Ia77f3b834e8e577ac2dad8281f1dd562079adcef
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1577133
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: fix channel status verify sequence
  Deepak Nibade, 2017-10-13

  While unbinding a channel from a TSG, we first disable all the channels,
  then examine the status of the channel being removed in
  gk20a_fifo_tsg_unbind_channel_verify_status(), and if this API fails we
  re-enable all the channels and kill the whole TSG.

  In gk20a_fifo_tsg_unbind_channel_verify_status() we first check the
  ctx_reload and fault status and then check the NEXT status. If the
  channel has NEXT set we bail out. But since we have already changed the
  TSG ctx_reload status, re-enabling all channels in the TSG might cause
  issues.

  Hence fix this by correcting the sequence so that we first ensure that
  NEXT is not set on the channel and only then alter the status.

  Bug 200327095

  Change-Id: I4f0786bc507fad5462d4cdd8d0ca91ea611ee3b5
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1575905
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: report GR_SEMAPHORE_TIMEOUT on PBDMA sema timeout
  Konsta Holtta, 2017-10-10

  As with GR semaphore acquires that time out, report
  NVGPU_CHANNEL_GR_SEMAPHORE_TIMEOUT to userspace in the error notifier
  also when a semaphore acquire timeout interrupt is received from PBDMA.
  This timeout is used when the kernel watchdog timer is enabled.

  Bug 1782480

  Change-Id: I1ceb8632548c5e89febb2b80a5850116a2d4b670
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1574293
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
* gpu: nvgpu: kill TSG if channel has NEXT set while closing
  Deepak Nibade, 2017-10-04

  Currently, if a channel has the NEXT bit set while closing, we just
  print an error and continue the channel unbind sequence from the TSG.
  But since a channel with NEXT set is active, killing it can potentially
  corrupt the TSG context and cause unpredictable errors on the remaining
  channels/TSG.

  Hence fix this by killing the whole TSG context if the channel being
  closed has the NEXT bit set: if gk20a_fifo_tsg_unbind_channel() returns
  an error, kill the TSG; otherwise continue with the channel unbind
  sequence.

  Bug 200327095

  Change-Id: I2abf1a3db8ba6f105b6ca86e78006c7b2a7726cc
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1568566
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
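  A sketch of the close-path logic described above; gk20a_fifo_tsg_unbind_channel()
  is named in the commit, while the recovery call, its arguments, and the
  error print are assumptions:

      /* Sketch: if the unbind fails (e.g. NEXT is still set on the channel),
       * tear down the whole TSG instead of silently continuing. */
      if (gk20a_fifo_tsg_unbind_channel(tsg, ch) != 0) {
              nvgpu_err(g, "unbind failed, killing TSG %d", tsg->tsgid);
              gk20a_fifo_recover_tsg(g, tsg->tsgid, true /* verbose */);
      }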
* gpu: nvgpu: verify channel status while closing per-platform
  Deepak Nibade, 2017-10-04

  We currently call gk20a_fifo_tsg_unbind_channel_verify_status() to
  verify channel status while unbinding a channel from a TSG during close.
  Add support to do this verification per-platform, and keep it disabled
  for vgpu platforms.

  Bug 200327095

  Change-Id: I19fab41c74d10d528d22bd9b3982a4ed73c3b4ca
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1572368
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: nvgpu_pmu_disable_elpg() status check
  Mahantesh Kumbar, 2017-09-27

  - Added a status check for the nvgpu_pmu_disable_elpg() return value and
    print error information upon failure.
  - The CIDs below are due to a missing status check of the
    nvgpu_pmu_disable_elpg() return value, so this CL helps to fix them:
      2624546
      2624547
      2624548

  Bug 200291879

  Change-Id: I263fc6bc9e2667af478bfd7160fe205167556f99
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1565998
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Change license for common files to MIT
  Terje Bergstrom, 2017-09-26

  Change the license of OS-independent source code files to MIT.

  JIRA NVGPU-218

  Change-Id: I1474065f4b552112786974a16cdf076c5179540e
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1565880
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: fix verbose return value
  Seema Khowala, 2017-09-18

  The verbose return value does not take all the channels into account.
  Fix this by ORing the verbose values for all channels.

  Change-Id: Id77c74458067c72792422aa69be1626c3d164e1c
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1549645
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
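  A self-contained illustration of the fix, with stand-in types: accumulate
  the per-channel verbose flags instead of keeping only the last one.

      #include <stdbool.h>

      static bool tsg_any_channel_verbose(bool (*ch_verbose)(int chid),
                                          const int *chids, int num_ch)
      {
              bool verbose = false;
              int i;

              for (i = 0; i < num_ch; i++)
                      verbose |= ch_verbose(chids[i]); /* was: verbose = ... */

              return verbose;
      }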
* gpu: nvgpu: fix channel unbind sequence from TSG
  Deepak Nibade, 2017-09-15

  We currently remove a channel from the TSG list and disable all the
  channels in the TSG while removing a channel from the TSG. With this
  sequence, if any one channel in the TSG is closed, the rest of the
  channels are marked as timed out and cannot be used anymore.

  Fix this sequence as below so that a channel can be removed from an
  active TSG and the rest of the channels can still be used:
  - disable all channels of the TSG
  - preempt the TSG
  - check if CTX_RELOAD is set, if support is available; if CTX_RELOAD is
    set on the channel, it should be moved to some other channel
  - check if FAULTED is set, if support is available
  - if NEXT is set on the channel, then the channel is still active; print
    an error in this case for the time being, until properly handled
  - remove the channel from the runlist
  - remove the channel from the TSG list
  - re-enable the rest of the channels in the TSG
  - clean up the channel (same as for regular channels)

  Add the fifo operations below to support checking channel status:
    g->ops.fifo.tsg_verify_status_ctx_reload
    g->ops.fifo.tsg_verify_status_faulted

  Define ops.fifo.tsg_verify_status_ctx_reload for gm20b/gp10b/gp106 as
  gm20b_fifo_tsg_verify_status_ctx_reload(). This API checks whether the
  channel to be released has CTX_RELOAD set; if yes, CTX_RELOAD needs to
  be moved to some other channel in the TSG.

  Remove static from channel_gk20a_update_runlist() and export it.

  Bug 200327095

  Change-Id: I0dd4be7c7e0b9b759389ec12c5a148a4b919d3e2
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1560637
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
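  A compressed sketch of the sequence listed above; the op and helper names
  are taken from this commit message, but the signatures, locking, and
  error handling are assumptions rather than the actual code:

      static int demo_tsg_unbind_channel(struct tsg_gk20a *tsg,
                                         struct channel_gk20a *ch)
      {
              struct gk20a *g = ch->g;
              int err;

              g->ops.fifo.disable_tsg(tsg);                 /* disable all channels */
              err = g->ops.fifo.preempt_tsg(g, tsg->tsgid); /* preempt the TSG      */
              if (err)
                      return err;

              /* move CTX_RELOAD / check FAULTED where the chip supports it */
              if (g->ops.fifo.tsg_verify_status_ctx_reload)
                      g->ops.fifo.tsg_verify_status_ctx_reload(ch);
              if (g->ops.fifo.tsg_verify_status_faulted)
                      g->ops.fifo.tsg_verify_status_faulted(ch);

              channel_gk20a_update_runlist(ch, false); /* drop from runlist        */
              /* remove ch from tsg->ch_list under the TSG channel-list lock */
              g->ops.fifo.enable_tsg(tsg);             /* re-enable remaining chs  */
              return 0;
      }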
* gpu: nvgpu: fix TSG enable sequence
  Deepak Nibade, 2017-09-15

  Due to a h/w bug in Maxwell and Pascal, we first need to enable all
  channels with NEXT and CTX_RELOAD set in a TSG, and only then enable the
  rest of the channels. Add this sequence to gk20a_tsg_enable().

  Add new APIs to enable/disable scheduling of a TSG runlist:
    gk20a_fifo_enable_tsg_sched()
    gk20a_fifo_disable_tsg_sched()

  Add new APIs to check if a channel has NEXT or CTX_RELOAD set:
    gk20a_fifo_channel_status_is_next()
    gk20a_fifo_channel_status_is_ctx_reload()

  Bug 1739362

  Change-Id: I4891cbd7f22ebc1e0bf32c52801002cdc259dbe1
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1560636
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
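  A sketch of the two-pass enable; the helpers are the ones named in the
  commit, while the channel-enable op, list iteration, and member names are
  assumptions:

      /* Pass 1: channels with NEXT or CTX_RELOAD must be enabled first
       * (Maxwell/Pascal h/w bug); pass 2 enables the remaining channels. */
      static void demo_tsg_enable(struct gk20a *g, struct tsg_gk20a *tsg)
      {
              struct channel_gk20a *ch;

              gk20a_fifo_disable_tsg_sched(g, tsg);

              nvgpu_list_for_each_entry(ch, &tsg->ch_list, channel_gk20a, ch_entry)
                      if (gk20a_fifo_channel_status_is_next(g, ch->chid) ||
                          gk20a_fifo_channel_status_is_ctx_reload(g, ch->chid))
                              g->ops.fifo.enable_channel(ch);

              nvgpu_list_for_each_entry(ch, &tsg->ch_list, channel_gk20a, ch_entry)
                      if (!gk20a_fifo_channel_status_is_next(g, ch->chid) &&
                          !gk20a_fifo_channel_status_is_ctx_reload(g, ch->chid))
                              g->ops.fifo.enable_channel(ch);

              gk20a_fifo_enable_tsg_sched(g, tsg);
      }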
* gpu: nvgpu: support platform specific TSG enable/disable
  Deepak Nibade, 2017-09-15

  Add platform-specific operations to enable/disable a TSG and use them
  instead of directly calling the enable/disable APIs. For
  gm20b/gp106/gp10b we continue to use gk20a_enable_tsg() and
  gk20a_disable_tsg() as the platform-specific operations.

  Bug 1739362

  Change-Id: I2dd0f38c8303757e8c7a47d8da0e30a790e514f0
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1560635
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST
  David Li, 2017-08-31

  NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST causes host to expire the
  current timeslice and reschedule from the front of the runlist. This can
  be used with NVGPU_RUNLIST_INTERLEAVE_LEVEL_HIGH to make a channel start
  sooner after submit, rather than waiting for natural timeslice
  expiration or for the currently running channel to block or finish.

  Bug 1968813

  Change-Id: I632e87c5f583a09ec8bf521dc73f595150abebb0
  Signed-off-by: David Li <davli@nvidia.com>
  Reviewed-on: http://git-master/r/#/c/1537198
  Reviewed-on: https://git-master.nvidia.com/r/1537198
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
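  A userspace-side sketch of how the flag is meant to be used; the flag
  name is from the commit, while the uapi struct, ioctl name, and header
  path follow the usual nvgpu naming and are assumptions here:

      #include <sys/ioctl.h>
      #include <linux/nvgpu.h>   /* uapi header; exact install path may differ */

      /* Ask host to expire the current timeslice and reschedule from the
       * front of the runlist as part of this submit. */
      static int submit_with_reschedule(int ch_fd,
                                        struct nvgpu_submit_gpfifo_args *args)
      {
              args->flags |= NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST;
              return ioctl(ch_fd, NVGPU_IOCTL_CHANNEL_SUBMIT_GPFIFO, args);
      }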
* gpu: nvgpu: Nvgpu abstraction for linux barriers.
  Debarshi Dutta, 2017-08-22

  Construct wrapper nvgpu_* methods to replace mb, rmb, wmb, smp_mb,
  smp_rmb, smp_wmb, read_barrier_depends and smp_read_barrier_depends.

  NVGPU-122

  Change-Id: I8d24dd70fef5cb0fadaacc15f3ab11531667a0df
  Signed-off-by: Debarshi <ddutta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1541199
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
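  A sketch of the Linux side of such wrappers; the nvgpu_* names are from
  the commit, and the one-to-one mapping below is the obvious choice but is
  an assumption, not the actual header:

      #include <asm/barrier.h>

      /* Thin wrappers so common code never spells a Linux barrier directly. */
      #define nvgpu_mb()        mb()
      #define nvgpu_rmb()       rmb()
      #define nvgpu_wmb()       wmb()
      #define nvgpu_smp_mb()    smp_mb()
      #define nvgpu_smp_rmb()   smp_rmb()
      #define nvgpu_smp_wmb()   smp_wmb()
      #define nvgpu_read_barrier_depends()     read_barrier_depends()
      #define nvgpu_smp_read_barrier_depends() smp_read_barrier_depends()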
* gpu: nvgpu: Add wrapper over atomic_t and atomic64_t
  Debarshi Dutta, 2017-08-17

  - added wrapper structs nvgpu_atomic_t and nvgpu_atomic64_t over
    atomic_t and atomic64_t
  - added nvgpu_atomic_* and nvgpu_atomic64_* APIs to access the above
    wrappers

  JIRA NVGPU-121

  Change-Id: I61667bb0a84c2fc475365abb79bffb42b8b4786a
  Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1533044
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
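  A sketch of the wrapper pattern for the 32-bit case; the type name is from
  the commit, while the field name and accessor bodies are assumptions:

      #include <linux/atomic.h>

      typedef struct nvgpu_atomic {
              atomic_t atomic_var;     /* wrapped Linux atomic */
      } nvgpu_atomic_t;

      static inline void nvgpu_atomic_set(nvgpu_atomic_t *v, int i)
      {
              atomic_set(&v->atomic_var, i);
      }

      static inline int nvgpu_atomic_read(nvgpu_atomic_t *v)
      {
              return atomic_read(&v->atomic_var);
      }

      static inline int nvgpu_atomic_inc_return(nvgpu_atomic_t *v)
      {
              return atomic_inc_return(&v->atomic_var);
      }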
* gpu: nvgpu: fix pbdma_id searching when init engine info
  Richard Zhao, 2017-08-11

  The searching loop should break once it finds a match.

  Jira VFND-3797

  Change-Id: I43ef553a3e90afb00ee9a4df7d269b7c6616b18e
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1535304
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
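  A self-contained illustration of the fix (the per-PBDMA engine-bitmask
  semantics are simplified here): stop scanning once a match is found.

      #include <stdint.h>

      static int find_pbdma_for_engine(const uint32_t *pbdma_map,
                                       unsigned int num_pbdma,
                                       uint32_t engine_bit)
      {
              int pbdma_id = -1;
              unsigned int i;

              for (i = 0; i < num_pbdma; i++) {
                      if (pbdma_map[i] & engine_bit) {
                              pbdma_id = (int)i;
                              break;   /* the missing break */
                      }
              }
              return pbdma_id;
      }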
* gpu: nvgpu: Remove mm.get_iova_addr
  Alex Waterman, 2017-08-04

  Remove the mm.get_iova_addr() HAL and replace it with a new HAL called
  mm.gpu_phys_addr(). This new HAL provides the real phys address that
  should be passed to the GPU from a physical address obtained from a
  scatter list. It also provides a mechanism by which the HAL code can add
  extra bits to a GPU physical address based on the attributes passed in.
  This is necessary during GMMU page table programming.

  Also remove the flags argument from the various address functions. This
  flag was used for adding an IO coherence bit to the GPU physical
  address, which is not supported.

  JIRA NVGPU-30

  Change-Id: I69af5b1c6bd905c4077c26c098fac101c6b41a33
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1530864
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add sm_debugger_attached gr ops
  Seema Khowala, 2017-07-05

  This is required to support t19x SM register address changes.

  JIRA GPUT19X-75

  Change-Id: I7f961147e0e6464a71e240487f7bc964b0544e5d
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master/r/1512213
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Remove gk20a support
  Terje Bergstrom, 2017-06-30

  Remove gk20a support. Leave only the gk20a code which is reused by other
  GPUs.

  JIRA NVGPU-38

  Change-Id: I3d5f2bc9f71cd9f161e64436561a5eadd5786a3b
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master/r/1507927
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: rename hw_chid to chid
  Richard Zhao, 2017-06-30

  hw_chid is a relative id for vgpu; for native it is the same as the hw
  id. Rename it to chid to avoid confusion.

  Jira VFND-3796

  Change-Id: I1c7924da1757330ace715a7c52ac61ec9dc7065c
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master/r/1509530
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add fifo ops for handling pbdma_intr_1
  Seema Khowala, 2017-06-27

  This is needed to handle the new pbdma intr_1 in t19x.

  JIRA GPUT19X-47

  Change-Id: If75de0b57f3f18420aff07ee99feaad67ac63752
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master/r/1329373
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: hold power ref for deterministic channels
  Konsta Holtta, 2017-06-14

  To support deterministic channels even on platforms where railgating is
  supported, have each deterministic-marked channel hold a power reference
  during its lifetime, and skip taking power refs for jobs in the submit
  path for those. Previously, railgating blocked deterministic submits in
  general, because the gk20a_busy()/gk20a_idle() calls in the submit path
  may take time and, more significantly, the GPU may need turning on,
  which takes a nondeterministic and long amount of time.

  As an exception, gk20a_do_idle() can still block deterministic submits
  until gk20a_do_unidle() is called. Add a rwsem to guard this. VPR resize
  needs do_idle, which conflicts with the deterministic channels'
  requirement to keep the GPU on. This is documented in the ioctl header
  now.

  Make NVGPU_GPU_FLAGS_SUPPORT_DETERMINISTIC_SUBMIT_NO_JOBTRACKING always
  set in the gpu characteristics now that it is supported. The only thing
  left blocking NVGPU_GPU_FLAGS_SUPPORT_DETERMINISTIC_SUBMIT_FULL is the
  sync framework.

  Make the channel debug dump show which channels are deterministic.

  Bug 200291300
  Jira NVGPU-70

  Change-Id: I47b6f3a8517cd6e4255f6ca2855e3dd912e4f5f3
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/1483038
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: moved pg out from pmu_gk20a.c/h
  Mahantesh Kumbar, 2017-06-13

  - Moved PG-related code to pmu_pg.c under the common/pmu folder:
      PG state machine support methods
      PG ACK handlers
      AELPG methods
      PG enable/disable methods
  - Prepended nvgpu_ to elpg/aelpg global methods, replacing gk20a_.

  JIRA NVGPU-97

  Change-Id: I2148a69ff86b5c5d43c521ff6e241db84afafd82
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: http://git-master/r/1498363
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: reorganize PMU IPC
  Mahantesh Kumbar, 2017-06-09

  - Moved PMU IPC related code to the
    drivers/gpu/nvgpu/common/pmu/pmu_ipc.c file.
  - Below is the list of what was moved:
      seq
      mutex
      queue
      cmd/msg post & process
      event handling

  NVGPU-56

  Change-Id: Ic380faa27de4e5574d5b22500125e86027fd4b5d
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: http://git-master/r/1478167
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: use nvgpu specific nvhost APIs
  Deepak Nibade, 2017-06-08

  Remove use of the Linux-specific header files <linux/nvhost.h> and
  <linux/nvhost_ioctl.h> and use the nvgpu-specific header file
  <nvgpu/nvhost.h> instead. This is needed to remove all Linux
  dependencies from the nvgpu driver.

  Replace all nvhost_*() calls with nvgpu_nvhost_*() calls from the new
  nvgpu library.

  Remove the platform device pointer host1x_dev from struct gk20a and add
  struct nvgpu_nvhost_dev instead.

  Jira NVGPU-29

  Change-Id: Ia7af70602cfc16f9ccc380752538c05a9cbb8a67
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1489726
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
* gpu: nvgpu: Remove extraneous VM init/deinit APIs
  Alex Waterman, 2017-06-06

  Support only VM pointers and ref-counting for maintaining VMs. This
  dramatically reduces the complexity of the APIs, avoids the API abuse
  that has existed, and ensures that future VM usage is consistent with
  current usage.

  Also remove the combined VM free/instance block deletion. Any place
  where this was done is now replaced with an explicit free of the
  instance block and a nvgpu_vm_put().

  JIRA NVGPU-12
  JIRA NVGPU-30

  Change-Id: Ib73e8d574ecc9abf6dad0b40a2c5795d6396cc8c
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1480227
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>