path: root/drivers/gpu/nvgpu/gk20a

* gpu: nvgpu: runlist_lock released before preempt timeout recovery (Seema Khowala, 2018-05-17)

  Release runlist_lock and then initiate recovery if preempt timed out. Also do not issue preempt if ch, tsg or runlist id is invalid. tsgid could be invalid for the call trace below:
  gk20a_prepare_poweroff -> gk20a_channel_suspend -> *_fifo_preempt_channel -> *_fifo_preempt_tsg

  Bug 2065990
  Bug 2043838

  Change-Id: Ia1e3c134f06743e1258254a4a6f7256831706185
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1662656
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add HAL to insert semaphore commands (Deepak Nibade, 2018-05-16)

  Add the new HALs below:
  - gops.fifo.add_sema_cmd() to insert HOST semaphore acquire/release methods
  - gops.fifo.get_sema_wait_cmd_size() to get the size of the acquire command buffer
  - gops.fifo.get_sema_incr_cmd_size() to get the size of the release command buffer

  Separate out a new API gk20a_fifo_add_sema_cmd() to implement the semaphore acquire/release sequence and set it to gops.fifo.add_sema_cmd(). Add gk20a_fifo_get_sema_wait_cmd_size() and gk20a_fifo_get_sema_incr_cmd_size() to return the respective command buffer sizes.

  Jira NVGPUT-16

  Change-Id: Ia81a50921a6a56ebc237f2f90b137268aaa2d749
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1704490
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

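The HAL entries named in the commit above are per-chip function pointers gathered in an ops table and filled in at chip init. A minimal, self-contained sketch of that pattern (hypothetical names, sizes and signatures; not the actual nvgpu definitions):

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for the per-chip HAL entries described above. */
    struct fifo_hal_ops {
            unsigned int (*get_sema_wait_cmd_size)(void);
            unsigned int (*get_sema_incr_cmd_size)(void);
            void (*add_sema_cmd)(bool acquire); /* acquire vs. release sequence */
    };

    static unsigned int gk20a_sema_wait_cmd_size(void) { return 8; }  /* illustrative */
    static unsigned int gk20a_sema_incr_cmd_size(void) { return 10; } /* illustrative */

    static void gk20a_add_sema_cmd(bool acquire)
    {
            /* The real code emits HOST semaphore methods into a command buffer. */
            printf("emit semaphore %s sequence\n", acquire ? "acquire" : "release");
    }

    /* Chip init fills the table; callers only go through the ops pointer. */
    static const struct fifo_hal_ops gk20a_fifo_ops = {
            .get_sema_wait_cmd_size = gk20a_sema_wait_cmd_size,
            .get_sema_incr_cmd_size = gk20a_sema_incr_cmd_size,
            .add_sema_cmd = gk20a_add_sema_cmd,
    };

    int main(void)
    {
            const struct fifo_hal_ops *ops = &gk20a_fifo_ops;

            printf("wait cmd size: %u words\n", ops->get_sema_wait_cmd_size());
            ops->add_sema_cmd(true);
            return 0;
    }

Another chip would install its own functions in the same table, which is what lets common code stay chip-agnostic.
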
* gpu: nvgpu: vf inject changes (Vaikundanathan S, 2018-05-14)

  - Added vf change inject support for gv10x
  - Updated clk_pmu_vf_inject() to fill required data for pascal or volta vf change inject support
  - Added new ctrl clk interface for gv10x clk domain list
  - Added pmu interface for gv10x clk domain list & vf change inject request
  - Modified clk cmd, msg & RPC id's to match with chips_a_23609936 branch

  Bug 200399373

  Change-Id: Ib9dc10073386f63bdfd92110c7ec3e09b1c484ce
  Signed-off-by: Vaikundanathan S <vaikuns@nvidia.com>
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1700746
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: move sync_gk20a under common/linux directory (Debarshi Dutta, 2018-05-14)

  sync_gk20a.* files are no longer used by core code and only invoked from linux specific implementations of the OS_FENCE framework which are under the common/linux directory. Hence, sync_gk20a.* files are also moved under common/linux.

  JIRA NVGPU-66

  Change-Id: If623524611373d2da39b63cfb3c1e40089bf8d22
  Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1712900
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Fix Gpu sysfs access to Fmax@Vmin (Alex Frid, 2018-05-11)

  Currently gpu sysfs retrieves Fmax@Vmin by a direct call into the Tegra DVFS driver, which introduces compile time dependencies on CONFIG_TEGRA_DVFS. In addition, an incorrect clock is used for DVFS information access.

  Re-factored the sysfs node to use the generic GPU clock operation for the Fmax@Vmin read. This fixes a bug in target clock selection and allows removing the sysfs dependency on CONFIG_TEGRA_DVFS.

  Updated the nvgpu_linux_get_fmax_at_vmin_safe operation itself so it can be called on platforms that do not support Tegra DVFS, although 0 will still be returned as Fmax@Vmin on such platforms.

  Bug 2045903

  Change-Id: I32cce25320df026288c82458c913b0cde9ad4f72
  Signed-off-by: Alex Frid <afrid@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1710924
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: poll watchdog status actively (Konsta Holtta, 2018-05-11)

  Read GP_GET and GET from hardware every time the poll timer expires instead of when the watchdog timer expires. Restart the watchdog timer if the get pointers have increased since the previous read. This way stuck channels are detected quicker. Previously it could have taken at most twice the watchdog timeout limit for a stuck channel to get recovered; with this change, a coarse sliding window is used. The polling period is still 100 ms. The difference is illustrated in the following diagram:

    time 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0
    get  a b b b b b b b b b b b b b b b b b b b b
    prev - n n n n n n n n n A n n n n n n n n n S
    next - A s s s s s s s s s S

  "time" represents wall time in polling units; 0 is submit time. For simplicity, watchdog timeout is ten units. "get" is the GP_GET that advances a little from a and then gets stuck at b. "prev" is the previous behaviour, "next" is after this patch. "A" is when the channel is detected as advanced, and "S" when it's found stuck and recovered; small "s" is when it's found stuck but when the time limit has not yet expired and "n" is when the hw state is not read.

  Bug 1700277
  Bug 1982826

  Change-Id: Ie2921920d5396cee652729c6a7162b740d7a1f06
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1710554
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

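A self-contained sketch of the sliding-window idea from the diagram above (hypothetical types and timing; the real driver reads GP_GET/GET from hardware under its own locks and timers):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct watchdog {
            uint32_t last_get;       /* GP_GET observed at the last poll */
            unsigned int elapsed_ms; /* time since the last observed progress */
            unsigned int limit_ms;   /* watchdog timeout */
    };

    /* Called every polling period (e.g. 100 ms). Returns true if the channel
     * should be treated as stuck and recovered. */
    static bool watchdog_poll(struct watchdog *wd, uint32_t hw_get,
                              unsigned int poll_ms)
    {
            if (hw_get != wd->last_get) {
                    /* Progress since the last poll: restart the timeout window. */
                    wd->last_get = hw_get;
                    wd->elapsed_ms = 0;
                    return false;
            }
            wd->elapsed_ms += poll_ms;
            return wd->elapsed_ms >= wd->limit_ms;
    }

    int main(void)
    {
            struct watchdog wd = { .last_get = 0, .elapsed_ms = 0, .limit_ms = 1000 };
            uint32_t get = 0;

            for (int t = 0; t < 15; t++) {
                    if (t == 1)
                            get = 1; /* advances once, then gets stuck */
                    if (watchdog_poll(&wd, get, 100))
                            printf("t=%d00ms: stuck, recover\n", t);
            }
            return 0;
    }

Because progress is checked at every poll rather than only when the watchdog fires, a channel that stalls right after a progress check is caught within roughly one timeout window instead of up to two.
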
* gpu: nvgpu: dump fecs method data and push adr (Seema Khowala, 2018-05-10)

  Dump fecs method data and adr if ucode times out or errors out. This is good to have for debugging.

  Change-Id: I79c4bc3bc30cfd09f273f4eb6b53863653227ecd
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1707761
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: remove access to mc_enable_pb_r() (Deepak Nibade, 2018-05-10)

  We don't need to configure the mc_enable_pb_r() register in any of the supported chips, so remove access to this register.

  Jira NVGPUT-52

  Change-Id: I8a7a524367ce7953f926143242c6d63bc8fd5ed1
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1711245
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: remove sync_fence dependencies from fence_gk20a (Debarshi Dutta, 2018-05-10)

  Replaced all instances of sync_fence in gk20a_fence* code with nvgpu_os_fence. Added the API install_fence for the nvgpu_os_fence abstraction. The sync_fence mechanism and its dependencies are completely removed from the fence_gk20a methods.

  Due to the recent os_fence changes and the changes to fence_gk20a, we can finally get rid of all the CONFIG_SYNCS present in the submit path.

  JIRA NVGPU-66

  Change-Id: I3551dab04b93b1e94db83fc102a41872be89e9ed
  Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1701245
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: adapt gk20a_channel_syncpt to use os_fence (Debarshi Dutta, 2018-05-10)

  This patch adapts gk20a_channel_syncpt to use os_fence for post fence as well as pre-fence (wait) use cases.

  Jira NVGPU-66

  Change-Id: I49627d1f88d52a53511a02f5de60fed6df8350de
  Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1676631
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: create a wrapper over sync_fences (Debarshi Dutta, 2018-05-10)

  This patch constructs an abstraction to hide the sync_fence functionality from the common code. struct nvgpu_os_fence acts as an abstraction for struct sync_fence. struct nvgpu_os_fence consists of an ops structure named nvgpu_os_fence_ops which contains an API to do pushbuffer programming to generate wait commands for the fence.

  The current implementation of nvgpu only allows for the wait method on a sync_fence which was generated using a similar backend (i.e. either Nvhost Syncpoints or Semaphores). In this patch, a generic API is introduced which will decide the type of the underlying implementation of the struct nvgpu_os_fence at runtime and run the corresponding wait implementation on it.

  This patch changes the channel_sync_gk20a's semaphore specific implementation to use the abstract API. A subsequent patch will make the changes for the nvhost_syncpoint based implementations as well.

  JIRA NVGPU-66

  Change-Id: If6675bfde5885c3d15d2ca380bb6c7c0e240e734
  Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1667218
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

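A minimal sketch of the kind of abstraction the commit above describes (names and fields here are illustrative, not the actual nvgpu_os_fence definition): common code holds an opaque handle plus an ops table, and whichever backend is chosen at runtime fills in the ops.

    #include <stdio.h>

    struct os_fence;

    /* The ops table hides the OS fence backend from common code. */
    struct os_fence_ops {
            int (*program_waitcmd)(struct os_fence *f); /* emit wait methods */
            void (*drop_ref)(struct os_fence *f);
    };

    struct os_fence {
            const struct os_fence_ops *ops;
            void *priv; /* backend object, e.g. a sync_fence on Linux */
    };

    static int sema_program_waitcmd(struct os_fence *f)
    {
            printf("emit semaphore wait for backend object %p\n", f->priv);
            return 0;
    }

    static void sema_drop_ref(struct os_fence *f)
    {
            f->priv = NULL;
    }

    static const struct os_fence_ops sema_fence_ops = {
            .program_waitcmd = sema_program_waitcmd,
            .drop_ref = sema_drop_ref,
    };

    int main(void)
    {
            int backend_obj = 42;
            struct os_fence f = { .ops = &sema_fence_ops, .priv = &backend_obj };

            /* Common code never looks inside priv; it only calls through ops. */
            f.ops->program_waitcmd(&f);
            f.ops->drop_ref(&f);
            return 0;
    }

A syncpoint-backed fence would install a different ops table behind the same struct, which is what lets the wait path stay backend-agnostic.
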
* gpu: nvgpu: support TSG IOCTL to unbind channel (Deepak Nibade, 2018-05-10)

  So far we do not support unbinding a channel from a TSG explicitly through an IOCTL. A channel is unbound from its TSG when the channel fd is closed. But this could create an issue in case the process is forked and the fd is duplicated. In the worst case it is possible that the duplicate fd is closed some time later when the TSG is active, and that could lead to corruption of the TSG state.

  Fix this by implementing the NVGPU_TSG_IOCTL_UNBIND_CHANNEL API. This API simply calls the gk20a_tsg_unbind_channel() API to unbind the channel. Note that we also mark the channel as has_timedout, since an unbound channel does not have any context of its own and cannot serve any job.

  Remove the call to gk20a_disable_channel() while closing the channel. We do not support bare channels without a TSG, and if the channel is not part of a TSG then it means the channel was already unbound from the TSG with a call to NVGPU_TSG_IOCTL_UNBIND_CHANNEL, and there is nothing to do while closing the channel.

  Bug 200327095
  Jira NVGPU-229

  Change-Id: Ib7f6710f0dc8125caafabe7bee737622c3dd9fa3
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1708902
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Remove gk20a_dbg* functions (Terje Bergstrom, 2018-05-09)

  Switch all logging to nvgpu_log*(). gk20a_dbg* macros are intentionally left there because of use from other repositories.

  Because the new functions do not work without a pointer to struct gk20a, and piping it just for logging is excessive, some log messages are deleted.

  Change-Id: I00e22e75fe4596a330bb0282ab4774b3639ee31e
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1704148
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: created os-agnostic sim header (Antony Clince Alex, 2018-05-09)

  Added an os-agnostic sim.h header which can be included by any platform, and moved the os specific headers out to nvgpu/linux.

  JIRA VQRM-2368

  Change-Id: I3861bfa75a6b8d2d909bc7223467fd68c208275b
  Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1702816
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: os-agnostic segregation of sim/sim_pci (Antony Clince Alex, 2018-05-09)

  Segregated the os-agnostic functions from linux/sim.c and linux/sim_pci.c into sim.c and sim_pci.c, while retaining the os-specific functions. Renamed all gk20a_* APIs to nvgpu_*. Renamed hw_sim_gk20a.h to nvgpu/hw_sim.h and moved hw_sim_pci.h to nvgpu/hw_sim_pci.h.

  JIRA VQRM-2368

  Change-Id: I040a6b12b19111a0b99280245808ea2b0f344cdd
  Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1702425
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: refactored struct sim_gk20a (Antony Clince Alex, 2018-05-09)

  Moved the sim buffers (send, recv and msg) from the os-specific structure to the OS-agnostic structure.

  JIRA VQRM-2368

  Change-Id: I10ff23fe24d86f2bd372f1bae0369cc45aadfb80
  Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1702178
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Add LDIV slowdown factor in INIT cmd (Deepak Goyal, 2018-05-09)

  PMU ucode is updated to include the LDIV slowdown factor in the gr_init_param command.
  - Defined a new version gr_init_param_v2.
  - Updated the PMU FW version code.
  - Set the LDIV slowdown factor to 0x1e by default.
  - Added sysfs entry to program ldiv_slowdown factor at runtime.

  Bug 200391931

  Change-Id: Ic66049588c3b20e934faff3f29283f66c30303e4
  Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1674208
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv100: enable syncpt shim for nvlink (Thomas Fleury, 2018-05-07)

  Get the host1x node reference from the c1_rp device tree node, and enable the syncpoints shim in case of nvlink.

  JIRA EVLR-2441
  JIRA EVLR-2585

  Change-Id: Idbf1edf656557f2ed2d3bd27393c2f4d5d1ad75a
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1663360
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add HAL to handle nonstall interrupts (Deepak Nibade, 2018-05-07)

  Add new HAL gops.mc.isr_nonstall() to handle nonstall interrupts. We already handle nonstall interrupts in nvgpu_intr_nonstall(), but this API is completely in linux specific code.

  Separate out the os-independent code to handle nonstall interrupts in a new API mc_gk20a_isr_nonstall() and set it to HAL gops.mc.isr_nonstall() for all existing chips. Call this HAL from nvgpu_intr_nonstall().

  Jira NVGPUT-8

  Change-Id: Iec6a56db03158a72a256f7eee8989a0a8a42ae2f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1706589
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add worker for clk arb work handling (Sourab Gupta, 2018-05-07)

  Implement a worker thread to replace the workqueue based handling for clk arbiter update callbacks. The work items scheduled with the thread are of two types, update_vf_table and arb_update. Previously, there were two workqueues for handling these two work struct's separately. Now the single worker thread would process these two events.

  If a work item of a particular type is scheduled to be run on the worker, another instance of the same type won't be scheduled, which mirrors the linux workqueue behavior.

  This also removes the dependency on linux workqueues/work struct and makes the code portable to be used by QNX also.

  Jira VQRM-3741

  Change-Id: Ic27ce718c62c7d7c3f8820fbd1db386a159e28f2
  Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1706032
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

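A sketch of the "at most one pending instance per work type" behaviour described above (illustrative only; the real worker also owns a thread, a wait queue and locking, none of which is shown here):

    #include <stdbool.h>
    #include <stdio.h>

    enum work_type { WORK_VF_TABLE_UPDATE, WORK_ARB_UPDATE, WORK_TYPE_MAX };

    struct arb_worker {
            bool pending[WORK_TYPE_MAX]; /* one slot per work type */
    };

    /* Returns true if the item was queued, false if one of the same type is
     * already pending (mirrors workqueue semantics). */
    static bool worker_schedule(struct arb_worker *w, enum work_type type)
    {
            if (w->pending[type])
                    return false;
            w->pending[type] = true;
            /* The real code would also wake the worker thread here. */
            return true;
    }

    static void worker_process(struct arb_worker *w)
    {
            for (int t = 0; t < WORK_TYPE_MAX; t++) {
                    if (w->pending[t]) {
                            w->pending[t] = false;
                            printf("processing work type %d\n", t);
                    }
            }
    }

    int main(void)
    {
            struct arb_worker w = { .pending = { false } };

            worker_schedule(&w, WORK_ARB_UPDATE);
            worker_schedule(&w, WORK_ARB_UPDATE); /* coalesced with the first */
            worker_process(&w);                   /* runs once */
            return 0;
    }

Coalescing duplicate requests is what allows a single thread to replace the two workqueues without building up a backlog of redundant updates.
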
* gpu: nvgpu: Fixups for tmake build (Alex Waterman, 2018-05-07)

  Mostly just including necessary includes to make sure that global function declarations actually match their implementations.

  Also work around pointer munging warning:

    /build/ddpx/linux/kernel/nvgpu/drivers/gpu/nvgpu/common/pmu/pmu.c: In function 'nvgpu_pmu_process_init_msg':
    /build/ddpx/linux/kernel/nvgpu/drivers/gpu/nvgpu/common/pmu/pmu.c:348:4: error: dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
       (*(u32 *)gid_data.signature == PMU_SHA1_GID_SIGNATURE);

  Work around this warning by simply moving the type punning. This code is certainly dangerous - it assumes the endianness of the header data is the same as the machine this code is running on. Apparently it works, though, so this ignores the warning.

  JIRA NVGPU-525

  Change-Id: Id704bae7805440bebfad51c8c8365e6d2b7a39eb
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1692454
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

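For context on the warning quoted above: casting a byte array to u32 and dereferencing it is the type punning that -Werror=strict-aliasing rejects. The commit keeps the pun and just relocates it; a common aliasing-safe alternative (shown here only as an illustration, with an invented signature value, and still endian-dependent as the commit notes) is to copy the bytes instead of casting:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define PMU_SHA1_GID_SIGNATURE 0xA7C66AD2u /* illustrative value only */

    struct gid_data {
            uint8_t signature[4];
    };

    int main(void)
    {
            struct gid_data gid_data = { .signature = { 0xD2, 0x6A, 0xC6, 0xA7 } };
            uint32_t sig;

            /* The cast form from the error message punned the byte array as a u32:
             *     *(uint32_t *)gid_data.signature == PMU_SHA1_GID_SIGNATURE
             * memcpy expresses the same byte reinterpretation without violating
             * strict aliasing (the compiler will typically optimize it away). */
            memcpy(&sig, gid_data.signature, sizeof(sig));
            printf("signature match: %d\n", sig == PMU_SHA1_GID_SIGNATURE);
            return 0;
    }
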
* gpu: nvgpu: Update clk_vin interface as per chips_a (Vaikundanathan S, 2018-05-04)

  clk_vin data structures updated as a new calibration type (v20) is added. The GP106 header does not have a vin calibration type; assume V10 if the calibration type is not V20. Add fuse calibration for the V20 type.

  Bug 200399373

  Change-Id: I9449de1ecb0d0873f3bc16f46660f93fab5b9eac
  Signed-off-by: Vaikundanathan S <vaikuns@nvidia.com>
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1687591
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add HALs to mmu fault descriptors (Vinod G, 2018-05-04)

  The mmu fault information for client and gpc differs across chips. Add a separate table for each chip based on that, and add hal functions to access those descriptors.

  bug 2050564

  Change-Id: If15a4757762569d60d4ce1a6a47b8c9a93c11cb0
  Signed-off-by: Vinod G <vinodg@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1704105
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add rc_type i/p param to gk20a_fifo_recover (Seema Khowala, 2018-05-04)

  Add the below rc_types to be passed to gk20a_fifo_recover:
  - MMU_FAULT
  - PBDMA_FAULT
  - GR_FAULT
  - PREEMPT_TIMEOUT
  - CTXSW_TIMEOUT
  - RUNLIST_UPDATE_TIMEOUT
  - FORCE_RESET
  - SCHED_ERR

  This is nice to have to know what triggered recovery.

  Bug 2065990

  Change-Id: I202268c5f237be2180b438e8ba027fce684967b6
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1662619
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: rename mutex to runlist_lock (Seema Khowala, 2018-05-04)

  Rename mutex to runlist_lock in the fifo_runlist_info_gk20a struct. This is good to have for code readability.

  Bug 2065990
  Bug 2043838

  Change-Id: I716685e3fad538458181d2a9fe592410401862b9
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1662587
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: fix cond_wait return value in channel poll worker (Sourab Gupta, 2018-05-02)

  The return value from NVGPU_COND_WAIT_INTERRUPTIBLE in the channel poll worker is wrongly compared with 0, and the boolean result is assigned to the ret value used subsequently. Instead, the direct return value from NVGPU_COND_WAIT_INTERRUPTIBLE should be used.

  This bug seems to be a remnant of the following patch, which moved the handling from 'wait_event_timeout' to 'NVGPU_COND_WAIT':

    commit 301965fb77b3dc97445957712b82ce430eaa17e3
        gpu: nvgpu: Use nvgpu_cond in channel worker

  Change-Id: Id48e197756a6855b35a9ee0dc26d014b62ed3860
  Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1705976
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

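The class of bug fixed above is easy to reproduce in isolation. A minimal sketch (the wait primitive here is a hypothetical stand-in, not the NVGPU_COND_WAIT_INTERRUPTIBLE macro itself): storing the result of a comparison collapses every error code to 0 or 1, so the caller can no longer tell a timeout from a signal, whereas keeping the raw return value preserves it.

    #include <stdio.h>

    /* Stand-in for a wait primitive that returns 0 on wakeup or a negative
     * errno such as -ETIMEDOUT / -ERESTARTSYS (hypothetical helper). */
    static int fake_cond_wait(int simulated_result)
    {
            return simulated_result;
    }

    int main(void)
    {
            int ret;

            /* Buggy pattern: the boolean result of the comparison is stored,
             * so every error collapses to ret == 1 and the errno is lost. */
            ret = fake_cond_wait(-4 /* e.g. -EINTR */) != 0;
            printf("buggy ret = %d\n", ret);

            /* Fixed pattern: keep the return value itself. */
            ret = fake_cond_wait(-4 /* e.g. -EINTR */);
            printf("fixed ret = %d\n", ret);
            return 0;
    }
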
* gpu: nvgpu: add gr hal for fecs_ctxsw_mailbox size (Seema Khowala, 2018-05-01)

  fecs_ctxsw_mailbox_size varies per chip. Use a hal to get the size. Also dump fecs_ctxsw_status_1 to help debug.

  Bug 2093809

  Change-Id: I5a50281e9d78fe0e4a75d03971169e3e9679967a
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1698026
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add HAL to update doorbell (Deepak Nibade, 2018-04-27)

  Add new HAL gops.fifo.ring_channel_doorbell() to update the channel doorbell register and to trigger a runlist scan. Set the existing API gv11b_ring_channel_doorbell() to this HAL for all volta chips.

  Jira NVGPUT-18

  Change-Id: I9d5e84cf5aa7b763363d84befe169efda00a0932
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1702114
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: move parameter of .vm_bind_channel from as_share to vm (Richard Zhao, 2018-04-25)

  as_share is more os specific and not yet used on other OSes.

  Jira VQRM-2344

  Change-Id: Ie2ed007125400484352fbab602c37a198e8a64ae
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1699842
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: save max_comptag_lines in gr (Richard Zhao, 2018-04-25)

  max_comptag_lines will be used by the RM server to calculate how many lines each guest can get.

  Jira VQRM-2345

  Change-Id: If52208d79617f2f894e48d3a4daec186fda862f1
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1695082
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add HALs to submit and wait for runlist (Deepak Nibade, 2018-04-24)

  Add the below two new HALs:
  - gops.fifo.runlist_hw_submit() to submit a new runlist to hardware
  - gops.fifo.runlist_wait_pending() to wait until the runlist write is successful

  Set the existing API gk20a_fifo_runlist_wait_pending() to the gops.fifo.runlist_wait_pending HAL. Add a new API gk20a_fifo_runlist_hw_submit() which submits the runlist to h/w and set it to the gops.fifo.runlist_hw_submit HAL.

  Jira NVGPUT-20

  Change-Id: Ic23f7d947e30883aca0b536de818e79e14733195
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1700548
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: cache gpu clk rate (Arun Kannan, 2018-04-23)

  Cache the rate used in clk_set_rate(). Return that cached rate on clk_get_rate(); don't read from hardware. This cached rate is used to avoid duplicate requests to clk_set_rate().

  Motivation is to support multiple governors for the gpu clk. Reading the clock from hardware is unreliable in a multi-governor situation. Relying on the hardware clock value could mislead the kernel gpu governor in its scaling calculations.

  Bug 2051688

  Change-Id: I43fc056eea6f69fe0889c45640fcb892b658071c
  Signed-off-by: Arun Kannan <akannan@nvidia.com>
  (cherry picked from commit 7f819a9ba707e6e905168b00b0f3bf6348e86188)
  Reviewed-on: https://git-master.nvidia.com/r/1662759
  Reviewed-on: https://git-master.nvidia.com/r/1668919
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

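The cache-on-set / return-cached-on-get pattern from the commit above, as a minimal self-contained sketch (hypothetical names and a fake hardware hook; not the nvgpu clk code):

    #include <stdio.h>

    struct gpu_clk {
            unsigned long cached_rate_hz; /* last rate requested via set_rate */
    };

    /* Hypothetical hardware programming hook. */
    static void hw_program_rate(unsigned long rate_hz)
    {
            printf("programming PLL to %lu Hz\n", rate_hz);
    }

    static int clk_cached_set_rate(struct gpu_clk *clk, unsigned long rate_hz)
    {
            if (rate_hz == clk->cached_rate_hz)
                    return 0; /* duplicate request, skip the hardware access */
            hw_program_rate(rate_hz);
            clk->cached_rate_hz = rate_hz;
            return 0;
    }

    /* Return the cached value rather than reading it back from hardware. */
    static unsigned long clk_cached_get_rate(const struct gpu_clk *clk)
    {
            return clk->cached_rate_hz;
    }

    int main(void)
    {
            struct gpu_clk clk = { 0 };

            clk_cached_set_rate(&clk, 850000000UL);
            clk_cached_set_rate(&clk, 850000000UL); /* no second hardware write */
            printf("rate = %lu Hz\n", clk_cached_get_rate(&clk));
            return 0;
    }

Because every governor sees the last requested rate rather than whatever the hardware happens to report, scaling decisions stay consistent even when several governors coexist.
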
* gpu: nvgpu: sync_framework cleanups (Debarshi Dutta, 2018-04-23)

  This patch deals with cleanups meant to make things simpler for the upcoming os abstraction patches for the sync framework. This patch causes some substantial changes which are listed out as follows.

  1) sync_timeline is moved out of gk20a_fence into struct nvgpu_channel_linux. New function pointers are created to facilitate os independent methods for enabling/disabling the timeline and are now named as os_fence_framework. These function pointers are located in the struct os_channel under struct gk20a.

  2) Construction of the channel_sync requires nvgpu_finalize_poweron_linux() to be invoked before invocations to nvgpu_init_mm_ce_context(). Hence, these methods are now moved away from gk20a_finalize_poweron() and invoked after nvgpu_finalize_poweron_linux().

  3) sync_fence creation is now delinked from fence construction and moved to the channel_sync_gk20a's channel_incr methods. These sync_fences are mainly associated with post_fences.

  4) In case userspace requires the sync_fences to be constructed, we try to obtain an fd before gk20a_channel_submit_gpfifo() instead of trying to do that later. This is used to avoid potential after effects of duplicate work submission due to failure to obtain an unused fd.

  JIRA NVGPU-66

  Change-Id: I42a3e4e2e692a113b1b36d2b48ab107ae4444dfa
  Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1678400
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add HAL to set ppriv timeouts (Deepak Nibade, 2018-04-22)

  Add new HAL gops.bus.set_ppriv_timeout_settings() to set platform specific ppriv timeouts. Set this HAL for all supported GPUs for now.

  Jira NVGPUT-35

  Change-Id: I88b438a7bf381d0216e0947a16cd267461d0e8d7
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1699314
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: support upcoming GPU (Deepak Nibade, 2018-04-22)

  In gpu_init_hal(), call NVGPU_NEXT_INIT_HAL() to initialize the HAL of the upcoming GPU. All upcoming GPU related support is compiled only if CONFIG_TEGRA_GPU_NEXT is set.

  Jira NVGPUT-42

  Change-Id: I1563acd60f20fda50f4557a068398c1d5d224f3e
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1699312
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: post dbg session event from os specific code (Sourab Gupta, 2018-04-19)

  As part of debug session unification the following changes are required.

  - Include the bug.h header file to fix the compilation issue on QNX.
  - The mechanism of posting debug events is OS specific. In Linux this works through poll fd, wherein we can make use of nvgpu_cond variables to poll and trigger the corresponding wait_queue via the nvgpu_cond_broadcast_interruptible() call. The post event functionality on QNX doesn't work on poll though. It uses iofunc_notify_trigger to post the debug events to the calling process. As such QNX can't work with nvgpu_cond's.

  To overcome this issue, it is proposed to create an OS-specific interface for posting debugger events. Linux can call nvgpu_cond_broadcast_interruptible() in its implementation, which makes sense since these are already initialized and poll'ed in the Linux specific code only. QNX can implement this interface to call iofunc_notify_* functions, as per its need.

  Jira VQRM-2363

  Change-Id: I0abdc0787f771040b8aff5384290d7e6549f81fb
  Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
  Signed-off-by: Prateek Sethi <prsethi@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1696368
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* Revert "gpu: nvgpu: add hal op for gr set error notifier"Richard Zhao2018-04-19
| | | | | | | | | | | | | | | This reverts commit d6c6c6c483478654b34685b9e13ed160bad49a1c. RM server has moved to gops.fifo.set_error_notifier. gops.gr.set_error_notifier is not needed anymore. Jira VQRM-3058 Change-Id: I0fe7f914778ce66701a699aece2b36a5cd8079da Signed-off-by: Richard Zhao <rizhao@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1679708 Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: pass pid/tid from os specific code to common code (Richard Zhao, 2018-04-16)

  The linux driver runs in the user's process, but the qnx driver has a dedicated driver process, so they have different ways to get the user pid. nvgpu common code expects calls from os specific code to pass the pid/tid. ce/cde open channels for internal use, so for those we use the driver pid.

  Jira VQRM-3534

  Change-Id: I892372ac5f1dc4d25f9928d16992bcc659d12a56
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1694145
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv100: consider floorswept FBPA for getting unicast list (Deepak Nibade, 2018-04-16)

  In gr_gv11b/gk20a_create_priv_addr_table() we do not consider floorswept FBPAs and just calculate the unicast list assuming all FBPAs are present. This generates an incorrect list of unicast addresses.

  Fix this by introducing the new HAL ops.gr.split_fbpa_broadcast_addr. Set gr_gv100_get_active_fpba_mask() for GV100 and gr_gk20a_split_fbpa_broadcast_addr() for the rest of the chips. gr_gv100_get_active_fpba_mask() will first get the active FPBA mask and generate the unicast list only for active FBPAs.

  Bug 200398811
  Jira NVGPU-556

  Change-Id: Idd11d6e7ad7b6836525fe41509aeccf52038321f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1694444
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

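The expansion described above amounts to: walk the per-unit stride, but skip units whose bit is cleared in the active (non-floorswept) mask. A self-contained sketch with invented base/stride values (the real offsets, masks and table format live in the nvgpu headers):

    #include <stdint.h>
    #include <stdio.h>

    #define FBPA_BASE   0x900000u /* illustrative offsets, not real nvgpu values */
    #define FBPA_STRIDE 0x1000u

    /* Expand one broadcast register offset into unicast offsets, skipping
     * floorswept units that are cleared in the active mask. Returns the
     * number of entries written to table[]. */
    static int expand_broadcast(uint32_t reg_offset, uint32_t active_mask,
                                int num_fbpas, uint32_t *table)
    {
            int count = 0;

            for (int i = 0; i < num_fbpas; i++) {
                    if (!(active_mask & (1u << i)))
                            continue; /* floorswept: no unicast entry */
                    table[count++] = FBPA_BASE + (uint32_t)i * FBPA_STRIDE + reg_offset;
            }
            return count;
    }

    int main(void)
    {
            uint32_t table[8];
            /* e.g. 4 FBPAs with unit 2 floorswept: mask 0b1011 */
            int n = expand_broadcast(0x44, 0xBu, 4, table);

            for (int i = 0; i < n; i++)
                    printf("unicast[%d] = 0x%x\n", i, table[i]);
            return 0;
    }

Expanding over all units regardless of the mask is exactly the bug the commit fixes: it produces unicast addresses for hardware that is not there.
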
* gpu: nvgpu: Update clk_fll interface as per chips_a (Tejal Kudav, 2018-04-12)

  Two new members added to the fll struct and code modified to support GV100 VBIOS NAFLL tables. Add g->ops for getting vbios clk domains.

  JIRA NVGPUGV100-39

  Change-Id: Iaabea893d55d44a272e2bce2b1d525b122cd36f5
  Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1594289
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Tested-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Add conversion function for uapi submit gpfifo flags (Sourab Gupta, 2018-04-11)

  The submit gpfifo flags are splattered everywhere inside the nvgpu code. Though the usage is inside nvgpu Linux code only, still it needs to be gotten rid of and replaced with the defines present in common code.

  VQRM-3465

  Change-Id: I901b33565b01fa3e1f9ba6698a323c16547a8d3e
  Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1691979
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: remove usage of nvgpu_gpfifo (Sourab Gupta, 2018-04-11)

  Remove the usage of nvgpu_gpfifo splattered across nvgpu, and replace it with a struct defined in common code. The usage is still inside Linux, but this helps the subsequent unification efforts, e.g. to unify the submit path.

  VQRM-3465

  Change-Id: I9e5ac697a0c7f85239ddba319085c09481d20d6b
  Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1691978
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: remove usage of nvgpu_fence (Sourab Gupta, 2018-04-11)

  Remove the usage of nvgpu_fence splattered across nvgpu, and replace it with a struct defined in common code. The usage is still inside Linux, but this helps the subsequent unification efforts, e.g. to unify the submit path.

  VQRM-3465

  Change-Id: Ic3737450123dfc5e1c40ca5b6b8d8f6b3070aa0d
  Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1691977
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: fix gpc/tpc index for SMPC broadcast conversion (Deepak Nibade, 2018-04-10)

  In gv11b_gr_egpc_etpc_priv_addr_table(), we call gv11b_gr_update_priv_addr_table_smpc() to convert an SMPC broadcast address into a list of unicast addresses. But before calling gv11b_gr_update_priv_addr_table_smpc() we sometimes incorrectly set gpc_num/tpc_num to zero, and that leads to generating an incorrect list of unicast addresses.

  Remove this incorrect initialization of gpc_num/tpc_num. Also update gv11b_gr_egpc_etpc_priv_addr_table() to receive tpc_num along with gpc_num.

  Bug 2099717
  Jira NVGPU-580

  Change-Id: Idd4e5f78dbe6ca1800efae93c66355d06417d1f2
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1691373
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: use HAL for chiplet offset (Deepak Nibade, 2018-04-10)

  We currently use hard coded values of NV_PERF_PMMGPC_CHIPLET_OFFSET and NV_PMM_FBP_STRIDE which are incorrect for Volta.

  Add new GR HAL get_pmm_per_chiplet_offset() to get the correct value per-chip. Set gr_gm20b_get_pmm_per_chiplet_offset() for older chips and gr_gv11b_get_pmm_per_chiplet_offset() for Volta. Use the HAL instead of hard coded values wherever required.

  Bug 200398811
  Jira NVGPU-556

  Change-Id: I947e7febd4f84fae740a1bc74f99d72e1df523aa
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1690028
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add support to get unicast addresses on volta (Deepak Nibade, 2018-04-10)

  We have new broadcast registers on Volta, and we need to generate correct unicast addresses for them so that we can write those registers to the context image.

  Add new GR HAL create_priv_addr_table() to do this conversion. Set gr_gk20a_create_priv_addr_table() for older chips and gr_gv11b_create_priv_addr_table() for Volta. gr_gv11b_create_priv_addr_table() will use the broadcast flags and then generate the appropriate list of unicast registers for each broadcast register.

  Bug 200398811
  Jira NVGPU-556

  Change-Id: Id53a9e56106d200fe560ffc93394cc0e976f455f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1690027
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add broadcast address decode support for volta (Deepak Nibade, 2018-04-10)

  With Volta we have more broadcast registers than on previous chips, and we don't decode them right now in gr_gk20a_decode_priv_addr().

  Add a new GR HAL decode_priv_addr() and set gr_gk20a_decode_priv_addr() for all previous chips. Add and use gr_gv11b_decode_priv_addr() for Volta. gr_gv11b_decode_priv_addr() will decode all the broadcast registers and set the broadcast flags appropriately.

  Define the below new broadcast types:
  - PRI_BROADCAST_FLAGS_PMMGPC
  - PRI_BROADCAST_FLAGS_PMM_GPCS
  - PRI_BROADCAST_FLAGS_PMM_GPCGS_GPCTPCA
  - PRI_BROADCAST_FLAGS_PMM_GPCGS_GPCTPCB
  - PRI_BROADCAST_FLAGS_PMMFBP
  - PRI_BROADCAST_FLAGS_PMM_FBPS
  - PRI_BROADCAST_FLAGS_PMM_FBPGS_LTC
  - PRI_BROADCAST_FLAGS_PMM_FBPGS_ROP

  Bug 200398811
  Jira NVGPU-556

  Change-Id: Ic673b357a75b6af3d24a4c16bb5b6bc15974d5b7
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1690026
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: vgpu: fix build errors on qnx (Richard Zhao, 2018-04-10)

  - Declare global functions before reaching the implementation.
  - Avoid using current (current process).
  - Assign ch->pid/tgid before using them.

  Jira VFND-4870

  Change-Id: I688a1b89ef4d5dcf046929eab11d7e523caba0a5
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1687142
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Nirav Patel <nipatel@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* nvgpu: remove gk20a_init_bus function (Shashank Singh, 2018-04-07)

  gk20a_init_bus is not called from nvgpu, so it is better to remove it so that qnx can build bus_gk20a.c. QNX otherwise requires declarations for non-static functions.

  Change-Id: I2a6dff951ae0b4ea1193ca05435b5587f8172b1e
  Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1689261
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gk20a: Use ENOSPC instead of ENOMEM (mpurohit, 2018-04-06)

  Modify gr_gk20a_add_zbc() to return -ENOSPC instead of -ENOMEM when all slots are already used. ENOMEM is meant more for memory allocation failures, anyway.

  This is required for porting the nvgpu_gpu_zbc test case to use libnvrm_gpu APIs.

  Bug 1967537
  JIRA VQRM-3348

  Change-Id: Ib7bcb6ba94d2ca168bcad517a8a7260fbf278a91
  Signed-off-by: Mahesh Purohit <mpurohit@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1688302
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
  Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>