path: root/drivers/gpu/nvgpu/include
Commit message | Author | Age
* gpu: nvgpu: allocate separate client managed syncpoint for User | Deepak Nibade | 2018-03-01

We currently allocate an nvgpu-managed syncpoint in c->sync and share it with user space. To avoid conflicts between user space and kernel space increments, allocate a separate "client managed" syncpoint for user space in c->user_sync.

Add a new API nvgpu_nvhost_get_syncpt_client_managed() to request a client managed syncpoint from nvhost. Note that nvhost/nvgpu do not keep track of the MAX/threshold value of this syncpoint.

Update gk20a_channel_syncpt_create() to receive a flag indicating whether a user space syncpoint is required or not.

Unset NVGPU_SUPPORT_USER_SYNCPOINT for gp10b since we don't want to allocate two syncpoints per channel on that platform. For gv11b, once we move to user space submits, support for c->sync will be dropped, so we keep using only one syncpoint per channel.

Bug 200326065
Jira NVGPU-179

Change-Id: I78d94de4276db1c897ea2a4fe4c2db8b2a179722
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665828
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: introduce explicit nvgpu_sgl type | Konsta Holtta | 2018-03-01

The operations in struct nvgpu_sgt_ops have a scatter-gather list (sgl) argument which is a void pointer. Change the type signatures to take struct nvgpu_sgl *, an opaque marker type that makes it more difficult to pass around wrong arguments, as anything goes for void *. Explicit types also add self-documentation to the code.

For some added safety, some explicit type casts are now required in implementors of the nvgpu_sgt_ops interface when converting between the general nvgpu_sgl type and implementation-specific types. This is not purely a bad thing because the casts explain clearly where type conversions are happening.

Jira NVGPU-30
Jira NVGPU-52
Jira NVGPU-305

Change-Id: Ic64eed6d2d39ca5786e62b172ddb7133af16817a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1643555
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
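A rough illustration of the opaque-marker-type pattern described above; the ops subset and the Linux-side implementor shown here are a sketch, not the full nvgpu_sgt_ops interface:

    /* Opaque marker type: declared but never defined, so only pointers to it
     * can be passed around, and passing an unrelated pointer type is a
     * compile error (unlike void *). */
    struct nvgpu_sgl;

    /* Illustrative subset of the ops table after the change. */
    struct nvgpu_sgt_ops {
            struct nvgpu_sgl *(*sgl_next)(struct nvgpu_sgl *sgl);
            u64 (*sgl_phys)(struct nvgpu_sgl *sgl);
            u64 (*sgl_length)(struct nvgpu_sgl *sgl);
    };

    /* A Linux implementor now converts explicitly at the boundary, which
     * documents exactly where the type conversion happens. */
    static struct nvgpu_sgl *nvgpu_mem_linux_sgl_next(struct nvgpu_sgl *sgl)
    {
            struct scatterlist *sg = (struct scatterlist *)sgl;

            return (struct nvgpu_sgl *)sg_next(sg);
    }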
* Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working"Alex Waterman2018-02-28
| | | | | | | | | | | | | | Also revert other changes related to IO coherence. This may be the culprit in a recent dev-kernel lockdown. Bug 2070609 Change-Id: Ida178aef161fadbc6db9512521ea51c702c1564b Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1665914 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Srikar Srimath Tirumala <srikars@nvidia.com>
* gpu: nvgpu: Use coherent aperture flag | Alex Waterman | 2018-02-27

When using a coherent DMA API we must make sure to program any aperture fields with the coherent aperture setting. To do this the nvgpu_aperture_mask() function was modified to take a third aperture mask argument, a coherent setting, so that code can use this function to generate coherent aperture settings.

The aperture choice is somewhat tricky: the default version of this function uses the state of the DMA API to determine what aperture to use for SYSMEM, either coherent or non-coherent, internally. Thus a kernel user need only specify the normal nvgpu_mem struct and the correct mask is chosen. Due to many uses of nvgpu_mem structs not created directly from the DMA API wrapper, it's easier to translate SYSMEM to SYSMEM_COH after creation.

However, the GMMU mapping code will encounter buffers from userspace with different coherency attributes than the DMA API. Thus __nvgpu_aperture_mask() really respects the aperture setting passed in regardless of the DMA API state. This aperture setting is pulled from NVGPU_VM_MAP_IO_COHERENT since this is either passed in from userspace or set by the kernel when using coherent DMA. The aperture field in attrs is upgraded to coh if this flag is set.

This change also adds a coherent sysmem mask everywhere it can. There are a couple of places that do not have a coherent register field defined yet. These need to eventually be defined and added.

Lastly, the aperture mask code has been moved from the Linux vm.c code to the general vm.c code since this function has no Linux dependencies.

Note: depends on https://git-master.nvidia.com/r/1664536 for new register fields.

JIRA EVLR-2333

Change-Id: I4b347911ecb7c511738563fe6c34d0e6aa380d71
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1655220
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
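A sketch of the three-way selection that the extra coherent mask argument enables; the function shape and enum values are illustrative, and the real register field values come from the generated hw headers:

    static u32 aperture_mask_sketch(enum nvgpu_aperture ap,
                                    u32 sysmem_mask, u32 sysmem_coh_mask,
                                    u32 vidmem_mask)
    {
            switch (ap) {
            case APERTURE_SYSMEM_COH:       /* coherent sysmem mapping */
                    return sysmem_coh_mask;
            case APERTURE_SYSMEM:           /* non-coherent sysmem mapping */
                    return sysmem_mask;
            case APERTURE_VIDMEM:
                    return vidmem_mask;
            default:
                    return 0;
            }
    }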
* gpu: nvgpu: Get coherency on gv100 + NVLINK working | Alex Waterman | 2018-02-27

This patch does a couple of things. First it renames NVGPU_DMA_COHERENT to NVGPU_USE_COHERENT_SYSMEM since the former is somewhat ambiguous in meaning. The latter clearly states what must happen: nvgpu needs to treat sysmem as coherent. This flag does simply follow the state of the DMA API, but there's no reason to expect a casual reader of the code to know that when the DMA API is coherent nvgpu must treat sysmem as coherent.

One thing to note though: when the dGPU is using PCIe and the PCIe controller is coherent, it doesn't actually matter what we do. However, we use this flag for determining how to make CPU mappings in nvgpu_mem_begin(), so this flag is still relevant for the CPU side of things.

Next, this patch adds a check in the core kernel GMMU mapping routine to make sure that when the NVGPU_USE_COHERENT_SYSMEM flag is set, the IO coherent flag is passed into the mapping code. This is the primary fix that made NVLINK start working.

Finally, the USE_COHERENT_SYSMEM flag and the NVGPU_SUPPORT_IO_COHERENCE flag are set both for PCIe and for iGPUs. The iGPU also must correctly match its CPU mappings and GPU mappings for proper operation.

JIRA EVLR-2333

Change-Id: Icd5f07167c9f48a0a2e8493e34c9cc6238e56907
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1654519
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add coherent aperture target field | Alex Waterman | 2018-02-27

Add the coherent aperture target field for FECS_ARB_CTX_PTR_TARGET and FECS_NEW_CTX_TARGET. This is required for completeness with regard to IO coherence.

JIRA EVLR-2333

Change-Id: I70576481e6af24bf2494e9cb5480e10062958235
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1664538
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: move common files out of linux folder | Richard Zhao | 2018-02-27

Most of the files have been moved out of the linux folder. More code can be made common as HALification goes on.

Jira EVLR-2364

Change-Id: Ia9dbdbc82f45ceefe5c788eac7517000cd455d5e
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649947
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: avoid referring uapi header when set powergate mode | Richard Zhao | 2018-02-27

Defined powergate mode in tegra_vgpu.h.

Jira EVLR-2364

Change-Id: Id7dcaeffcf0dd8394b4eac6601152720fa382e8c
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649946
Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nirav Patel <nipatel@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: split vgpu.c into vgpu.c and vgpu_linux.c | Richard Zhao | 2018-02-27

vgpu.c will keep common code while vgpu_linux.c is Linux specific.

Jira EVLR-2364

Change-Id: Ice9782fa96c256f1b70320886d3720ab0db26244
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649943
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: remove vgpu_locked_gmmu_map() | Richard Zhao | 2018-02-27

The function is not used anymore.

Change-Id: Iad99811e2d356362d16b961464729f5169c36f28
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649937
Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nirav Patel <nipatel@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: move tegra_vgpu.h to include/nvgpu/vgpu/ | Richard Zhao | 2018-02-27

tegra_vgpu.h is OS agnostic, so move it out of the linux folder.

Jira EVLR-2364

Change-Id: Ibbe8923f7af036b3b6730f682f5243ca73810f7b
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649936
Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nirav Patel <nipatel@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: add ivm wrappers | Richard Zhao | 2018-02-27

Added vgpu_ivm_*() functions to be used by OS-agnostic code.

Jira EVLR-2364

Change-Id: I4a2baebcff9723950c4fba99d0879a0c61e3e3a2
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649935
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add user API to get a syncpoint | Deepak Nibade | 2018-02-26

Add new user API NVGPU_IOCTL_CHANNEL_GET_USER_SYNCPOINT which will expose the per-channel allocated syncpoint to user space. The API will also return the current value of the syncpoint.

On supported platforms, this API will also return a RW semaphore address (corresponding to the syncpoint shim) to user space.

Add new characteristics flag NVGPU_GPU_FLAGS_SUPPORT_USER_SYNCPOINT to indicate support for this new API, and a new flag NVGPU_SUPPORT_USER_SYNCPOINT for use by the core driver. Set this flag for GV11B and GP10B for now.

Add a new API (*syncpt_address) in struct gk20a_channel_sync to get the GPU_VA address of a syncpoint.

Add new API nvgpu_nvhost_syncpt_read_maxval() which will read and return the MAX value of a syncpoint.

Bug 200326065
Jira NVGPU-179

Change-Id: I9da6f17b85996f4fc6731c0bf94fca6f3181c3e0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1658009
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
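A user-space sketch of how an ioctl like this is typically called; the argument struct layout and field names below are assumptions for illustration, not copied from the nvgpu uapi header:

    #include <stdio.h>
    #include <sys/ioctl.h>

    /* Hypothetical argument layout based only on the description above. */
    struct user_syncpoint_args {
            unsigned long long gpu_va;     /* RW semaphore address, when supported */
            unsigned int syncpoint_id;
            unsigned int syncpoint_value;  /* current value of the syncpoint */
    };

    static int query_user_syncpoint(int channel_fd)
    {
            struct user_syncpoint_args args = {0};

            /* Ioctl number as named in the message above; the real definition
             * and struct come from the nvgpu uapi header. */
            if (ioctl(channel_fd, NVGPU_IOCTL_CHANNEL_GET_USER_SYNCPOINT, &args) < 0)
                    return -1;

            printf("syncpt %u value %u va 0x%llx\n",
                   args.syncpoint_id, args.syncpoint_value, args.gpu_va);
            return 0;
    }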
* gpu: nvgpu: gv100: nvlink endpoint driver | Thomas Fleury | 2018-02-26

The following changes implement the initial (as per bringup) nvlink driver:

(1) SW initialization of nvlink core driver structures
(2) Nvlink interrupt handling
(3) Device initialization (IOCTRL, pll and clocks, device level intr)
(4) Falcon support for minion
(5) Minion load and bootstrapping
(6) Link initialization and DL PROD settings
(7) Device Interface init (and switching HSHUB to nvlink)
(8) HS set/get mode for both link and sublink
(9) Topology discovery and VBIOS settings
(10) Ensures we get physically contiguous memory when Nvlink is enabled

This driver includes a hack for the current single dev/single link limitation.

JIRA: EVLR-2331
JIRA: EVLR-2330
JIRA: EVLR-2329
JIRA: EVLR-2328

Change-Id: Idca9a819179376cc655784482b24b575a52fa9e5
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1656790
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv100: update registers | Thomas Fleury | 2018-02-26

Update GV100 registers for nvlink.

JIRA EVLR-2328

Change-Id: I0fad01560022d979fbdcd94fd066e507691969ae
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1656052
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Abstract kernel_restart() | Alex Waterman | 2018-02-24

This function is used in gk20a.c to handle catastrophic error conditions but is Linux specific. As such, implement an abstraction for this in driver_common.c and expose the API in nvgpu_common.h.

JIRA NVGPU-525

Signed-off-by: Alex Waterman <alexw@nvidia.com>
Change-Id: Ie2e417d30af5ff7db76f4d2d5b97ec96c386bd04
Reviewed-on: https://git-master.nvidia.com/r/1662543
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add <nvgpu/types.h> to comptags.c | Alex Waterman | 2018-02-16

Problem exposed by user-space nvgpu: <nvgpu/comptags.h> needs to include <nvgpu/types.h> since it uses u32, etc.

JIRA NVGPU-525

Signed-off-by: Alex Waterman <alexw@nvidia.com>
Change-Id: I8718964502b2e4c7540bce2ec82bdcac2aff5091
Reviewed-on: https://git-master.nvidia.com/r/1658299
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: delete nvgpu_semaphore_int list | Konsta Holtta | 2018-02-14

The hw semas in a sema pool are stored in a list. All elements in this list are freed in a loop when a semaphore pool is destroyed. However, each hw sema is always owned by a channel, and each such channel frees its hw sema during channel closure before putting a ref to the VM which holds a ref to the sema pool, so the lifetime of all the hw semas is shorter than that of the pool and this list is always empty when freeing the pool. Delete the list and this freeing loop.

Meanwhile, also delete the nr_incrs member in nvgpu_semaphore_int that is never accessed.

Jira NVGPU-512

Change-Id: Ie072029f9e7cc749141e9f02ef45fdf64358ad96
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1653540
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Use #define for log masks | Terje Bergstrom | 2018-02-09

Log masks are a bitmask and are passed as u32 through the API calls. They were still defined as enums, which causes unnecessary implicit conversions. Convert the log masks to be defined as u32.

JIRA NVGPU-52

Change-Id: I4b20f0ad2a9f18056502940ea677b3ea8526d830
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649816
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
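The conversion being described, sketched with two representative mask names; the bit positions and the full list are illustrative, not the actual nvgpu definitions:

    /* Before: enum values, implicitly converted at every u32 call site. */
    enum {
            gpu_dbg_info_e = 1 << 0,
            gpu_dbg_fn_e   = 1 << 1,
    };

    /* After: plain u32 masks that match the API signatures directly. */
    #define gpu_dbg_info   (1U << 0)
    #define gpu_dbg_fn     (1U << 1)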
* gpu: nvgpu: add vpr flag in gpu characteristics | Aparna Das | 2018-02-08

VPR is currently not supported in virtualized configuration. Allow reporting VPR capability in gpu characteristics.

Jira EVLR-2236

Change-Id: Id61a0045577e4add0d9cdfddcefcedd5b20eb1dd
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1639798
(cherry picked from commit 4136b74fd4435966ee2e69ec88fb66424382a7c0)
Reviewed-on: https://git-master.nvidia.com/r/1640712
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add user API to get read-only syncpoint address map | Deepak Nibade | 2018-02-07

Add user space API NVGPU_AS_IOCTL_GET_SYNC_RO_MAP to get the read-only syncpoint address map in user space.

We already map the whole syncpoint shim to each address space, with the base address being vm->syncpt_ro_map_gpu_va. This new API exposes this base GPU_VA address of the syncpoint map, and the unit size of each syncpoint, to user space. User space can then calculate the address of each syncpoint as

    syncpoint_address = base_gpu_va + (syncpoint_id * syncpoint_unit_size)

Note that this syncpoint address is read-only and should only be used for inserting semaphore acquires. Adding a semaphore release with this address would result in an MMU_FAULT.

Define new HAL g->ops.fifo.get_sync_ro_map and set this for all GPUs supported on the Xavier SoC.

Bug 200327559

Change-Id: Ica0db48fc28fdd0ff2a5eb09574dac843dc5e4fd
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649365
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
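A small helper expressing the address computation spelled out above; the function name is illustrative:

    static inline u64 syncpt_ro_gpu_va(u64 base_gpu_va, u32 syncpoint_id,
                                       u32 syncpoint_unit_size)
    {
            /* Read-only GPU VA of one syncpoint inside the mapped shim. */
            return base_gpu_va + ((u64)syncpoint_id * syncpoint_unit_size);
    }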
* gpu: nvgpu: add characteristic flag for syncpoint address support | Deepak Nibade | 2018-02-07

Add characteristics flag NVGPU_GPU_FLAGS_SUPPORT_SYNCPOINT_ADDRESS to indicate whether the platform supports a semaphore GPU_VA address for a syncpoint.

Define NVGPU_SUPPORT_SYNCPOINT_ADDRESS for core driver book keeping.

Set this flag for both GV100 and GV11B since the Xavier SoC supports a semaphore GPU_VA address for a syncpoint through the syncpoint SHIM.

Bug 200327559

Change-Id: I1f31673c9fd59f493d0b35a80d23151fc063ae06
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649364
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: disable SWDX spill buffer invalidates | Sami Kiminki | 2018-02-06

Disable SWDX spill buffer invalidates, as is required by HW. Since this register is context-switched, add these in the GR init sequence.

Bug 2040262

Change-Id: I0be10d12516bce6ce6f8fb0e8af5b67f8af92257
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1650563
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: disable SCC pagepool invalidates | Sami Kiminki | 2018-02-06

Disable SCC pagepool invalidates, as is required by HW. Since this register is context-switched, add these in the GR init sequence.

Bug 2040262

Change-Id: I8dd1b7c7c4b0544878ca57b1261f9c85fa380d47
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649719
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: add scg support info in gpu characteristics | seshendra Gadagottu | 2018-02-02

Indicated support for Simultaneous Compute and Graphics (SCG) in gpu characteristics for gv11b.

Bug 2053932

Change-Id: I788e22242083dff775dd4cc5b9aa73c938028536
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649805
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Cleanup usage of bypass_smmu | Alex Waterman | 2018-02-02

The GPU has multiple different operating modes with respect to IOMMU'ability. As such there needs to be a clean way to tell the driver whether it is IOMMU'able or not. This state also does not always reflect what is possible: just because the GPU can generate IOMMU'ed memory requests doesn't mean it wants to.

The nvgpu_iommuable() API has now existed for a little while and is a useful way to convey whether nvgpu should consider the GPU as IOMMU'able. However, there is also the g->mm.bypass_smmu flag, which used to be able to override what the GPU decided it should do. Typically it was assigned the same value as nvgpu_iommuable() but that was not necessarily a requirement.

This patch removes all usages of g->mm.bypass_smmu and instead uses the nvgpu_iommuable() function. All checks against g->mm.bypass_smmu have been replaced with nvgpu_iommuable(). The code should now be much cleaner. Subsequently, other checks can also be placed in the nvgpu_iommuable() function. For example, when NVLINK comes online and the GPU should no longer consider DMA addresses and instead use scatter-gather lists directly, the nvgpu_iommuable() function will be able to check the state of NVLINK and then act accordingly.

Change-Id: I0da6262386de15709decac89d63d3eecfec20cd7
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1648332
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
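An illustrative before/after of the check being replaced; the surrounding address helper is a stand-in, not a specific nvgpu function:

    /* Before: a per-mm flag decided whether the SMMU is bypassed. */
    if (g->mm.bypass_smmu)
            addr = get_physical_address(mem);

    /* After: ask the single query function; bypassing the SMMU is simply
     * the "not IOMMU'able" case. */
    if (!nvgpu_iommuable(g))
            addr = get_physical_address(mem);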
* gpu: nvgpu: Add tracking of dma_buf_attachment | Terje Bergstrom | 2018-02-01

VM and CDE code assumes that dma_buf_attachment is stored as a pointer in the private dma_buf_drvdata, so it is not tracked. In Linux trees without dma_buf_*_drvdata() support this is not true, so change the code to explicitly track dma_buf_attachment.

JIRA NVGPU-4

Change-Id: I692f05a19a6469195d5444a7e5ff6e92f77ae272
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1648004
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: enable more gr exceptions | Seema Khowala | 2018-01-31

- pd, scc, ds, ssync, mme and sked exceptions are enabled. This will be useful for debugging.
- Handle enabled interrupts.
- Add gr ops to handle ssync hww. For legacy chips, the ssync hww_esr register is gpcs_ppcs_ssync_hww_esr. Since ssync hww is not enabled on legacy chips, ssync hww exception handling is added for Volta only.

Change-Id: I63ba2eb51fa82e74832df26ee4cf3546458e5669
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1644751
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add fecs_host_int_enable hal | Seema Khowala | 2018-01-31

This will be used to enable fecs interrupts per chip.

Change-Id: Id99412ca1a9c4caad999c3458b0e9701515db4b9
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1642554
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: disable cbm alpha/beta cache invalidates | seshendra Gadagottu | 2018-01-31

Disabled CBM alpha and beta cache invalidates as required by hw. Since these registers are context switched out, added these invalidates as part of the gr init sequence, so the golden context restores these settings for all contexts.

Bug 2040262

Change-Id: Iffdd03f2ac6440ddd615899c407cfee692460918
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1648948
Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
Tested-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Chris Dragan <kdragan@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: add vgpu_ivc_* wrappers | Richard Zhao | 2018-01-31

tegra_gr_comm_* are wrapped as vgpu_ivc_*, which helps make vgpu code more common.

Jira EVLR-2364

Change-Id: Id49462ed6c176c73ceee8c6bc41104447748e187
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1645656
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Initial Nvlink driver skeleton | David Nieto | 2018-01-25

Adds the skeleton and integration of the GV100 endpoint driver to NVGPU:

(1) Adds an OS abstraction layer for the internal nvlink structure.
(2) Adds Linux specific integration with the Nvlink core driver.
(3) Adds function pointers for the nvlink api, initialization and isr processing.
(4) Adds initial support for minion.
(5) Adds new GPU enable properties to handle NVLINK presence.
(6) Adds new GPU enable properties for SG_PHY bypass (required for NVLINK over PCI).
(7) Adds parsing of nvlink vbios structures.
(8) Adds logging defines for NVGPU.

JIRA: EVLR-2328

Change-Id: I0720a165a15c7187892c8c1a0662ec598354ac06
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1644708
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* nvgpu: gpu: Add speculation barrier macro | Alex Waterman | 2018-01-25

Provide a macro for preventing CPU speculation.

bug 2039126
CVE-2017-5753

Change-Id: Ifa936c079d9f2a0231d0cf35c4d8bdd18d54b238
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1640497
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
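A typical use of such a barrier against Spectre-v1 style gadgets (sketch); the macro name follows the nvgpu_ prefix convention but is an assumption, since this log entry does not show it:

    /* Untrusted index from user space: bounds-check it, then stop the CPU
     * from speculating past the check before using it as an array index. */
    if (index >= NUM_ENTRIES)
            return -EINVAL;
    nvgpu_speculation_barrier();    /* assumed macro name */
    value = table[index];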
* gpu: nvgpu: Cleanup '\n' usage in allocator debugging | Alex Waterman | 2018-01-25

These '\n' were leftover from the previous debugging macro usage which did not add the '\n' automagically. However, once swapped over to the nvgpu logging system the '\n' is added and no longer needs to be present in the code.

This did require one extra modification to keep things consistent. The __alloc_pstat() macro, used for sending output either to a seq_file or the terminal, needed to add the '\n' for seq_printf() calls, and the '\n' had to be deleted in the C files.

Change-Id: I4d56317fe2a87bd00033cfe79d06ffc048d91049
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1613641
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
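A sketch of the dual-output macro referred to above: the seq_file branch appends the '\n' itself, while the other branch relies on the nvgpu logging system adding it. The name of the log-side helper is an assumption:

    #define __alloc_pstat(seq, allocator, fmt, arg...)                \
            do {                                                      \
                    if (seq)                                          \
                            seq_printf(seq, fmt "\n", ##arg);         \
                    else                                              \
                            alloc_dbg(allocator, fmt, ##arg);         \
            } while (0)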
* gpu: nvgpu: gv100: BOOTSTRAP_GR_FALCONS using RPC | Mahantesh Kumbar | 2018-01-25

- Created nv_pmu_rpc_struct_acr_bootstrap_gr_falcons struct.
- gv100_load_falcon_ucode() function to bootstrap GR falcons using RPC; wait for INIT_WPR_REGION before creating & executing the BOOTSTRAP_GR_FALCONS RPC.
- Added code to handle the BOOTSTRAP_GR_FALCONS ack in the RPC handler.

Change-Id: If70dc75bb2789970382853fb001d970a346b2915
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1613316
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv100: INIT WPR region using RPC | Mahantesh Kumbar | 2018-01-25

- Created nv_pmu_rpc_struct_acr_init_wpr_region struct.
- Function gv100_pmu_init_acr() to create & execute INIT_WPR_REGION using RPC.
- Updated gv100 HAL .init_wpr_region to point to gv100_pmu_init_acr().
- Added code to handle the INIT_WPR_REGION ack in the RPC handler.

Change-Id: I699fa945790689e5f24ad5d3de022efb458662e0
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1613290
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv100: PMU f/w update | Mahantesh Kumbar | 2018-01-25

- Added new version of the PMU init msg "pmu_init_msg_pmu_v5".
- Created methods to read the new PMU init message parameters based on f/w version for the ops below:
  .get_pmu_msg_pmu_init_msg_ptr
  .get_pmu_init_msg_pmu_sw_mg_off
  .get_pmu_init_msg_pmu_sw_mg_size
- Corrected the PMU_DMEM_ALLOC_ALIGNMENT value to 32 bit to allocate PMU DMEM space for nvgpu.
- Updated the PMU version of GV100/APP_VERSION_BIGGPU to 23440730; the PMU ucode CL is https://git-master.nvidia.com/r/#/c/1642432/

Change-Id: Ib1e0197b5f3a229a601e810c9c0d93f05b9d69e7
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1642229
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: disable idle clock slowdown | seshendra Gadagottu | 2018-01-24

Updated thermal settings as per hw POR update:
- Disabled idle clock slowdown.
- Updated therm_grad_stepping1_pdiv_duration as per the updated hw POR value.

Bug 200365110

Change-Id: I0c67366ecebd5681343746e9badb57fa74dfaeaa
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1643895
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove fault_buf_status array | Alex Waterman | 2018-01-24

Now that we have a consistent way to check if a mem allocation is valid, this array is not necessary. The code can simply check the validity of the nvgpu_mem.

Change-Id: I6aaf563ddc314cf86a2c2b98f7eb75fa7a9a1ad9
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1641637
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: PMU code cleanup | Mahantesh Kumbar | 2018-01-23

- Removed unsupported PMU f/w version defines and corrected chip-specific naming.
- Removed unsupported PMU f/w version methods which are not useful for the existing ucode.
- Removed unsupported PMU interfaces which are not useful for the existing ucode.

Change-Id: I17933ff656f48a888e049d680f108b2ef7537439
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1643399
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Fold T19x code back to main code paths | Terje Bergstrom | 2018-01-23

Lots of code paths were split into T19x specific code paths and structs due to the split repository. Now that the repositories are merged, fold all of them back into the main code paths and structs, and remove the T19x specific Kconfig flag.

Change-Id: Id0d17a5f0610fc0b49f51ab6664e716dc8b222b6
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1640606
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add sw method for SET_BES_CROP_DEBUG4 | seshendra Gadagottu | 2018-01-22

Added sw method support for SET_BES_CROP_DEBUG4. In this sw method, CLAMP_FP_BLEND_TO_MAXVAL forces overflow and CLAMP_FP_BLEND_TO_INF blend results to clamp to FP maxval. Added support for this sw method in gp10b/gp106/gv11b and gv100.

Bug 2046636

Change-Id: I3a9e97587aca76718f7f504ea3b853f87409092a
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1641529
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Update gk20a pde bit coverage function | Alex Waterman | 2018-01-19

The mm_gk20a.c function that returns the number of bits that a PDE covers is very useful for determining PDE size for all chips. Copy this into the common VM code since this function applies to all chips/platforms.

Bug 200105199

Change-Id: I437da4781be2fa7c540abe52b20f4c4321f6c649
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1639730
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: Enable perfmon. | Deepak Goyal | 2018-01-19

t19x PMU ucode uses the RPC mechanism for PERFMON commands.

- Declared "pmu_init_perfmon", "pmu_perfmon_start_sampling", "pmu_perfmon_stop_sampling" and "pmu_perfmon_get_samples" in pmu ops to differentiate between chips using RPC and the legacy cmd/msg mechanism.
- Defined and used PERFMON RPC commands for t19x: INIT, START, STOP, QUERY.
- Adds an RPC handler for PERFMON RPC commands.
- For querying GPU utilization/load, we need to send the PERFMON_QUERY RPC command for gv11b.
- Enables perfmon for gv11b.

Bug 2039013

Change-Id: Ic32326f81d48f11bc772afb8fee2dee6e427a699
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1614114
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: RPC interface support | Mahantesh Kumbar | 2018-01-19

- Created nv_pmu_rpc_cmd & nv_pmu_rpc_msg structs, and added member rpc under pmu_cmd & pmu_msg.
- Created the RPC header interface.
- Created the RPC desc struct and added it as a member of the pmu payload.
- Defined PMU_RPC_EXECUTE() to convert different RPC requests into a generic RPC call.
- nvgpu_pmu_rpc_execute() function to execute an RPC request by creating the required RPC payload and sending the request to the PMU for execution.
- nvgpu_pmu_rpc_execute() function as the default callback handler for an RPC if the caller did not provide a callback.
- Modified nvgpu_pmu_rpc_execute() to include a check of the RPC payload parameter.
- Modified nvgpu_pmu_cmd_post() to handle RPC payload requests.

JIRA GPUT19X-137

Change-Id: Iac140eb6b98d6bae06a089e71c96f15068fe7e7b
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1613266
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
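A heavily hedged sketch of what a call site using such a macro could look like; the PMU_RPC_EXECUTE() argument list and the request fields are assumptions based only on the descriptions in this log:

    struct nv_pmu_rpc_struct_acr_init_wpr_region rpc;
    int status = 0;

    memset(&rpc, 0, sizeof(rpc));
    /* Unit-specific request fields would be filled in here. */

    /* Expands into building the RPC payload and posting it to the PMU
     * through nvgpu_pmu_rpc_execute(). */
    PMU_RPC_EXECUTE(status, pmu, ACR, INIT_WPR_REGION, &rpc, 0);
    if (status != 0)
            nvgpu_err(g, "INIT_WPR_REGION RPC failed: %d", status);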
* gpu: nvgpu: ramin_big_page_size default val is set to 64kb | Seema Khowala | 2018-01-18

- MMU_CTRL_USE_PDB_BIG_PAGE_SIZE is set to TRUE, and hence RAMIN_BIG_PAGE_SIZE should be set to 64KB, i.e. val 1. By default this is set to 128KB, i.e. val 0.
- This change will also fix an issue where the perfbuffer_enable and nvgpu_init_hwpm functions pass 0 as the big page size while initializing the inst_block, due to which ramin_big_page_size did not get updated to 64KB and remained set to the unsupported 128KB value.
- Volta supports 64KB for big pages. Selecting 128KB for big pages results in an UNBOUND_INSTANCE fault.

Bug 200327596

Change-Id: Ie304e4e5ff7bedaead27e9380d64c59013dd64ca
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1639540
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv1xx: remove scg_type from channel info | seshendra Gadagottu | 2018-01-18

scg_type for graphics_compute0 and compute1 is deprecated for gv1xx. Remove it from the channel info setting.

Bug 1842197

Change-Id: I37354adcd82bb0ab648e0f04d47de796b79f91cd
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1640440
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: handle SM reported MMU_NACK exception | Deepak Nibade | 2018-01-12

Upon receiving an MMU_FAULT error, the MMU will forward MMU_NACK to the SM. If MMU_NACK is masked out, the SM will simply release the semaphores, and if the semaphores are released before the MMU fault is handled, user space could incorrectly see that operation as successful.

Fix this by handling the SM reported MMU_NACK exception. Enable MMU_NACK reporting in gv11b_gr_set_hww_esr_report_mask.

In the MMU_NACK handling path, we just set the error notifier and clear the interrupt so that user space sees the error as soon as the semaphores are released by the SM. The MMU_FAULT handling path will take care of triggering RC recovery anyway.

Also add the necessary h/w accessors for mmu_nack.

Bug 2040594
Jira NVGPU-473

Change-Id: Ic925c2d3f3069016c57d177713066c29ab39dc3d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1631708
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Free enabled flags on driver unload | Alex Waterman | 2018-01-10

Make sure the enabled flags are freed before the driver unloads.

Bug 200369180

Change-Id: Ibac9ee61ca99bdfda03d76e393c7cd6cb6cc299a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1632752
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: allocate from coherent pool | David Nieto | 2018-01-08

Maps memory coherently on devices that are connected to a coherent bus:

(1) Add code to be able to get the platform device node.
(2) Create a new flag to mark if the device is connected to a coherent bus.
(3) Map memory coherently on coherent devices.

bug 2040331

Change-Id: Ide83a9261acdbbc6e9fef4fc5f38d6f9d0e5ab5b
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1633985
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>