path: root/drivers/gpu/nvgpu/common
Commit message (Author, Age)
* gpu: nvgpu: support user fence updates (Deepak Nibade, 2018-02-26)

  Add support for user fence updates, i.e. syncpoint increments added by
  user space directly in the pushbuffer.

  Add a submit IOCTL flag, NVGPU_SUBMIT_GPFIFO_FLAGS_USER_FENCE_UPDATE, to
  indicate that the user has added increments in the pushbuffer. If set,
  the number_of_increment value is received from the user in fence.value.

  If the user is adding increments in the pushbuffer then we don't need to
  do any job tracking in the kernel, so fail the submit if we evaluate
  need_job_tracking to true while FLAGS_USER_FENCE_UPDATE is set. The user
  is responsible for ensuring all prerequisites for a fast submit and for
  preventing kernel job tracking.

  Since user space adds the increments in the pushbuffer, the kernel only
  handles the threshold bookkeeping.

  Bug 200326065
  Jira NVGPU-179

  Change-Id: Ic0f0b1aa69e3389a4c3305fb6a559c5113719e0f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1661854
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

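  A minimal user-space sketch of a submit that uses this flag. Only the
  flag name and the fence.value semantics come from the commit; the ioctl
  name and the nvgpu_submit_gpfifo_args/nvgpu_fence layout are assumptions
  modelled on nvgpu's uapi header (linux/nvgpu.h) and are illustrative.

      /* Hypothetical sketch: user space has already written the syncpoint
       * increments into the pushbuffer, so only threshold bookkeeping is
       * left to the kernel. */
      #include <stdint.h>
      #include <string.h>
      #include <sys/ioctl.h>
      #include <linux/nvgpu.h>

      static int submit_with_user_fence(int ch_fd, uint64_t gpfifo_va,
                                        uint32_t num_entries,
                                        uint32_t num_increments)
      {
          struct nvgpu_submit_gpfifo_args args;

          memset(&args, 0, sizeof(args));
          args.gpfifo = gpfifo_va;
          args.num_entries = num_entries;
          /* Increments are already in the pushbuffer. */
          args.flags = NVGPU_SUBMIT_GPFIFO_FLAGS_USER_FENCE_UPDATE;
          /* Number of increments added by user space (fence.value). */
          args.fence.value = num_increments;

          return ioctl(ch_fd, NVGPU_IOCTL_CHANNEL_SUBMIT_GPFIFO, &args);
      }
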
* gpu: nvgpu: add user API to get a syncpoint (Deepak Nibade, 2018-02-26)

  Add new user API NVGPU_IOCTL_CHANNEL_GET_USER_SYNCPOINT which will
  expose the per-channel allocated syncpoint to user space. The API will
  also return the current value of the syncpoint. On supported platforms,
  this API will also return a RW semaphore address (corresponding to the
  syncpoint shim) to user space.

  Add new characteristics flag NVGPU_GPU_FLAGS_SUPPORT_USER_SYNCPOINT to
  indicate support for this new API. Add new flag
  NVGPU_SUPPORT_USER_SYNCPOINT for use by the core driver. Set this flag
  for GV11B and GP10B for now.

  Add a new API (*syncpt_address) in struct gk20a_channel_sync to get the
  GPU_VA address of a syncpoint.

  Add new API nvgpu_nvhost_syncpt_read_maxval() which will read and return
  the MAX value of a syncpoint.

  Bug 200326065
  Jira NVGPU-179

  Change-Id: I9da6f17b85996f4fc6731c0bf94fca6f3181c3e0
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1658009
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

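  A hedged sketch of querying the per-channel syncpoint from user space.
  Only the ioctl name and the returned information (syncpoint id, its
  value, and an optional RW semaphore GPU_VA) come from the commit; the
  args struct and its field names below are assumptions.

      #include <stdio.h>
      #include <sys/ioctl.h>
      #include <linux/nvgpu.h>

      static int query_channel_syncpoint(int ch_fd)
      {
          struct nvgpu_get_user_syncpoint_args args = {0};

          if (ioctl(ch_fd, NVGPU_IOCTL_CHANNEL_GET_USER_SYNCPOINT, &args) != 0)
              return -1;

          /* gpu_va is only meaningful on platforms that expose the
           * syncpoint shim as a RW semaphore address. */
          printf("syncpt %u, max value %u, shim gpu_va 0x%llx\n",
                 args.syncpoint_id, args.syncpoint_max,
                 (unsigned long long)args.gpu_va);
          return 0;
      }
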
* gpu: nvgpu: gv100: nvlink endpoint driver (Thomas Fleury, 2018-02-26)

  The following changes implement the initial (as per bringup) nvlink
  driver:

  (1) SW initialization of nvlink core driver structures
  (2) Nvlink interrupt handling
  (3) Device initialization (IOCTRL, pll and clocks, device level intr)
  (4) Falcon support for minion
  (5) Minion load and bootstrapping
  (6) Link initialization and DL PROD settings
  (7) Device Interface init (and switching HSHUB to nvlink)
  (8) HS set/get mode for both link and sublink
  (9) Topology discovery and VBIOS settings
  (10) Ensures we get physically contiguous memory when Nvlink is enabled

  This driver includes a hack for the current single dev/single link
  limitation.

  JIRA: EVLR-2331
  JIRA: EVLR-2330
  JIRA: EVLR-2329
  JIRA: EVLR-2328

  Change-Id: Idca9a819179376cc655784482b24b575a52fa9e5
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1656790
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Abstract kernel_restart() (Alex Waterman, 2018-02-24)

  This function is used in gk20a.c to handle catastrophic error conditions
  but is Linux specific. As such, implement an abstraction for this in
  driver_common.c and expose the API in nvgpu_common.h.

  JIRA NVGPU-525

  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Change-Id: Ie2e417d30af5ff7db76f4d2d5b97ec96c386bd04
  Reviewed-on: https://git-master.nvidia.com/r/1662543
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Add missing log2 header include (Alex Waterman, 2018-02-24)

  These two files (common/mm/vm.c and common/as.c) both use functions
  defined in log2.h but do not include log2.h. This went unnoticed in
  nvgpu on Tegra, but is an issue for POSIX.

  JIRA NVGPU-525

  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Change-Id: I09250f6928f5cb26bb6b7fbdae13cb703bd8f27b
  Reviewed-on: https://git-master.nvidia.com/r/1662541
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Cleanup map attributes debugging (Alex Waterman, 2018-02-22)

  Make the map attributes printed by the map debug code more easily
  readable and consistent.

  Change-Id: I9737131a2ea44c6a080dff0095929760888b83ae
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1654518
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: When NVLINK is enabled use phys addresses (Alex Waterman, 2018-02-21)

  When NVLINK is enabled we need to use phys addresses from the SGT since
  NVLINK bypasses the SMMU.

  JIRA EVLR-2333

  Change-Id: Ibfc0454fa7616056761f8626f2a611749775d091
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1654561
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: POSIX does not have a strlcpy (Alex Waterman, 2018-02-16)

  So don't use it in common code. This could be implemented in common
  code, but it would most likely just be a wrapper around strncpy(), since
  we aren't going to maintain low level (possibly asm) implementations of
  APIs.

  NVGPU-525

  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Change-Id: If446589cd1736456184daa75ae539c4ce332b741
  Reviewed-on: https://git-master.nvidia.com/r/1658300
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

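  A minimal sketch of the kind of wrapper the message alludes to: a
  strlcpy-like helper built on strncpy(). The name nvgpu_strlcpy is
  hypothetical; the commit itself only removes strlcpy() from common code.

      #include <string.h>

      static size_t nvgpu_strlcpy(char *dst, const char *src, size_t size)
      {
          size_t src_len = strlen(src);

          if (size != 0) {
              strncpy(dst, src, size - 1);
              dst[size - 1] = '\0';  /* strncpy does not always terminate */
          }

          return src_len;  /* like strlcpy: length the caller wanted copied */
      }
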
* gpu: nvgpu: vgpu: add characteristic flag for syncpoint address support (Lakshmanan M, 2018-02-16)

  Add characteristic flag NVGPU_GPU_FLAGS_SUPPORT_SYNCPOINT_ADDRESS to
  indicate if the platform supports a semaphore GPU_VA address for a
  syncpoint.

  Bug 200327559

  Change-Id: I20f532e22c29d1adaff0fbc4204e36cc8455e572
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1657983
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Tested-by: Jitendra Pratap Singh Chauhan <jchauhan@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Allow disabling CDE functionality (Terje Bergstrom, 2018-02-15)

  CDE is a Tegra SoC specific feature. Add new config option
  CONFIG_NVGPU_SUPPORT_CDE and #ifdef all CDE specific code with it.

  JIRA NVGPU-4

  Change-Id: I6f0b0047d6ba2b5c36c2eb9b8a1514776741f5b5
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1648002
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

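  An illustration of the config gating described above, assuming the usual
  kernel header pattern. Only CONFIG_NVGPU_SUPPORT_CDE comes from the
  commit; the function below stands in for any CDE-specific entry point.

      #ifdef CONFIG_NVGPU_SUPPORT_CDE
      int gk20a_init_cde_support(struct nvgpu_os_linux *l);
      #else
      static inline int gk20a_init_cde_support(struct nvgpu_os_linux *l)
      {
          return 0;  /* CDE compiled out: nothing to initialize */
      }
      #endif
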
* gpu: nvgpu: gv11b: limit min freq to 216.75Mhz (seshendra Gadagottu, 2018-02-15)

  Until the issue related to low frequencies is root-caused, limit the
  minimum frequency to a known safe value: 216.75 MHz. This change needs
  to be reverted once the original issue is root-caused and fixed.

  Bug 2051863
  Bug 2056266

  Change-Id: If6e56f59ee5fa06967fde1128b58a7fc97be74e9
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1657595
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Remove the use of READ_ONLY for DMA API (Terje Bergstrom, 2018-02-15)

  The READ_ONLY flag for the DMA API is Tegra specific. We use it only to
  prevent accidental writes to the non-secure ACR bootloader. Its use is
  marginal, so remove the flag.

  JIRA NVGPU-4

  Change-Id: I887dc04aee8f7ace40220294851b210375dfde98
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1648174
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Use preallocated VPR buffer (Terje Bergstrom, 2018-02-15)

  To prevent deadlock while allocating VPR in nvgpu, allocate all the
  needed VPR memory at probe time and use an internal allocator to hand
  out space for VPR buffers.

  Change-Id: I584b9a0f746d5d1dec021cdfbd6f26b4b92e4412
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1655324
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

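  A toy sketch of the approach: carve the VPR memory out once at probe and
  hand out chunks with a simple internal allocator. This is a generic
  illustration, not nvgpu's actual allocator.

      #include <linux/kernel.h>
      #include <linux/sizes.h>
      #include <linux/types.h>

      struct vpr_carveout {
          u64 base;  /* physical base of the preallocated VPR region */
          u64 size;  /* total size reserved at probe time */
          u64 used;  /* simple bump pointer */
      };

      static u64 vpr_carveout_alloc(struct vpr_carveout *c, u64 len)
      {
          u64 addr;

          len = ALIGN(len, SZ_4K);
          if (c->used + len > c->size)
              return 0;  /* preallocated VPR space exhausted */

          addr = c->base + c->used;
          c->used += len;
          return addr;
      }
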
* gpu: nvgpu: delete nvgpu_semaphore_int list (Konsta Holtta, 2018-02-14)

  The hw semas in a sema pool are stored in a list. All elements in this
  list are freed in a loop when a semaphore pool is destroyed. However,
  each hw sema is always owned by a channel, and each such channel frees
  its hw sema during channel closure before putting a ref to the VM which
  holds a ref to the sema pool, so the lifetime of all the hw semas is
  shorter than that of the pool and this list is always empty when freeing
  the pool. Delete the list and this freeing loop.

  Meanwhile delete also the nr_incrs member in nvgpu_semaphore_int that is
  never accessed.

  Jira NVGPU-512

  Change-Id: Ie072029f9e7cc749141e9f02ef45fdf64358ad96
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1653540
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv11b: enable elpg (seshendra Gadagottu, 2018-02-13)

  Enabled Engine Level Power Gating for gv11b.

  Bug 2051863

  Change-Id: I59a51dbe8fa9f13e4b8be03f02e1571093fdaeb0
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1646322
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: call nvgpu_init_mm_vars just after probe (Seema Khowala, 2018-02-09)

  It is good to init the mm vars right after probe, as the driver is
  heavily dependent on the enabled flags for all kinds of memory related
  needs.

  Change-Id: I62ca280ff9240649798faa34767f7dc9ea3c0db1
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649724
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv11b: Fix CBC base calculus (Sami Kiminki, 2018-02-09)

  On GV11B, the CBC base is calculated in a similar fashion to how it's
  calculated on dGPUs. Thus, remove gv11b_ltc_cbc_fix_config() as it would
  incorrectly multiply the CBC base by the LTC count.

  Bug 2054860

  Change-Id: Iaed717161547468c17e12236149d970c497885b3
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1654506
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: handle pm_prepare_poweroff failure (seshendra Gadagottu, 2018-02-09)

  As part of gk20a_pm_prepare_poweroff, the GPU HW state is destroyed even
  in case of errors. So try to recover from that situation by calling
  gk20a_pm_finalize_poweron.

  Bug 200380708

  Change-Id: Ibff656cda67241ad111fd22701e05871f20d6f70
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1653750
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Rely on own dma attribute handling (Terje Bergstrom, 2018-02-08)

  The Tegra kernel abstracts the Linux 4.4 vs Linux 4.9 differences from
  drivers. The upstream kernel does not provide that facility, so add an
  nvgpu-internal way of dealing with the differences.

  JIRA NVGPU-4

  Change-Id: I8289fdcf98873de14398bffc808d89a675f2aa15
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1648160
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add vpr flag in gpu characteristics (Aparna Das, 2018-02-08)

  VPR is currently not supported in the virtualized configuration. Allow
  reporting the VPR capability in gpu characteristics.

  Jira EVLR-2236

  Change-Id: Id61a0045577e4add0d9cdfddcefcedd5b20eb1dd
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1639798
  (cherry picked from commit 4136b74fd4435966ee2e69ec88fb66424382a7c0)
  Reviewed-on: https://git-master.nvidia.com/r/1640712
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add user API to get read-only syncpoint address map (Deepak Nibade, 2018-02-07)

  Add user space API NVGPU_AS_IOCTL_GET_SYNC_RO_MAP to get a read-only
  syncpoint address map in user space.

  We already map the whole syncpoint shim to each address space, with the
  base address being vm->syncpt_ro_map_gpu_va. This new API exposes this
  base GPU_VA address of the syncpoint map, and the unit size of each
  syncpoint, to user space. User space can then calculate the address of
  each syncpoint as

    syncpoint_address = base_gpu_va + (syncpoint_id * syncpoint_unit_size)

  Note that this syncpoint address is read-only and should only be used
  for inserting semaphore acquires. Adding a semaphore release with this
  address would result in an MMU_FAULT.

  Define new HAL g->ops.fifo.get_sync_ro_map and set this for all GPUs
  supported on the Xavier SoC.

  Bug 200327559

  Change-Id: Ica0db48fc28fdd0ff2a5eb09574dac843dc5e4fd
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649365
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

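  A sketch of the user-side address computation. The formula is quoted
  from the commit message; the ioctl args struct and its field names are
  assumptions modelled on nvgpu's uapi header.

      #include <stdint.h>
      #include <sys/ioctl.h>
      #include <linux/nvgpu.h>

      static uint64_t syncpt_ro_gpu_va(int as_fd, uint32_t syncpoint_id)
      {
          struct nvgpu_as_get_sync_ro_map_args map = {0};

          if (ioctl(as_fd, NVGPU_AS_IOCTL_GET_SYNC_RO_MAP, &map) != 0)
              return 0;

          /* Read-only mapping: use only for semaphore acquires; a release
           * through this address would cause an MMU fault. */
          return map.base_gpu_va +
                 (uint64_t)syncpoint_id * map.sync_size;
      }
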
* gpu: nvgpu: add characteristic flag for syncpoint address support (Deepak Nibade, 2018-02-07)

  Add characteristic flag NVGPU_GPU_FLAGS_SUPPORT_SYNCPOINT_ADDRESS to
  indicate if the platform supports a semaphore GPU_VA address for a
  syncpoint.

  Define NVGPU_SUPPORT_SYNCPOINT_ADDRESS for core driver book keeping.

  Set this flag for both GV100 and GV11B since the Xavier SoC supports a
  semaphore GPU_VA address for a syncpoint through the syncpoint SHIM.

  Bug 200327559

  Change-Id: I1f31673c9fd59f493d0b35a80d23151fc063ae06
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649364
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv11b: add scg support info in gpu characteristics (seshendra Gadagottu, 2018-02-02)

  Indicated support for Simultaneous Compute and Graphics (SCG) in gpu
  characteristics for gv11b.

  Bug 2053932

  Change-Id: I788e22242083dff775dd4cc5b9aa73c938028536
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649805
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Cleanup usage of bypass_smmu (Alex Waterman, 2018-02-02)

  The GPU has multiple different operating modes in respect to
  IOMMU'ability. As such there needs to be a clean way to tell the driver
  whether it is IOMMU'able or not. This state also does not always reflect
  what is possible: just because the GPU can generate IOMMU'ed memory
  requests doesn't mean it wants to.

  The nvgpu_iommuable() API has now existed for a little while and is a
  useful way to convey whether nvgpu should consider the GPU as
  IOMMU'able. However, there is also the g->mm.bypass_smmu flag which used
  to be able to override what the GPU decided it should do. Typically it
  was assigned the same value as nvgpu_iommuable(), but that was not
  necessarily a requirement.

  This patch removes all the usages of g->mm.bypass_smmu and instead uses
  the nvgpu_iommuable() function. All checks against g->mm.bypass_smmu
  have been replaced with nvgpu_iommuable(). The code should now be much
  cleaner.

  Subsequently other checks can also be placed in the nvgpu_iommuable()
  function. For example, when NVLINK comes online and the GPU should no
  longer consider DMA addresses and instead use scatter-gather lists
  directly, the nvgpu_iommuable() function will be able to check the state
  of NVLINK and then act accordingly.

  Change-Id: I0da6262386de15709decac89d63d3eecfec20cd7
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1648332
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

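  A small illustration of the substitution this commit makes. The helper
  below is hypothetical; only nvgpu_iommuable() and g->mm.bypass_smmu are
  named in the message.

      static bool gpu_uses_raw_phys_addrs(struct gk20a *g)
      {
          /* Before this commit: return g->mm.bypass_smmu; */
          return !nvgpu_iommuable(g);
      }
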
* gpu: nvgpu: Remove init_state initialization code (Tejal Kudav, 2018-02-02)

  The nvlink core library no longer exposes the set_init_state() interface
  as it wishes to block init_state changes from endpoint drivers. Now, the
  core driver is responsible for initializing init_state variables using
  the set_init_state() interface. Hence, we remove this redundant code.

  Change-Id: I81c4922cf48f7918e69795579b39b7fa0c299644
  Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1646437
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Adeel Raza <araza@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Set DMA mask to 34 bits (Alex Waterman, 2018-02-01)

  Set the DMA mask to 34 bits so that large DMA allocs can be done.
  Currently the DMA mask is left unset, which limits the size of the
  maximum DMA allocation to 32 bits.

  The 34 bit mask was chosen because it works for all chips (even gm20b
  supports 34 bit physical addresses). However, newer chips could use
  larger masks in the future if they desire.

  Bug 200377221

  Change-Id: Iaa0543f77ff4e2bd6616f38e4464240375bb37b6
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1641762
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

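  A minimal sketch of setting a 34-bit DMA mask in a Linux probe path;
  this uses the standard kernel API rather than code copied from the
  commit.

      #include <linux/dma-mapping.h>

      static int example_set_nvgpu_dma_mask(struct device *dev)
      {
          /* 34 bits covers every supported chip (even gm20b), per the
           * commit message; newer chips could raise this later. */
          return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(34));
      }
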
* gpu: nvgpu: gv11b: enable devfreq (seshendra Gadagottu, 2018-02-01)

  After moving the devfreq enable to the end of finalize power on, the
  intermittent issues related to gpu booting with devfreq enabled are
  fixed. Enabled devfreq for gv11b by enabling the "nvhost_podgov"
  governor in the platform data. Reused scaling functions from
  gp10b/gk20a.

  Removed the emc floor on railgate for power saving. Added the max emc
  frequency as a floor in rail-ungate for faster gpu boot.

  Bug 2049965
  Bug 2039013
  Bug 200377508

  Change-Id: Ia1dec278b663b9f7ed859dd953a60f3eae7ef9a0
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1644702
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Add tracking of dma_buf_attachment (Terje Bergstrom, 2018-02-01)

  VM and CDE code assumes that dma_buf_attachment is stored as a pointer
  in the private dma_buf_drvdata, so it is not tracked. In Linux trees
  without dma_buf_*_drvdata() support this is not true, so change the code
  to explicitly track dma_buf_attachment.

  JIRA NVGPU-4

  Change-Id: I692f05a19a6469195d5444a7e5ff6e92f77ae272
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1648004
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Remove NVGPU_IOCTL_GET_BUFFER_INFO (Terje Bergstrom, 2018-01-31)

  The IOCTL was introduced for making efficient query of buffer identity
  and size. It was never taken into use, and it adds a dependency to a
  Tegra specific dma_buf API, so remove it.

  JIRA NVGPU-4

  Change-Id: I194d7bb1f54997900a3be8d39c93331befa225c7
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1648001
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: enable devfreq for silicon only (seshendra Gadagottu, 2018-01-31)

  GPU frequency scaling is available only on silicon platforms. Added a
  check for silicon platforms before enabling scaling init.

  Bug 2049965
  Bug 2039013
  Bug 200377508

  Change-Id: Ie780147cee904137e4618e17162e5cedba4987ee
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1642529
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: enable devfreq after finalize poweron (seshendra Gadagottu, 2018-01-31)

  Enabling the gpu scaling driver after finalize poweron will make gpu
  booting happen at the initially set frequency (1 GHz). Also do the
  platform specific init scaling after enabling the scaling driver.

  Bug 2049965
  Bug 2039013
  Bug 200377508

  Change-Id: I633f8f5a25d9de18cbb3a022913b8b725ccd87e5
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1644703
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Unify querying stream id (Terje Bergstrom, 2018-01-31)

  The Stream ID for gp10b is retrieved directly from DT headers in common
  code. Introduce instead a variable to store the stream ID and move the
  query to platform_gp10b_tegra.c.

  JIRA NVGPU-4

  Change-Id: I123024e13e470283bb691883f8f963eb72c997d8
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1648013
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: vgpu: fix resource leak (Aingara Paramakuru, 2018-01-31)

  gr_ctx->tsgid needs to be set to ensure that the GR ctx free sequence
  will target the correct TSG's GR ctx.

  Bug 200341631

  Change-Id: I83c57597f10ce3af572f114d28312376cea55c2a
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1646790
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv11b: enable clock gating features (seshendra Gadagottu, 2018-01-31)

  Enable ELCG, BLCG and SLCG features.

  Bug 2051863

  Change-Id: Id2c67c94c7b2dd0517d4ee4b0280aeb19f3fe35a
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1646302
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: vgpu: add vgpu_ivc_* wrappers (Richard Zhao, 2018-01-31)

  The tegra_gr_comm_* functions are wrapped as vgpu_ivc_*, which helps
  make the vgpu code more common.

  Jira EVLR-2364

  Change-Id: Id49462ed6c176c73ceee8c6bc41104447748e187
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1645656
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: make .tsg_unbind_channel one layer lower (Richard Zhao, 2018-01-31)

  The message telling the RM server to unbind a channel has to be sent
  after the client unbinds the channel and before the client calls tsg
  release. The channel has to belong to a tsg on the RM server before the
  client submits a runlist to remove the channel; otherwise there's a
  bare-channel problem.

  By moving .tsg_unbind_channel one layer lower,
  gk20a_tsg_unbind_channel() becomes a common function for all chips, and
  it calls tsg release after calling .tsg_unbind_channel. So vgpu won't
  need to worry about the tsg being released before the message is sent to
  the RM server.

  Bug 200382695
  Bug 200382785

  Change-Id: I32acc122f3f9d5d0628049ccf673225f9e90c87a
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1645383
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: cause early VPR resize for gv11b (Deepak Nibade, 2018-01-30)

  Patch 7240b3c2 enabled secure allocation for gv11b. But since we
  allocate secure buffers in the poweron path, and secure allocation needs
  the GPU to be in the off state, this results in a deadlock in the
  poweron path.

  To solve this, we already cause an early VPR resize for older chips by
  calling gk20a_tegra_secure_page_alloc() from late_probe. Implement the
  same for gv11b: add a late_probe callback and add a call to
  gk20a_tegra_secure_page_alloc().

  Bug 2038249

  Change-Id: I8c17b069962b26edbd0639a7c0d6c2fdaa352935
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1648831
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Seema Khowala <seemaj@nvidia.com>

* gpu: nvgpu: vgpu: remove virt_ctx from tegra_gr_comm (Richard Zhao, 2018-01-26)

  The queue index can already index the queues. This also helps make the
  API more common.

  Jira EVLR-2364

  Change-Id: I98a5014ba0510a2687fdf096a160c497bd1f6985
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1646197
  Reviewed-by: Damian Halas <dhalas@nvidia.com>
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: Nirav Patel <nipatel@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: ce: store fences in a separate array (Konsta Holtta, 2018-01-26)

  Simplify the copyengine code massively by storing the job post fence
  pointers in an array of fences instead of mixing them up in the command
  buffer memory. The post fences are used when the ring buffer of a
  context gets full and we need to wait for the oldest slot to free up.

  NVGPU-43
  NVGPU-52

  Change-Id: I36969e19676bec0f38de9a6357767a8d5cbcd329
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1646037
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: ce: drop prefence support (Konsta Holtta, 2018-01-26)

  Delete the gk20a_fence_in argument in gk20a_ce_execute_ops. It has never
  been used and is in the way of some upcoming code cleanup.

  NVGPU-43

  Change-Id: Ie61e1a2f4945b1e34d64880044c265d26fa822d7
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1646036
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: enable IO coherence characteristics for dGPUs (Deepak Nibade, 2018-01-26)

  Enable the NVGPU_SUPPORT_IO_COHERENCE characteristic for dGPUs which
  support DMA_COHERENCE, e.g. GV100.

  Bug 200383034

  Change-Id: If12d2ef6c642f7c4cce83dbf05f492100ee1c7e0
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1644277
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Initial Nvlink driver skeleton (David Nieto, 2018-01-25)

  Adds the skeleton and integration of the GV100 endpoint driver to NVGPU:

  (1) Adds an OS abstraction layer for the internal nvlink structure.
  (2) Adds linux specific integration with the Nvlink core driver.
  (3) Adds function pointers for the nvlink api, initialization and isr
      process.
  (4) Adds initial support for minion.
  (5) Adds new GPU enable properties to handle NVLINK presence.
  (6) Adds new GPU enable properties for SG_PHY bypass (required for
      NVLINK over PCI).
  (7) Adds parsing of nvlink vbios structures.
  (8) Adds logging defines for NVGPU.

  JIRA: EVLR-2328

  Change-Id: I0720a165a15c7187892c8c1a0662ec598354ac06
  Signed-off-by: David Nieto <dmartineznie@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1644708
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add speculative load barrier (sched IOCTLs) (Alex Waterman, 2018-01-25)

  Data can be speculatively loaded from memory and stay in the cache even
  when the bounds check fails. This can lead to unintended information
  disclosure via side-channel analysis. To mitigate this problem, insert a
  speculation barrier.

  Bug 2039126
  CVE-2017-5753

  Change-Id: Iec23eb75ce2a9251c8a5c8cbdd21a32910e1a71a
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1640502
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

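  A generic illustration of the mitigation pattern (Spectre variant 1)
  using the standard kernel helper from <linux/nospec.h>. This is the
  common idiom for this class of fix, not the exact nvgpu hunk; the table
  type below is made up for the example.

      #include <linux/errno.h>
      #include <linux/nospec.h>

      struct example_table {
          unsigned long nr_entries;
          long entries[64];
      };

      static long example_lookup(struct example_table *t, unsigned long idx)
      {
          if (idx >= t->nr_entries)
              return -EINVAL;

          /* Clamp idx under speculation so a mispredicted bounds check
           * cannot pull out-of-bounds data into the cache. */
          idx = array_index_nospec(idx, t->nr_entries);

          return t->entries[idx];
      }
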
* gpu: nvgpu: add speculative load barrier (dbg IOCTLs) (Alex Waterman, 2018-01-25)

  Data can be speculatively loaded from memory and stay in the cache even
  when the bounds check fails. This can lead to unintended information
  disclosure via side-channel analysis. To mitigate this problem, insert a
  speculation barrier.

  Bug 2039126
  CVE-2017-5753

  Change-Id: I982225e754cc5d430c19f4cc542302e52243bd38
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1640501
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add speculative load barrier (VM ioctls) (Alex Waterman, 2018-01-25)

  Data can be speculatively loaded from memory and stay in the cache even
  when the bounds check fails. This can lead to unintended information
  disclosure via side-channel analysis. To mitigate this problem, insert a
  speculation barrier.

  Bug 2039126
  CVE-2017-5753

  Change-Id: Idf09b8d64dbdc2b0e4b504d4d7ea0197d38157d3
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1640499
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add speculative load barrier (channel IOCTLs) (Alex Waterman, 2018-01-25)

  Data can be speculatively loaded from memory and stay in the cache even
  when the bounds check fails. This can lead to unintended information
  disclosure via side-channel analysis. To mitigate this problem, insert a
  speculation barrier.

  Bug 2039126
  CVE-2017-5753

  Change-Id: I6b8af794ea2156f0342ea6cc925051f49dbb1d6e
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1640498
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Cleanup '\n' usage in allocator debugging (Alex Waterman, 2018-01-25)

  These '\n' were left over from the previous debugging macro usage, which
  did not add the '\n' automagically. However, once swapped over to the
  nvgpu logging system, the '\n' is added and no longer needs to be
  present in the code.

  This did require one extra modification, though, to keep things
  consistent. The __alloc_pstat() macro, used for sending output either to
  a seq_file or the terminal, needed to add the '\n' for seq_printf()
  calls, and the '\n' had to be deleted in the C files.

  Change-Id: I4d56317fe2a87bd00033cfe79d06ffc048d91049
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1613641
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

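  A hedged sketch of the dispatch described above: output goes either to a
  debugfs seq_file or to the nvgpu log. seq_printf() needs an explicit
  "\n", while the logging path appends one itself, which is why the format
  strings in the C files could drop theirs. The macro body below is
  illustrative, not the exact nvgpu definition.

      #define __alloc_pstat(seq, allocator, fmt, arg...)          \
          do {                                                    \
              if (seq)                                            \
                  seq_printf(seq, fmt "\n", ##arg);               \
              else                                                \
                  alloc_dbg(allocator, fmt, ##arg);               \
          } while (0)
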
* gpu: nvgpu: gv100: BOOTSTRAP_GR_FALCONS using RPC (Mahantesh Kumbar, 2018-01-25)

  - Created the nv_pmu_rpc_struct_acr_bootstrap_gr_falcons struct.
  - Added a gv100_load_falcon_ucode() function to bootstrap GR falcons
    using RPC; wait for INIT_WPR_REGION before creating & executing the
    BOOTSTRAP_GR_FALCONS RPC.
  - Added code to handle the BOOTSTRAP_GR_FALCONS ack in the RPC handler.

  Change-Id: If70dc75bb2789970382853fb001d970a346b2915
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1613316
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv100: INIT WPR region using RPC (Mahantesh Kumbar, 2018-01-25)

  - Created the nv_pmu_rpc_struct_acr_init_wpr_region struct.
  - Added function gv100_pmu_init_acr() to create & execute
    INIT_WPR_REGION using RPC.
  - Updated the gv100 HAL .init_wpr_region to point to
    gv100_pmu_init_acr().
  - Added code to handle the INIT_WPR_REGION ack in the RPC handler.

  Change-Id: I699fa945790689e5f24ad5d3de022efb458662e0
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1613290
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv100: PMU f/w update (Mahantesh Kumbar, 2018-01-25)

  - Added a new version of the pmu init msg, "pmu_init_msg_pmu_v5".
  - Created methods to support reading the new pmu init message
    parameters based on the f/w version for the below ops:
      .get_pmu_msg_pmu_init_msg_ptr
      .get_pmu_init_msg_pmu_sw_mg_off
      .get_pmu_init_msg_pmu_sw_mg_size
  - Corrected the PMU_DMEM_ALLOC_ALIGNMENT value to 32 bit to allocate PMU
    DMEM space for nvgpu.
  - Updated the PMU version of GV100/APP_VERSION_BIGGPU to 23440730; the
    PMU ucode CL is https://git-master.nvidia.com/r/#/c/1642432/

  Change-Id: Ib1e0197b5f3a229a601e810c9c0d93f05b9d69e7
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1642229
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>