path: root/drivers/gpu/nvgpu/include
Commit message | Author | Age
* gpu: nvgpu: vgpu: move tegra_vgpu.h to include/nvgpu/vgpu/ (Richard Zhao, 2018-02-27)

    tegra_vgpu.h is OS-agnostic, so move it out of the Linux folder.

    Jira EVLR-2364

    Change-Id: Ibbe8923f7af036b3b6730f682f5243ca73810f7b
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1649936
    Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Nirav Patel <nipatel@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: vgpu: add ivm wrappers (Richard Zhao, 2018-02-27)

    Added vgpu_ivm_*() functions to be used by OS-agnostic code.

    Jira EVLR-2364

    Change-Id: I4a2baebcff9723950c4fba99d0879a0c61e3e3a2
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1649935
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add user API to get a syncpoint (Deepak Nibade, 2018-02-26)

    Add new user API NVGPU_IOCTL_CHANNEL_GET_USER_SYNCPOINT which exposes the
    per-channel allocated syncpoint to user space. The API also returns the
    current value of the syncpoint. On supported platforms, it additionally
    returns an RW semaphore address (corresponding to the syncpoint shim) to
    user space.

    Add new characteristics flag NVGPU_GPU_FLAGS_SUPPORT_USER_SYNCPOINT to
    indicate support for this new API, and new flag NVGPU_SUPPORT_USER_SYNCPOINT
    for use by the core driver. Set this flag for GV11B and GP10B for now.

    Add a new API (*syncpt_address) in struct gk20a_channel_sync to get the
    GPU_VA address of a syncpoint, and a new API nvgpu_nvhost_syncpt_read_maxval()
    which reads and returns the MAX value of the syncpoint.

    Bug 200326065
    Jira NVGPU-179

    Change-Id: I9da6f17b85996f4fc6731c0bf94fca6f3181c3e0
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1658009
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

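A hedged sketch of how user space might call the new ioctl; the ioctl name comes from the commit message above, while the header path and args struct are illustrative assumptions, not the actual uapi definitions.

```c
/*
 * Sketch only: calling NVGPU_IOCTL_CHANNEL_GET_USER_SYNCPOINT from user
 * space. The args struct name is an assumption for illustration; the real
 * layout lives in the nvgpu uapi header.
 */
#include <sys/ioctl.h>
#include <linux/nvgpu.h>	/* assumed uapi header location */

int query_channel_syncpoint(int channel_fd)
{
	struct nvgpu_get_user_syncpoint_args args = { 0 };	/* assumed name */

	/* On success, args carries the syncpoint id, its current value and,
	 * on supported platforms, the RW semaphore (shim) address. */
	return ioctl(channel_fd, NVGPU_IOCTL_CHANNEL_GET_USER_SYNCPOINT, &args);
}
```
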
* gpu: nvgpu: gv100: nvlink endpoint driver (Thomas Fleury, 2018-02-26)

    The following changes implement the initial (as per bringup) nvlink driver:

    (1) SW initialization of nvlink core driver structures
    (2) Nvlink interrupt handling
    (3) Device initialization (IOCTRL, pll and clocks, device level intr)
    (4) Falcon support for minion
    (5) Minion load and bootstrapping
    (6) Link initialization and DL PROD settings
    (7) Device Interface init (and switching HSHUB to nvlink)
    (8) HS set/get mode for both link and sublink
    (9) Topology discovery and VBIOS settings
    (10) Ensures we get physically contiguous memory when Nvlink is enabled

    This driver includes a hack for the current single dev/single link limitation.

    JIRA: EVLR-2331
    JIRA: EVLR-2330
    JIRA: EVLR-2329
    JIRA: EVLR-2328

    Change-Id: Idca9a819179376cc655784482b24b575a52fa9e5
    Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1656790
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv100: update registers (Thomas Fleury, 2018-02-26)

    Update GV100 registers for nvlink.

    JIRA EVLR-2328

    Change-Id: I0fad01560022d979fbdcd94fd066e507691969ae
    Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1656052
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Abstract kernel_restart() (Alex Waterman, 2018-02-24)

    This function is used in gk20a.c to handle catastrophic error conditions
    but is Linux specific. As such, implement an abstraction for this in
    driver_common.c and expose the API in nvgpu_common.h.

    JIRA NVGPU-525

    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Change-Id: Ie2e417d30af5ff7db76f4d2d5b97ec96c386bd04
    Reviewed-on: https://git-master.nvidia.com/r/1662543
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Add <nvgpu/types.h> to comptags.c (Alex Waterman, 2018-02-16)

    Problem exposed by user-space nvgpu: <nvgpu/comptags.h> needs to include
    <nvgpu/types.h> since it uses u32, etc.

    JIRA NVGPU-525

    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Change-Id: I8718964502b2e4c7540bce2ec82bdcac2aff5091
    Reviewed-on: https://git-master.nvidia.com/r/1658299
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: delete nvgpu_semaphore_int list (Konsta Holtta, 2018-02-14)

    The hw semas in a sema pool are stored in a list. All elements in this
    list are freed in a loop when a semaphore pool is destroyed. However, each
    hw sema is always owned by a channel, and each such channel frees its hw
    sema during channel closure before putting a ref to the VM which holds a
    ref to the sema pool, so the lifetime of all the hw semas is shorter than
    that of the pool and this list is always empty when freeing the pool.
    Delete the list and this freeing loop. Also delete the nr_incrs member in
    nvgpu_semaphore_int that is never accessed.

    Jira NVGPU-512

    Change-Id: Ie072029f9e7cc749141e9f02ef45fdf64358ad96
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1653540
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Use #define for log masks (Terje Bergstrom, 2018-02-09)

    Log masks are bitmasks and are passed as u32 through the API calls. They
    were still defined as enums, which causes unnecessary implicit conversions.
    Convert the log masks to be defined as u32.

    JIRA NVGPU-52

    Change-Id: I4b20f0ad2a9f18056502940ea677b3ea8526d830
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1649816
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add vpr flag in gpu characteristics (Aparna Das, 2018-02-08)

    VPR is currently not supported in the virtualized configuration. Allow
    reporting VPR capability in gpu characteristics.

    Jira EVLR-2236

    Change-Id: Id61a0045577e4add0d9cdfddcefcedd5b20eb1dd
    Signed-off-by: Aparna Das <aparnad@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1639798
    (cherry picked from commit 4136b74fd4435966ee2e69ec88fb66424382a7c0)
    Reviewed-on: https://git-master.nvidia.com/r/1640712
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add user API to get read-only syncpoint address map (Deepak Nibade, 2018-02-07)

    Add user-space API NVGPU_AS_IOCTL_GET_SYNC_RO_MAP to get the read-only
    syncpoint address map in user space.

    We already map the whole syncpoint shim to each address space, with the
    base address being vm->syncpt_ro_map_gpu_va. This new API exposes this
    base GPU_VA address of the syncpoint map, and the unit size of each
    syncpoint, to user space. User space can then calculate the address of
    each syncpoint as

        syncpoint_address = base_gpu_va + (syncpoint_id * syncpoint_unit_size)

    Note that this syncpoint address is read-only and should only be used for
    inserting semaphore acquires. Adding a semaphore release with this address
    would result in an MMU_FAULT.

    Define new HAL g->ops.fifo.get_sync_ro_map and set this for all GPUs
    supported on the Xavier SoC.

    Bug 200327559

    Change-Id: Ica0db48fc28fdd0ff2a5eb09574dac843dc5e4fd
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1649365
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

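A minimal sketch of the address computation described above; the formula comes from the commit message, while the helper name and parameter names are illustrative assumptions.

```c
#include <stdint.h>

/*
 * Sketch only: compute a read-only syncpoint GPU_VA from the values the
 * NVGPU_AS_IOCTL_GET_SYNC_RO_MAP ioctl is described as returning (base
 * GPU_VA of the syncpoint map and per-syncpoint unit size).
 */
static inline uint64_t syncpoint_ro_address(uint64_t base_gpu_va,
					    uint32_t syncpoint_id,
					    uint32_t syncpoint_unit_size)
{
	/* Read-only mapping: usable for semaphore acquires only. */
	return base_gpu_va + ((uint64_t)syncpoint_id * syncpoint_unit_size);
}
```
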
* gpu: nvgpu: add characteristic flag for syncpoint address support (Deepak Nibade, 2018-02-07)

    Add characteristic flag NVGPU_GPU_FLAGS_SUPPORT_SYNCPOINT_ADDRESS to
    indicate whether the platform supports a semaphore GPU_VA address for a
    syncpoint. Define NVGPU_SUPPORT_SYNCPOINT_ADDRESS for core driver
    bookkeeping.

    Set this flag for both GV100 and GV11B, since the Xavier SoC supports a
    semaphore GPU_VA address for a syncpoint through the syncpoint SHIM.

    Bug 200327559

    Change-Id: I1f31673c9fd59f493d0b35a80d23151fc063ae06
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1649364
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv11b: disable SWDX spill buffer invalidates (Sami Kiminki, 2018-02-06)

    Disable SWDX spill buffer invalidates, as required by HW. Since this
    register is context-switched, add the setting to the GR init sequence.

    Bug 2040262

    Change-Id: I0be10d12516bce6ce6f8fb0e8af5b67f8af92257
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1650563
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv11b: disable SCC pagepool invalidates (Sami Kiminki, 2018-02-06)

    Disable SCC pagepool invalidates, as required by HW. Since this register
    is context-switched, add the setting to the GR init sequence.

    Bug 2040262

    Change-Id: I8dd1b7c7c4b0544878ca57b1261f9c85fa380d47
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1649719
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv11b: add scg support info in gpu characteristics (seshendra Gadagottu, 2018-02-02)

    Indicated support for Simultaneous Compute and Graphics (SCG) in gpu
    characteristics for gv11b.

    Bug 2053932

    Change-Id: I788e22242083dff775dd4cc5b9aa73c938028536
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1649805
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Cleanup usage of bypass_smmu (Alex Waterman, 2018-02-02)

    The GPU has multiple different operating modes with respect to
    IOMMU'ability, so there needs to be a clean way to tell the driver whether
    it is IOMMU'able or not. This state also does not always reflect what is
    possible: just because the GPU can generate IOMMU'ed memory requests
    doesn't mean it wants to.

    The nvgpu_iommuable() API has existed for a little while and is a useful
    way to convey whether nvgpu should consider the GPU as IOMMU'able.
    However, there is also the g->mm.bypass_smmu flag, which used to be able
    to override what the GPU decided it should do. Typically it was assigned
    the same value as nvgpu_iommuable(), but that was not necessarily a
    requirement.

    This patch removes all usages of g->mm.bypass_smmu and instead uses the
    nvgpu_iommuable() function: every check against g->mm.bypass_smmu has been
    replaced with a call to nvgpu_iommuable(). The code should now be much
    cleaner. Subsequently, other checks can also be placed in the
    nvgpu_iommuable() function. For example, when NVLINK comes online and the
    GPU should no longer consider DMA addresses and should instead use
    scatter-gather lists directly, nvgpu_iommuable() will be able to check the
    state of NVLINK and act accordingly.

    Change-Id: I0da6262386de15709decac89d63d3eecfec20cd7
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1648332
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

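A hedged before/after sketch of the conversion this commit describes. Only g->mm.bypass_smmu and nvgpu_iommuable() are taken from the commit message; the wrapper below is illustrative, and note the inverted polarity (bypass_smmu set meant "do not use the SMMU").

```c
/* Illustrative only; assumes the driver's struct gk20a. */
static bool use_iommu_addressing(struct gk20a *g)
{
	/*
	 * Before this change, call sites consulted the raw flag:
	 *
	 *     return !g->mm.bypass_smmu;
	 *
	 * After the change, the query function centralizes the decision and
	 * can later also factor in NVLINK state, as the commit suggests.
	 */
	return nvgpu_iommuable(g);
}
```
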
* gpu: nvgpu: Add tracking of dma_buf_attachment (Terje Bergstrom, 2018-02-01)

    VM and CDE code assumes that dma_buf_attachment is stored as a pointer in
    the private dma_buf_drvdata, so it is not tracked. In Linux trees without
    dma_buf_*_drvdata() support this is not true, so change the code to
    explicitly track dma_buf_attachment.

    JIRA NVGPU-4

    Change-Id: I692f05a19a6469195d5444a7e5ff6e92f77ae272
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1648004
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv11b: enable more gr exceptions (Seema Khowala, 2018-01-31)

    - pd, scc, ds, ssync, mme and sked exceptions are enabled. This will be
      useful for debugging.
    - Handle the enabled interrupts.
    - Add gr ops to handle ssync hww. For legacy chips, the ssync hww_esr
      register is gpcs_ppcs_ssync_hww_esr. Since ssync hww is not enabled on
      legacy chips, ssync hww exception handling is added for Volta only.

    Change-Id: I63ba2eb51fa82e74832df26ee4cf3546458e5669
    Signed-off-by: Seema Khowala <seemaj@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1644751
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add fecs_host_int_enable hal (Seema Khowala, 2018-01-31)

    This will be used to enable fecs interrupts per chip.

    Change-Id: Id99412ca1a9c4caad999c3458b0e9701515db4b9
    Signed-off-by: Seema Khowala <seemaj@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1642554
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv11b: disable cbm alpha/beta cache invalidates (seshendra Gadagottu, 2018-01-31)

    Disabled CBM alpha and beta cache invalidates as required by hw. Since
    these registers are context-switched out, added the invalidates as part of
    the gr init sequence, so the golden context restores these settings for
    all contexts.

    Bug 2040262

    Change-Id: Iffdd03f2ac6440ddd615899c407cfee692460918
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1648948
    Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
    Tested-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Tested-by: Chris Dragan <kdragan@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: vgpu: add vgpu_ivc_* wrappers (Richard Zhao, 2018-01-31)

    tegra_gr_comm_* are wrapped as vgpu_ivc_*, which helps make vgpu code more
    common.

    Jira EVLR-2364

    Change-Id: Id49462ed6c176c73ceee8c6bc41104447748e187
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1645656
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Initial Nvlink driver skeleton (David Nieto, 2018-01-25)

    Adds the skeleton and integration of the GV100 endpoint driver to NVGPU:

    (1) Adds an OS abstraction layer for the internal nvlink structure.
    (2) Adds Linux-specific integration with the Nvlink core driver.
    (3) Adds function pointers for the nvlink api, initialization and isr
        processing.
    (4) Adds initial support for minion.
    (5) Adds new GPU enable properties to handle NVLINK presence.
    (6) Adds new GPU enable properties for SG_PHY bypass (required for NVLINK
        over PCI).
    (7) Adds parsing of nvlink vbios structures.
    (8) Adds logging defines for NVGPU.

    JIRA: EVLR-2328

    Change-Id: I0720a165a15c7187892c8c1a0662ec598354ac06
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1644708
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* nvgpu: gpu: Add speculation barrier macro (Alex Waterman, 2018-01-25)

    Provide a macro for preventing CPU speculation.

    Bug 2039126
    CVE-2017-5753

    Change-Id: Ifa936c079d9f2a0231d0cf35c4d8bdd18d54b238
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1640497
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

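The commit message does not show the macro itself, so the definition and usage below are a hedged sketch of the usual Spectre-v1 (CVE-2017-5753) pattern: validate an untrusted index, then fence before using it. The macro name and its expansion are assumptions.

```c
/* Hedged sketch; barrier() is the kernel's compiler barrier, standing in
 * for whatever architecture-specific speculation fence the real macro uses. */
#define nvgpu_speculation_barrier()	barrier()

static int lookup_entry(unsigned int idx, unsigned int num_entries,
			const int *table)
{
	if (idx >= num_entries)
		return -1;

	/* Keep the CPU from speculatively indexing past the bounds check. */
	nvgpu_speculation_barrier();

	return table[idx];
}
```
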
* gpu: nvgpu: Cleanup '\n' usage in allocator debugging (Alex Waterman, 2018-01-25)

    These '\n' were leftover from the previous debugging macro usage, which
    did not add the '\n' automagically. However, once swapped over to the
    nvgpu logging system, the '\n' is added and no longer needs to be present
    in the code.

    This did require one extra modification to keep things consistent. The
    __alloc_pstat() macro, used for sending output either to a seq_file or to
    the terminal, needed to add the '\n' for seq_printf() calls, and the '\n'
    had to be deleted in the C files.

    Change-Id: I4d56317fe2a87bd00033cfe79d06ffc048d91049
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1613641
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

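A hedged reconstruction of the dual-sink macro described above: only the seq_printf() path needs an explicit '\n', while the nvgpu log path appends it automatically. The argument names and the debug-log macro are assumptions.

```c
/* Sketch only: not the literal nvgpu definition. */
#define __alloc_pstat(seq, allocator, fmt, arg...)		\
	do {							\
		if (seq)					\
			seq_printf(seq, fmt "\n", ##arg);	\
		else						\
			alloc_dbg(allocator, fmt, ##arg);	\
	} while (0)
```
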
* gpu: nvgpu: gv100: BOOTSTRAP_GR_FALCONS using RPC (Mahantesh Kumbar, 2018-01-25)

    - Created the nv_pmu_rpc_struct_acr_bootstrap_gr_falcons struct.
    - gv100_load_falcon_ucode() function to bootstrap GR falcons using RPC;
      wait for INIT_WPR_REGION before creating & executing the
      BOOTSTRAP_GR_FALCONS RPC.
    - Added code to handle the BOOTSTRAP_GR_FALCONS ack in the RPC handler.

    Change-Id: If70dc75bb2789970382853fb001d970a346b2915
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1613316
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv100: INIT WPR region using RPC (Mahantesh Kumbar, 2018-01-25)

    - Created nv_pmu_rpc_struct_acr_init_wpr_region struct
    - Function gv100_pmu_init_acr() to create & execute INIT_WPR_REGION using
      RPC.
    - Updated gv100 HAL .init_wpr_region to point to gv100_pmu_init_acr()
    - Added code to handle INIT_WPR_REGION ack in RPC handler.

    Change-Id: I699fa945790689e5f24ad5d3de022efb458662e0
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1613290
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv100: PMU f/w update (Mahantesh Kumbar, 2018-01-25)

    - Added new version of pmu init msg "pmu_init_msg_pmu_v5"
    - Created methods to support new pmu init message parameter read based on
      f/w version for below ops:
        .get_pmu_msg_pmu_init_msg_ptr
        .get_pmu_init_msg_pmu_sw_mg_off
        .get_pmu_init_msg_pmu_sw_mg_size
    - Corrected PMU_DMEM_ALLOC_ALIGNMENT value to 32 bit to allocate PMU DMEM
      space for nvgpu
    - Updated PMU version of GV100/APP_VERSION_BIGGPU to 23440730; the PMU
      ucode CL is https://git-master.nvidia.com/r/#/c/1642432/

    Change-Id: Ib1e0197b5f3a229a601e810c9c0d93f05b9d69e7
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1642229
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv11b: disable idle clock slowdown (seshendra Gadagottu, 2018-01-24)

    Updated thermal settings as per hw POR update:
    - Disabled idle clock slowdown
    - Updated therm_grad_stepping1_pdiv_duration as per updated hw por value.

    Bug 200365110

    Change-Id: I0c67366ecebd5681343746e9badb57fa74dfaeaa
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1643895
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Remove fault_buf_status array (Alex Waterman, 2018-01-24)

    Now that we have a consistent way to check whether a mem allocation is
    valid, this array is not necessary. The code can simply check the validity
    of the nvgpu_mem.

    Change-Id: I6aaf563ddc314cf86a2c2b98f7eb75fa7a9a1ad9
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1641637
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: PMU code cleanup (Mahantesh Kumbar, 2018-01-23)

    - Removed unsupported PMU f/w version defines & corrected chip-specific
      naming.
    - Removed unsupported PMU f/w version methods which are not useful for
      existing ucode.
    - Removed unsupported PMU interfaces which are not useful for existing
      ucode.

    Change-Id: I17933ff656f48a888e049d680f108b2ef7537439
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1643399
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Fold T19x code back to main code paths (Terje Bergstrom, 2018-01-23)

    Lots of code paths were split into T19x-specific code paths and structs
    due to the split repository. Now that the repositories are merged, fold
    all of them back into the main code paths and structs and remove the
    T19x-specific Kconfig flag.

    Change-Id: Id0d17a5f0610fc0b49f51ab6664e716dc8b222b6
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1640606
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add sw method for SET_BES_CROP_DEBUG4 (seshendra Gadagottu, 2018-01-22)

    Added sw method support for SET_BES_CROP_DEBUG4. In this sw method,
    CLAMP_FP_BLEND_TO_MAXVAL forces overflowed FP blend results to clamp to FP
    maxval, and CLAMP_FP_BLEND_TO_INF forces them to clamp to infinity. Added
    support for this sw method in gp10b/gp106/gv11b and gv100.

    Bug 2046636

    Change-Id: I3a9e97587aca76718f7f504ea3b853f87409092a
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1641529
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Update gk20a pde bit coverage function (Alex Waterman, 2018-01-19)

    The mm_gk20a.c function that returns the number of bits that a PDE covers
    is very useful for determining PDE size for all chips. Copy this into the
    common VM code since it applies to all chips/platforms.

    Bug 200105199

    Change-Id: I437da4781be2fa7c540abe52b20f4c4321f6c649
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1639730
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv11b: Enable perfmon. (Deepak Goyal, 2018-01-19)

    t19x PMU ucode uses the RPC mechanism for PERFMON commands.
    - Declared "pmu_init_perfmon", "pmu_perfmon_start_sampling",
      "pmu_perfmon_stop_sampling" and "pmu_perfmon_get_samples" in pmu ops to
      differentiate between chips using RPC & the legacy cmd/msg mechanism.
    - Defined and used PERFMON RPC commands for t19x:
        - INIT
        - START
        - STOP
        - QUERY
    - Adds an RPC handler for PERFMON RPC commands.
    - For querying GPU utilization/load, we need to send the PERFMON_QUERY RPC
      command for gv11b.
    - Enables perfmon for gv11b.

    Bug 2039013

    Change-Id: Ic32326f81d48f11bc772afb8fee2dee6e427a699
    Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1614114
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: RPC interface support (Mahantesh Kumbar, 2018-01-19)

    - Created nv_pmu_rpc_cmd & nv_pmu_rpc_msg structs, & added member rpc
      under pmu_cmd & pmu_msg.
    - Created the RPC header interface.
    - Created the RPC desc struct & added it as a member of the pmu payload.
    - Defined PMU_RPC_EXECUTE() to convert different RPC requests into a
      generic RPC call.
    - nvgpu_pmu_rpc_execute() function to execute an RPC request by creating
      the required RPC payload & sending the request to the PMU for execution.
    - nvgpu_pmu_rpc_execute() function as the default callback handler for an
      RPC if the caller did not provide a callback.
    - Modified nvgpu_pmu_rpc_execute() function to include a check of the RPC
      payload parameter.
    - Modified nvgpu_pmu_cmd_post() function to handle RPC payload requests.

    JIRA GPUT19X-137

    Change-Id: Iac140eb6b98d6bae06a089e71c96f15068fe7e7b
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1613266
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Tested-by: Deepak Goyal <dgoyal@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

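PMU_RPC_EXECUTE() and nvgpu_pmu_rpc_execute() are named in the commit message, but their signatures are not, so the macro below is only a hedged sketch of the "fill a unit/function-specific RPC struct, then hand it to the generic executor" shape.

```c
/*
 * Illustrative only: a generic execute-RPC wrapper in the spirit of
 * PMU_RPC_EXECUTE(). The parameter list, header fields, token pasting and
 * executor signature are assumptions, not the actual nvgpu definitions.
 * Passing NULL as the callback leaves the default handler in charge, as the
 * commit message describes.
 */
#define PMU_RPC_EXECUTE(status, pmu, unit, func, rpc)			\
	do {								\
		(rpc)->hdr.unit_id  = PMU_UNIT_##unit;			\
		(rpc)->hdr.function = NV_PMU_RPC_ID_##unit##_##func;	\
		(status) = nvgpu_pmu_rpc_execute((pmu), &(rpc)->hdr,	\
						 sizeof(*(rpc)), NULL);	\
	} while (0)
```
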
* gpu: nvgpu: ramin_big_page_size default val is set to 64kb (Seema Khowala, 2018-01-18)

    - MMU_CTRL_USE_PDB_BIG_PAGE_SIZE is set to TRUE, and hence
      RAMIN_BIG_PAGE_SIZE should be set to 64KB, i.e. val 1. By default this
      is set to 128KB, i.e. val 0.
    - This change also fixes an issue where the perfbuffer_enable and
      nvgpu_init_hwpm functions pass 0 as the big page size while initializing
      inst_block, due to which ramin_big_page_size does not get updated to
      64KB and remains set to the unsupported 128KB value.
    - Volta supports 64KB for big pages. Selecting 128KB for big pages results
      in an UNBOUND_INSTANCE fault.

    Bug 200327596

    Change-Id: Ie304e4e5ff7bedaead27e9380d64c59013dd64ca
    Signed-off-by: Seema Khowala <seemaj@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1639540
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv1xx: remove scg_type from channel info (seshendra Gadagottu, 2018-01-18)

    scg_type for graphics_compute0 and compute1 is deprecated for gv1xx.
    Remove it from the channel info settings.

    Bug 1842197

    Change-Id: I37354adcd82bb0ab648e0f04d47de796b79f91cd
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1640440
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: handle SM reported MMU_NACK exception (Deepak Nibade, 2018-01-12)

    Upon receiving an MMU_FAULT error, the MMU will forward MMU_NACK to the
    SM. If MMU_NACK is masked out, the SM will simply release the semaphores,
    and if semaphores are released before the MMU fault is handled, user space
    could incorrectly see that operation as successful.

    Fix this by handling the SM-reported MMU_NACK exception. Enable MMU_NACK
    reporting in gv11b_gr_set_hww_esr_report_mask. In the MMU_NACK handling
    path, we just set the error notifier and clear the interrupt so that user
    space sees the error as soon as the semaphores are released by the SM.
    The MMU_FAULT handling path will take care of triggering RC recovery
    anyway.

    Also add the necessary h/w accessors for mmu_nack.

    Bug 2040594
    Jira NVGPU-473

    Change-Id: Ic925c2d3f3069016c57d177713066c29ab39dc3d
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1631708
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Free enabled flags on driver unload (Alex Waterman, 2018-01-10)

    Make sure the enabled flags are freed before the driver unloads.

    Bug 200369180

    Change-Id: Ibac9ee61ca99bdfda03d76e393c7cd6cb6cc299a
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1632752
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: allocate from coherent pool (David Nieto, 2018-01-08)

    Maps memory coherently on devices that are connected to a coherent bus.

    (1) Add code to be able to get the platform device node.
    (2) Create a new flag to mark if the device is connected to a coherent bus.
    (3) Map memory coherently on coherent devices.

    Bug 2040331

    Change-Id: Ide83a9261acdbbc6e9fef4fc5f38d6f9d0e5ab5b
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1633985
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add guest_managed field in vm_gk20a (Sourab Gupta, 2018-01-04)

    Add a field in vm_gk20a to identify a guest-managed VM, with the
    corresponding checks to ensure that there's no kernel section for
    guest-managed VMs.

    Also make the __nvgpu_vm_init function available globally, so that the vm
    can be allocated elsewhere, requisite fields set, and passed to the
    function to initialize the vm.

    Change-Id: Iad841d1b8ff9c894fe9d350dc43d74247e9c5512
    Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1617171
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Remove TSG required flag (Terje Bergstrom, 2018-01-02)

    Remove the nvgpu-internal flag indicating that TSGs are required. We now
    always require TSGs. This also fixes a regression where CE channels were
    back to using bare channels on gp106.

    Bug 1842197

    Change-Id: Id359e5a455fb324278636bb8994b583936490ffd
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1628481
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Implement abstraction for finding TID (Terje Bergstrom, 2017-12-28)

    Implement an abstraction for finding the thread ID of the thread currently
    being run. This is tracked for context switch tracing. In the Linux kernel
    this is implemented by returning the PID.

    Change-Id: Id46a318894f9a2ff3c85d2c8ef0b02c52783f122
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1627239
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Fix crash on read fail of mc_boot_0_r (Supriya, 2017-12-28)

    This CL handles:
    - Erroneous use of the boot_0 function pointer before it is assigned in
      __nvgpu_check_gpu_state.
    - Proper handling of the error returned from gk20a_readl in
      gk20a_mc_boot_0. With these fixes, the crash is not seen when the
      mc_boot_0 read returns 0 in gk20a_mc_boot_0.
    - The recursion caused by mc.boot_0() calling nvgpu_readl, and nvgpu_readl
      in turn calling mc.boot_0 in case of a read failure.

    Bug 2010966

    Change-Id: Ia087811c67d88948b7fc5fff35e0fabc6ea91989
    Signed-off-by: Supriya <ssharatkumar@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1616274
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Implement abstraction for finding TGID (Terje Bergstrom, 2017-12-27)

    Implement an abstraction for finding the process ID of the thread
    currently being run. This is tracked for context switch tracing. In the
    Linux kernel this is implemented by returning the TGID.

    Change-Id: Ia6bcbd92c8cc25467694a35476e5d5f717194105
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1615985
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

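A hedged sketch of what this abstraction and the TID one above resolve to on Linux: on a task_struct, pid is the thread ID and tgid is the process (thread group) ID. The function names carry a _sketch suffix because the real nvgpu names are not given in the commit messages.

```c
#include <linux/sched.h>

/* Process ID (thread group ID), as used for ctxsw tracing. */
static inline int nvgpu_current_pid_sketch(void)
{
	return current->tgid;
}

/* Thread ID of the currently running thread. */
static inline int nvgpu_current_tid_sketch(void)
{
	return current->pid;
}
```
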
* gpu: nvgpu: gv11b: scrub more fields for sm l1 tag (seshendra Gadagottu, 2017-12-27)

    SM L1 tag needs scrubbing for the following additional fields:
        sm_l1_tag_ecc_control_scrub_pixprf
        sm_l1_tag_ecc_control_scrub_miss_fifo
    With this, SM L1 TAG DBE errors after railgate/ungate are fixed.

    Bug 2039629

    Change-Id: I10ce1d1dd28102f4c2f3fe2fe81801db67b76a21
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1626748
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv11b: update thermal settings (seshendra Gadagottu, 2017-12-14)

    For gv11b, update thermal settings as per hw POR:
    1. Created gv11b specific HAL for init_therm_setup_hw
    2. Update steps for gradual slowdown to 1x, 1.5x, 2x, 4x, 8x, 16x, 32x.
    3. Modified gradual step duration cycles to 4.

    Bug 200365110

    Change-Id: I93c28a3394857aacdf3d304103c9e7c25d4ad344
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1616600
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv11b: implement ecc scrubber (Deepak Goyal, 2017-12-14)

    Check the availability of ecc units by checking the relevant ecc fuse and
    fuse overrides. During gpu boot, initialize ecc units by scrubbing the
    individual ecc units available. ECC initialization should be done before
    gr initialization.

    The following ecc units are scrubbed:
    - SM LRF
    - SM L1 DATA
    - SM L1 TAG
    - SM CBU
    - SM ICACHE

    Bug 200339497

    Change-Id: I54bf8cc1fce639a9993bf80984dafc28dca0dba3
    Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1612734
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv11b: PMU parity HWW ECC support (David Nieto, 2017-12-11)

    Adds support for ISR handling of ECC parity errors for the PMU unit, and
    sets the initial IRQDST mask to deliver ECC interrupts to the host in the
    non-stall PMU irq path.

    JIRA: GPUT19X-83

    Change-Id: I8efae6777811893ecce79d0e32ba81b62c27b1ef
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1611625
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add include path for rmos sort.h (Deepak Nibade, 2017-12-08)

    Add the rmos sort.h include path in common sort.h if __KERNEL__ is
    undefined.

    Jira NVGPU-447

    Change-Id: I33f1e3a49ee43b1b69f9d678af77cb866dab412b
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1614108
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>