path: root/drivers/gpu/nvgpu/common/linux/vgpu
Commit log (newest first): commit message, author, date
* gpu: nvgpu: vgpu: add characteristic flag for syncpoint address support  (Lakshmanan M, 2018-02-16)
  Add the characteristic flag NVGPU_GPU_FLAGS_SUPPORT_SYNCPOINT_ADDRESS to indicate whether the
  platform supports a semaphore GPU_VA address for a syncpoint.
  Bug 200327559
  Change-Id: I20f532e22c29d1adaff0fbc4204e36cc8455e572
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1657983
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>, svc-mobile-coverity <svc-mobile-coverity@nvidia.com>, Vijayakumar Subbu <vsubbu@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: Jitendra Pratap Singh Chauhan <jchauhan@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>
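  A minimal sketch of how such a characteristics flag might be advertised; the structure, the
  helper and the bit value are assumptions for illustration, only the flag name comes from the
  commit message:

      /* Illustrative only: placeholder bit value and simplified structures. */
      #define NVGPU_GPU_FLAGS_SUPPORT_SYNCPOINT_ADDRESS (1ULL << 40)

      struct gpu_characteristics {
              unsigned long long flags;
      };

      static void vgpu_advertise_syncpt_address(struct gpu_characteristics *c,
                                                int has_syncpt_shim_map)
      {
              if (has_syncpt_shim_map)
                      c->flags |= NVGPU_GPU_FLAGS_SUPPORT_SYNCPOINT_ADDRESS;
      }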
* gpu: nvgpu: gv11b: Fix CBC base calculus  (Sami Kiminki, 2018-02-09)
  On GV11B, the CBC base is calculated in the same fashion as on dGPUs. Thus, remove
  gv11b_ltc_cbc_fix_config(), which would incorrectly multiply the CBC base by the LTC count.
  Bug 2054860
  Change-Id: Iaed717161547468c17e12236149d970c497885b3
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1654506
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add user API to get read-only syncpoint address map  (Deepak Nibade, 2018-02-07)
  Add the user space API NVGPU_AS_IOCTL_GET_SYNC_RO_MAP to get the read-only syncpoint address
  map in user space. We already map the whole syncpoint shim into each address space, with the
  base address being vm->syncpt_ro_map_gpu_va. This new API exposes that base GPU_VA address of
  the syncpoint map, and the unit size of each syncpoint, to user space. User space can then
  calculate the address of each syncpoint as
      syncpoint_address = base_gpu_va + (syncpoint_id * syncpoint_unit_size)
  Note that this syncpoint address is read-only and should be used only for inserting semaphore
  acquires; adding a semaphore release with this address would result in an MMU_FAULT.
  Define the new HAL g->ops.fifo.get_sync_ro_map and set it for all GPUs supported on the
  Xavier SoC.
  Bug 200327559
  Change-Id: Ica0db48fc28fdd0ff2a5eb09574dac843dc5e4fd
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649365
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>, Terje Bergstrom <tbergstrom@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
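  The address arithmetic above is simple enough to show in a short user-space sketch; the struct
  layout and field names are assumptions for illustration, only the formula itself comes from
  the commit message:

      #include <stdint.h>

      /* Hypothetical result of NVGPU_AS_IOCTL_GET_SYNC_RO_MAP; the real uapi
       * struct layout may differ, only the two quantities below are implied
       * by the commit message. */
      struct sync_ro_map {
              uint64_t base_gpu_va;   /* vm->syncpt_ro_map_gpu_va */
              uint32_t sync_size;     /* unit size of each syncpoint */
      };

      /* syncpoint_address = base_gpu_va + (syncpoint_id * syncpoint_unit_size).
       * The returned address is read-only: use it only for semaphore acquires,
       * a release through it would trigger an MMU fault. */
      static inline uint64_t syncpt_ro_gpu_va(const struct sync_ro_map *m,
                                              uint32_t syncpoint_id)
      {
              return m->base_gpu_va + (uint64_t)syncpoint_id * m->sync_size;
      }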
* gpu: nvgpu: gv11b: add scg support info in gpu characteristics  (seshendra Gadagottu, 2018-02-02)
  Indicated support for Simultaneous Compute and Graphics (SCG) in gpu characteristics for gv11b.
  Bug 2053932
  Change-Id: I788e22242083dff775dd4cc5b9aa73c938028536
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649805
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Cleanup usage of bypass_smmu  (Alex Waterman, 2018-02-02)
  The GPU has multiple different operating modes with respect to IOMMU'ability, so there needs
  to be a clean way to tell the driver whether it is IOMMU'able or not. This state also does not
  always reflect what is possible: just because the GPU can generate IOMMU'ed memory requests
  doesn't mean it wants to.
  The nvgpu_iommuable() API has now existed for a little while and is a useful way to convey
  whether nvgpu should consider the GPU as IOMMU'able. However, there is also the
  g->mm.bypass_smmu flag, which used to be able to override what the GPU decided it should do.
  Typically it was assigned the same value as nvgpu_iommuable(), but that was not necessarily a
  requirement.
  This patch removes all usages of g->mm.bypass_smmu and uses the nvgpu_iommuable() function
  instead; all checks against g->mm.bypass_smmu have been replaced with nvgpu_iommuable(). The
  code should now be much cleaner. Subsequently, other checks can also be placed in
  nvgpu_iommuable(). For example, when NVLINK comes online and the GPU should no longer use DMA
  addresses and instead use scatter-gather lists directly, nvgpu_iommuable() will be able to
  check the state of NVLINK and act accordingly.
  Change-Id: I0da6262386de15709decac89d63d3eecfec20cd7
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1648332
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
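  A minimal sketch of the check pattern this change converges on, using simplified stand-in
  types; only nvgpu_iommuable() and g->mm.bypass_smmu are from the commit, the mapping helper
  and the address math are invented for illustration:

      /* Illustrative only: 'struct gk20a' here is a stand-in. */
      struct gk20a {
              struct {
                      int bypass_smmu;
              } mm;
      };

      static int nvgpu_iommuable(struct gk20a *g)
      {
              /* The real implementation asks the platform/IOMMU layer; mirroring
               * the legacy flag keeps this sketch self-contained. */
              return !g->mm.bypass_smmu;
      }

      static unsigned long long map_addr(struct gk20a *g, unsigned long long phys)
      {
              /* Before: if (!g->mm.bypass_smmu) ...  After: one query function
               * decides, so NVLINK state etc. can later be folded into it. */
              if (nvgpu_iommuable(g))
                      return phys | (1ULL << 56); /* pretend IOVA tag, illustration only */
              return phys;                        /* bypass the SMMU: raw physical address */
      }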
* gpu: nvgpu: vgpu: fix resource leak  (Aingara Paramakuru, 2018-01-31)
  gr_ctx->tsgid needs to be set to ensure that the GR ctx free sequence will target the correct
  TSG's GR ctx.
  Bug 200341631
  Change-Id: I83c57597f10ce3af572f114d28312376cea55c2a
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1646790
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: add vgpu_ivc_* wrappers  (Richard Zhao, 2018-01-31)
  tegra_gr_comm_* are wrapped as vgpu_ivc_*, which helps make vgpu code more common.
  Jira EVLR-2364
  Change-Id: Id49462ed6c176c73ceee8c6bc41104447748e187
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1645656
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>, Aingara Paramakuru <aparamakuru@nvidia.com>, Terje Bergstrom <tbergstrom@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: make .tsg_unbind_channel one layer lower  (Richard Zhao, 2018-01-31)
  The message telling the RM server to unbind a channel has to be sent after the client unbinds
  the channel and before the client calls tsg release. The channel has to belong to a TSG on the
  RM server before the client submits a runlist to remove the channel; otherwise there is a bare
  channel problem. By moving .tsg_unbind_channel one layer lower, gk20a_tsg_unbind_channel()
  becomes a common function for all chips, and it calls tsg release after calling
  .tsg_unbind_channel, so vgpu no longer needs to worry about the TSG being released before the
  message is sent to the RM server.
  Bug 200382695
  Bug 200382785
  Change-Id: I32acc122f3f9d5d0628049ccf673225f9e90c87a
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1645383
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>, Konsta Holtta <kholtta@nvidia.com>, Terje Bergstrom <tbergstrom@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
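  A compact sketch of the ordering described above: the per-chip (or vgpu) hook runs while the
  TSG still exists, and the TSG reference is dropped only afterwards. The types and the release
  helper are stand-ins; only gk20a_tsg_unbind_channel() and the .tsg_unbind_channel hook are
  named by the commit:

      /* Illustrative only: minimal stand-in types. */
      struct channel { int chid; };
      struct tsg { int refcount; };

      struct fifo_ops {
              /* For vgpu this is where the unbind message goes to the RM server. */
              int (*tsg_unbind_channel)(struct tsg *tsg, struct channel *ch);
      };

      static void tsg_release(struct tsg *tsg)   /* hypothetical release helper */
      {
              tsg->refcount--;
      }

      /* Common code for all chips after the change: the hook runs first, the
       * TSG reference is released only afterwards. */
      static int gk20a_tsg_unbind_channel(struct fifo_ops *ops,
                                          struct tsg *tsg, struct channel *ch)
      {
              int err = 0;

              if (ops->tsg_unbind_channel)
                      err = ops->tsg_unbind_channel(tsg, ch);

              tsg_release(tsg);
              return err;
      }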
* gpu: nvgpu: vgpu: remove virt_ctx from tegra_gr_comm  (Richard Zhao, 2018-01-26)
  The queue index can already index the queues. This also helps make the API more common.
  Jira EVLR-2364
  Change-Id: I98a5014ba0510a2687fdf096a160c497bd1f6985
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1646197
  Reviewed-by: Damian Halas <dhalas@nvidia.com>, svc-mobile-coverity <svc-mobile-coverity@nvidia.com>, Aingara Paramakuru <aparamakuru@nvidia.com>, Terje Bergstrom <tbergstrom@nvidia.com>, Nirav Patel <nipatel@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: set detach_snapshot in gv11b gops  (Richard Zhao, 2018-01-24)
  The gops entry has to be set in order to detach the snapshot; it was missed earlier.
  Jira VFND-4703
  Change-Id: Ia5842494f86fb2d788d72ba372ee8870977a2f67
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1640668
  Reviewed-by: Peter Daifuku <pdaifuku@nvidia.com>, Terje Bergstrom <tbergstrom@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Fold T19x code back to main code paths  (Terje Bergstrom, 2018-01-23)
  Many code paths were split into T19x-specific code paths and structs due to the split
  repository. Now that the repositories are merged, fold all of them back into the main code
  paths and structs, and remove the T19x-specific Kconfig flag.
  Change-Id: Id0d17a5f0610fc0b49f51ab6664e716dc8b222b6
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1640606
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add sw method for SET_BES_CROP_DEBUG4  (seshendra Gadagottu, 2018-01-22)
  Added sw method support for SET_BES_CROP_DEBUG4. In this sw method, CLAMP_FP_BLEND_TO_MAXVAL
  forces overflow and CLAMP_FP_BLEND_TO_INF blend results to clamp to FP maxval. Added support
  for this sw method to gp10b, gp106, gv11b and gv100.
  Bug 2046636
  Change-Id: I3a9e97587aca76718f7f504ea3b853f87409092a
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1641529
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add g->sw_ready flag  (Konsta Holtta, 2018-01-20)
  Fix a race condition where we would still be booting up the gpu and/or initializing the driver
  while code elsewhere assumed that all of this was already done. Some userspace APIs try to
  make sure that we're ready by testing g->gr.sw_ready, but this flag is set in the middle of
  bootup; there are other things after gr initialization. Add a new flag that is enabled only
  after bootup is fully complete, at the end of finalize_poweron, and change the checks in the
  user API paths to test the new flag instead. These checks are only in the ioctl paths for
  ctrl, dbg and tsg, and in the ctrl device's open path. The gr.sw_ready flag is still left in
  place to signify whether just gr has had its bookkeeping initialized.
  Bug 200370011
  Change-Id: I2995500e06de46430d9b835de1e9d60b3f01744e
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1640124
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
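  A small sketch of the pattern described above, with a simplified stand-in for the device
  struct: the new flag is set only at the very end of power-on and tested at the top of
  user-facing entry points; the error code choice is an assumption:

      #include <errno.h>

      /* Illustrative stand-in: only the two readiness flags matter here. */
      struct gk20a {
              struct {
                      int sw_ready;   /* set mid-bootup, right after gr init */
              } gr;
              int sw_ready;           /* set at the very end of finalize_poweron */
      };

      static int finalize_poweron(struct gk20a *g)
      {
              /* ... gr init ... */
              g->gr.sw_ready = 1;
              /* ... the rest of bring-up ... */
              g->sw_ready = 1;        /* only now are user APIs safe to serve */
              return 0;
      }

      static int ctrl_ioctl(struct gk20a *g)
      {
              if (!g->sw_ready)
                      return -EAGAIN; /* illustrative error choice */
              /* ... handle the ioctl ... */
              return 0;
      }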
* gpu: nvgpu: fix typo  (Alex Waterman, 2018-01-19)
  Rename gb10b_init_bar2_vm*() to gp10b_init_bar2_vm*().
  Bug 200378257
  Change-Id: I9f8a9ef42c82923200d7053c61bab2652b58cbc2
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1639757
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: Enable perfmon.  (Deepak Goyal, 2018-01-19)
  t19x PMU ucode uses the RPC mechanism for PERFMON commands.
  - Declared "pmu_init_perfmon", "pmu_perfmon_start_sampling", "pmu_perfmon_stop_sampling" and
    "pmu_perfmon_get_samples" in pmu ops to differentiate between chips using RPC and the legacy
    cmd/msg mechanism.
  - Defined and used PERFMON RPC commands for t19x: INIT, START, STOP, QUERY.
  - Added an RPC handler for the PERFMON RPC commands.
  - For querying GPU utilization/load, we need to send the PERFMON_QUERY RPC command for gv11b.
  - Enabled perfmon for gv11b.
  Bug 2039013
  Change-Id: Ic32326f81d48f11bc772afb8fee2dee6e427a699
  Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1614114
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>, Seshendra Gadagottu <sgadagottu@nvidia.com>, Vijayakumar Subbu <vsubbu@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Make graphics context property of TSG  (Terje Bergstrom, 2018-01-17)
  Move graphics context ownership to the TSG instead of the channel. Combine channel_ctx_gk20a
  and gr_ctx_desc into one structure, because the split between them was arbitrary. Move the
  context header to be a property of the channel.
  Bug 1842197
  Change-Id: I410e3262f80b318d8528bcbec270b63a2d8d2ff9
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1639532
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>, svc-mobile-coverity <svc-mobile-coverity@nvidia.com>, Konsta Holtta <kholtta@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: add l3 allocation support  (Aparna Das, 2018-01-12)
  Modify the rpc command parameter to support L3 cache allocation.
  Jira EVLR-1752
  Change-Id: I1be00e04ee01c0763f46c0d0da6a112316cc7e1d
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1616566
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: move t19x specific code to general code  (Richard Zhao, 2018-01-12)
  - remove vgpu_t19x.h and tegra_vgpu_t19x.h
  - merge t19x-specific ivc commands into the big enum
  - move TEGRA_VGPU_ATTRIB_MAX_SUBCTX_COUNT to constants
  Jira EVLR-2293
  Change-Id: I34344bffa03bb69e1282b1f19382e3199f9ba105
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1636128
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Combine gk20a and gp10b free_gr_ctx  (Terje Bergstrom, 2018-01-12)
  The gp10b version of free_gr_ctx was created to keep gp10b source code changes out of the
  mainline. gp10b was merged back to mainline a while ago, so this separation is no longer
  needed. Merge the two variants.
  Change-Id: I954b3b677e98e4248f95641ea22e0def4e583c66
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1635127
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: Delete gm20b support  (Terje Bergstrom, 2018-01-12)
  Delete gm20b vgpu support. It has not been supported for a long time and keeping it up-to-date
  is extra work.
  Change-Id: I3c06d29a79cb83d53a25d2242247b4eeabeab310
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1635126
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: get virtual SMs mapping  (Thomas Fleury, 2018-01-10)
  On gv11b we can have multiple SMs per TPC. Add sm_per_tpc to the vgpu constants to properly
  dimension the virtual SM to TPC/GPC mapping in the virtualization case. Use
  TEGRA_VGPU_CMD_GET_SMS_MAPPING to query the current mapping.
  Bug 2039676
  Change-Id: I817be18f9a28cfb9bd8af207d7d6341a2ec3994b
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1631203
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
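  A short sketch of the dimensioning question the commit addresses: once sm_per_tpc can be
  greater than one, the SM-to-TPC/GPC table needs gpc_count * tpc_per_gpc * sm_per_tpc entries.
  The struct and the allocator below are stand-ins; only sm_per_tpc and the idea of a per-SM
  (GPC, TPC) mapping come from the commit:

      #include <stdlib.h>

      /* Hypothetical per-SM entry: which GPC/TPC a virtual SM lives on. */
      struct sm_info {
              unsigned int gpc_index;
              unsigned int tpc_index;
              unsigned int sm_index;
      };

      /* Dimension the table using sm_per_tpc from the vgpu constants; with
       * sm_per_tpc == 1 this degenerates to the old gpc * tpc sizing. */
      static struct sm_info *alloc_sm_to_tpc_map(unsigned int gpc_count,
                                                 unsigned int tpc_per_gpc,
                                                 unsigned int sm_per_tpc)
      {
              size_t entries = (size_t)gpc_count * tpc_per_gpc * sm_per_tpc;

              return calloc(entries, sizeof(struct sm_info));
      }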
* gpu: nvgpu: Do not disable ELPG when committing buffers  (Terje Bergstrom, 2018-01-04)
  Committing buffer addresses only writes to memory. There is no need to disable ELPG for the
  duration, so drop the ELPG protection.
  Change-Id: I8d8d08506387197e4737e0311df4a20085496056
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1631149
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove bare channel scheduling  (Terje Bergstrom, 2018-01-02)
  Remove the scheduling IOCTL implementations for bare channels. This also removes the code that
  constructs bare channels in the runlist.
  Bug 1842197
  Change-Id: I6e833b38e24a2f2c45c7993edf939d365eaf41f0
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1627326
  Reviewed-by: Automatic_Commit_Validation_User, Seshendra Gadagottu <sgadagottu@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove TSG required flag  (Terje Bergstrom, 2018-01-02)
  Remove the nvgpu-internal flag indicating that TSGs are required; we now require TSGs always.
  This also fixes a regression where CE channels were back to using bare channels on gp106.
  Bug 1842197
  Change-Id: Id359e5a455fb324278636bb8994b583936490ffd
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1628481
  Reviewed-by: Automatic_Commit_Validation_User, Deepak Nibade <dnibade@nvidia.com>, Seshendra Gadagottu <sgadagottu@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: add io coherency support  (Aparna Das, 2017-12-30)
  Modify the command message parameter to support io coherency.
  Jira EVLR-2025
  Change-Id: I38b21c72d85f559555c4d97dab73d0f715ecc655
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1614388
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove support for channel events  (Terje Bergstrom, 2017-12-28)
  Remove support for events for bare channels. All users have already moved to TSGs and TSG
  events.
  Bug 1842197
  Change-Id: Ib3ff68134ad9515ee761d0f0e19a3150a0b744ab
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1618906
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: allow disabling of ctxsw tracing  (Thomas Fleury, 2017-12-14)
  Fixed a build failure that occurred when FECS ctxsw tracing was disabled via
  CONFIG_GK20A_CTXSW_TRACE.
  JIRA EVLR-2162
  Change-Id: I751eba835c5f3f527571167e8b05fadb9687c64d
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1617557
  Reviewed-by: Automatic_Commit_Validation_User, Richard Zhao <rizhao@nvidia.com>, Aparna Das <aparnad@nvidia.com>, Terje Bergstrom <tbergstrom@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: Dennis Kou <dkou@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Require TSGs for CE always  (Terje Bergstrom, 2017-12-14)
  All channels should be wrapped in TSGs so that bare channel support can be dropped. Bind all
  CE channels to TSGs.
  Bug 1842197
  Change-Id: Ia55748d5b53750d860f7764b532ef9eeb6f214b8
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1616693
* gpu: nvgpu: vgpu: remove PMU setup in gv11b hal  (Richard Zhao, 2017-12-11)
  vgpu doesn't care about the PMU; the PMU is managed by the RM server. This also fixes a crash
  dump caused by reading a fuse register.
  Jira EVLR-1934
  Change-Id: I779964950783ccf699cd99473fb30e811c5c2ed6
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1612774
  Reviewed-by: Automatic_Commit_Validation_User, Thomas Fleury <tfleury@nvidia.com>, Terje Bergstrom <tbergstrom@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: add tsg release command  (Richard Zhao, 2017-12-10)
  gv11b needs the tsg release callback to release the CE method buffer.
  Bug 2022929
  Change-Id: I32e27a5fa49eb61b9c2fc72ea32034191a9be48e
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1611631
  Reviewed-by: Automatic_Commit_Validation_User, Thomas Fleury <tfleury@nvidia.com>, Aparna Das <aparnad@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvpgu: Move GR IDLE timeout definition to header  (Terje Bergstrom, 2017-12-08)
  The GR IDLE timeout is defined as a Kconfig option. Instead of that, introduce a new header
  file, defaults.h, which encapsulates the generic defaults we use in nvgpu, and move the
  definition there.
  Change-Id: I78ff1d2790d7ee3dff6df42bbd11cf683a85bf79
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1612650
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use nvgpu list APIs instead of linux APIs  (Deepak Nibade, 2017-12-08)
  Use the nvgpu-specific list API nvgpu_list_for_each_entry() instead of calling the
  Linux-specific list API list_for_each_entry().
  Jira NVGPU-444
  Change-Id: I3c1fd495ed9e8bebab1f23b6769944373b46059b
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1612442
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>, Alex Waterman <alexw@nvidia.com>, Terje Bergstrom <tbergstrom@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
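  A simplified re-creation of the iteration pattern, for illustration only; the real macro lives
  in include/nvgpu/list.h and its exact shape may differ:

      #include <stddef.h>

      /* Stand-in list node and a plausible (assumed) macro shape:
       * nvgpu_list_for_each_entry(pos, head, type, member). */
      struct nvgpu_list_node {
              struct nvgpu_list_node *prev;
              struct nvgpu_list_node *next;
      };

      #define nvgpu_container_of(ptr, type, member) \
              ((type *)((char *)(ptr) - offsetof(type, member)))

      #define nvgpu_list_for_each_entry(pos, head, type, member)                 \
              for ((pos) = nvgpu_container_of((head)->next, type, member);       \
                   &(pos)->member != (head);                                     \
                   (pos) = nvgpu_container_of((pos)->member.next, type, member))

      struct mapped_buf {
              struct nvgpu_list_node node;
              unsigned long long addr;
      };

      /* Was:  list_for_each_entry(buf, head, node)  in Linux-only code. */
      static unsigned long long sum_addrs(struct nvgpu_list_node *head)
      {
              struct mapped_buf *buf;
              unsigned long long sum = 0;

              nvgpu_list_for_each_entry(buf, head, struct mapped_buf, node)
                      sum += buf->addr;

              return sum;
      }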
* gpu: nvgpu: vgpu: modify subcontext header size  (Aparna Das, 2017-12-01)
  Per-veid header mode is enabled for the subcontext header. Allocate only the context header
  size for the subcontext header.
  Jira EVLR-2073
  Change-Id: I2761dcac7e8e765acb6db22241e3a9214867f885
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1607627
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>, Alex Waterman <alexw@nvidia.com>, Automatic_Commit_Validation_User, Nirav Patel <nipatel@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Alignment check for compressible fixed-address mappings  (Sami Kiminki, 2017-11-30)
  Add an alignment check for compressible-kind fixed-address mappings. If we're using a page
  size smaller than the comptag line coverage window, the GPU VA and the physical buffer offset
  must be aligned with respect to that window.
  Bug 1995897
  Bug 2011640
  Bug 2011668
  Change-Id: If68043ee2828d54b9398d77553d10d35cc319236
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1606439
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
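  A sketch of the kind of check described above. The window size, the function shape, and the
  exact alignment rule are assumptions (the real condition in nvgpu may, for instance, only
  require matching offsets within the window):

      #include <stdbool.h>
      #include <stdint.h>

      /* Hypothetical window size: the real value is the comptag line coverage
       * of the chip, queried from the LTC/CBC code, not a fixed constant. */
      #define COMPTAG_LINE_COVERAGE   (128u * 1024u)

      static bool compr_fixed_map_aligned(uint64_t map_gpu_va,
                                          uint64_t phys_buf_offset,
                                          uint32_t page_size)
      {
              if (page_size >= COMPTAG_LINE_COVERAGE)
                      return true;    /* large pages already span the window */

              /* Illustrative rule: both addresses window-aligned. */
              return (map_gpu_va % COMPTAG_LINE_COVERAGE) == 0 &&
                     (phys_buf_offset % COMPTAG_LINE_COVERAGE) == 0;
      }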
* gpu: nvgpu: gv11b: gfxp wfi timeout  (seshendra Gadagottu, 2017-11-29)
  For gv11b, configure the gfx preemption wfi timeout in usec, and set the timeout unit to usec
  in gr_gv11b_init_preemption_state. The default timeout is 1 msec and this value can be
  modified through the sysfs node /sys/devices/gpu.0/gfxp_wfi_timeout_count.
  For gp10b, gfxp_wfi_timeout_count is in sysclk cycles; for gv11b it is in usec.
  Bug 2003668
  Change-Id: I68d52ce996a83df90b8b3a8164debb07e5cb370f
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1599658
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: initialize gr->max_comptag_mem in linux  (Deepak Nibade, 2017-11-28)
  We initialize gr->max_comptag_mem in common code using a global variable declared in
  <linux/mm.h>. Move this Linux-specific dependency to Linux-specific files, i.e. initialize
  gr->max_comptag_mem in the Linux-specific probe functions.
  Jira NVGPU-414
  Change-Id: I9415938bf1288b24950ba7ecc71abee3162dae64
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1606195
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>, Terje Bergstrom <tbergstrom@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: define error_notifiers in common code  (Deepak Nibade, 2017-11-27)
  All the Linux-specific error_notifier codes are defined in the Linux-specific header file
  <uapi/linux/nvgpu.h> but used throughout the common driver. Since they are defined in a
  Linux-specific file, all uses of those error_notifiers need to live in Linux-specific code
  only. Hence, define new error_notifiers in include/nvgpu/error_notifier.h and use them in the
  common code.
  Add the new API nvgpu_error_notifier_to_channel_notifier() to convert a common error_notifier
  of the form NVGPU_ERR_NOTIFIER_* to a Linux-specific error notifier of the form
  NVGPU_CHANNEL_*. Any future addition of an error notifier requires updating both forms.
  Move all error-notifier-related metadata from channel_gk20a (common code) to the
  Linux-specific structure nvgpu_channel_linux, and update all accesses to use the new structure
  instead of channel_gk20a.
  Move and rename the APIs below to a Linux-specific file and declare them in error_notifier.h:
  nvgpu_set_error_notifier_locked(), nvgpu_set_error_notifier(), nvgpu_is_error_notifier_set().
  Add the new API nvgpu_set_error_notifier_if_empty() and use it in fifo_vgpu.c. Include
  <nvgpu/error_notifier.h> wherever the new error_notifier codes are used.
  NVGPU-426
  Change-Id: Iaa5bfc150e6e9ec17d797d445c2d6407afe9f4bd
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1593361
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
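  A sketch of what the conversion helper described above might look like; the two naming schemes
  are from the commit, but the specific notifier names, their numeric values and the switch
  contents are placeholders:

      /* Illustrative mapping from common codes (NVGPU_ERR_NOTIFIER_*) to the
       * Linux uapi codes (NVGPU_CHANNEL_*). Real lists live in
       * include/nvgpu/error_notifier.h and uapi/linux/nvgpu.h. */
      enum {
              NVGPU_ERR_NOTIFIER_FIFO_ERROR_IDLE_TIMEOUT,
              NVGPU_ERR_NOTIFIER_GR_ERROR_SW_NOTIFY,
              NVGPU_ERR_NOTIFIER_PBDMA_ERROR,
      };

      #define NVGPU_CHANNEL_FIFO_ERROR_IDLE_TIMEOUT   8   /* placeholder values, */
      #define NVGPU_CHANNEL_GR_ERROR_SW_NOTIFY        13  /* not the real uapi   */
      #define NVGPU_CHANNEL_PBDMA_ERROR               32  /* numbers             */

      static int nvgpu_error_notifier_to_channel_notifier(unsigned int error_notifier)
      {
              switch (error_notifier) {
              case NVGPU_ERR_NOTIFIER_FIFO_ERROR_IDLE_TIMEOUT:
                      return NVGPU_CHANNEL_FIFO_ERROR_IDLE_TIMEOUT;
              case NVGPU_ERR_NOTIFIER_GR_ERROR_SW_NOTIFY:
                      return NVGPU_CHANNEL_GR_ERROR_SW_NOTIFY;
              case NVGPU_ERR_NOTIFIER_PBDMA_ERROR:
                      return NVGPU_CHANNEL_PBDMA_ERROR;
              default:
                      return -1;      /* unknown code; the real helper likely warns */
              }
      }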
* gpu: nvgpu: initialize os-specific features on vgpu  (Deepak Nibade, 2017-11-27)
  The API to initialize OS-specific features, nvgpu_finalize_poweron_linux(), does not get
  called for VGPU. Add it to vgpu_pm_finalize_poweron().
  Jira NVGPU-395
  Change-Id: I5488853aad36606c18b64a4fbe4076909a6b23f9
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1603913
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>, svc-mobile-coverity <svc-mobile-coverity@nvidia.com>, Vijayakumar Subbu <vsubbu@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: remove dependency on linux header for regops_gk20a*  (Debarshi Dutta, 2017-11-23)
  This patch removes the dependency on the header file "uapi/linux/nvgpu.h" for regops_gk20a.c.
  The original structure and definitions in uapi/linux/nvgpu.h are maintained for the userspace
  library (libnvrm_gpu). The following changes are made in this patch:
  1) Define common versions of the NVGPU_DBG_GPU_REG_OP* definitions inside regops_gk20a.h.
  2) Define a common version of struct nvgpu_dbg_gpu_reg_op inside regops_gk20a.h, naming it
     struct nvgpu_dbg_reg_op.
  3) Add APIs to convert the NVGPU_DBG_GPU_REG_OP* definitions from the Linux versions to the
     common versions and vice versa.
  4) Add APIs to convert from struct nvgpu_dbg_gpu_reg_op to struct nvgpu_dbg_reg_op and vice
     versa.
  5) The ioctl handler nvgpu_ioctl_channel_reg_ops first copies from userspace into local
     storage based on struct nvgpu_dbg_gpu_reg_op, converts it into struct nvgpu_dbg_reg_op
     using the APIs above, and after executing the regops handler copies the data back to
     userspace by converting from struct nvgpu_dbg_reg_op back to struct nvgpu_dbg_gpu_reg_op.
  JIRA NVGPU-417
  Change-Id: I23bad48d2967a629a6308c7484f3741a89db6537
  Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1596972
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
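  A sketch of the uapi-to-common copy described in items 2) and 4); the field subset shown is a
  plausible guess and may not match the real structs, and the real code also translates the
  op/type constants:

      #include <stdint.h>

      /* Hypothetical field subset; the real structs carry more members
       * (status, quad, masks, ...), which would be copied the same way. */
      struct nvgpu_dbg_gpu_reg_op {       /* uapi/linux/nvgpu.h (Linux only) */
              uint8_t  op;
              uint8_t  type;
              uint64_t offset;
              uint32_t value_lo;
              uint32_t value_hi;
      };

      struct nvgpu_dbg_reg_op {           /* regops_gk20a.h (common code) */
              uint8_t  op;
              uint8_t  type;
              uint64_t offset;
              uint32_t value_lo;
              uint32_t value_hi;
      };

      /* Linux ioctl layer -> common regops handler; converting back after
       * the handler runs mirrors this in the other direction. */
      static void reg_op_to_common(const struct nvgpu_dbg_gpu_reg_op *in,
                                   struct nvgpu_dbg_reg_op *out)
      {
              out->op       = in->op;     /* would go through op-code translation */
              out->type     = in->type;   /* likewise for the type constants */
              out->offset   = in->offset;
              out->value_lo = in->value_lo;
              out->value_hi = in->value_hi;
      }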
* gpu: nvgpu: Add check_priv_security fuse ops  (Seema Khowala, 2017-11-22)
  - A new fuse op is added to set the NVGPU_SEC_PRIVSECURITY and NVGPU_SEC_SECUREGPCCS bits in
    g->enabled_flags during HAL initialization.
  - For igpu non-simulation platforms, fuses are read to decide whether the gpu should be
    allowed to boot or not.
    -- Do not boot the gpu if priv_sec_en is set but wpr_enabled is not set to 1, or
       vpr_auto_fetch_disable is not set to 0.
    -- With priv_sec_en set, all falcons have to boot in LS mode, and this needs wpr_enabled set
       to 1 AND vpr_auto_fetch_disable set to 0. In this case the gmmu tries to pull wpr and vpr
       settings from tegra mc.
  Bug 2018223
  Change-Id: Iceaa1b0b3214e9a3d6cef5d77a82e034302f748b
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1595454
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
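  A sketch of the boot decision described in the bullet points above; the fuse-snapshot struct
  and the flag outputs are hypothetical, only the condition itself (priv_sec_en requires
  wpr_enabled set to 1 and vpr_auto_fetch_disable set to 0) comes from the commit:

      #include <stdbool.h>

      /* Hypothetical fuse snapshot; in nvgpu these come from fuse/mc reads. */
      struct priv_sec_fuses {
              bool priv_sec_en;
              bool wpr_enabled;
              bool vpr_auto_fetch_disable;
      };

      /* Return true if the GPU may boot. With priv security fused on, all
       * falcons must boot in LS mode, which requires WPR enabled and VPR
       * auto-fetch-disable cleared. */
      static bool check_priv_security(const struct priv_sec_fuses *f,
                                      bool *privsecurity, bool *securegpccs)
      {
              if (!f->priv_sec_en) {
                      *privsecurity = false;  /* non-secure boot path */
                      *securegpccs = false;
                      return true;
              }

              if (!f->wpr_enabled || f->vpr_auto_fetch_disable)
                      return false;           /* refuse to boot the GPU */

              *privsecurity = true;           /* would set NVGPU_SEC_PRIVSECURITY */
              *securegpccs = true;            /* and NVGPU_SEC_SECUREGPCCS */
              return true;
      }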
* gpu: nvgpu: CONFIG_TEGRA_ACR is supported by default  (Seema Khowala, 2017-11-22)
  The TEGRA_ACR config is supposed to be enabled from Maxwell onwards. Since gk20a is no longer
  supported, delete code that is not under the TEGRA_ACR config.
  Change-Id: Id52485680bca1ceaadcb94f9603c0898c2002e02
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1595437
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove separation of t18x code  (Terje Bergstrom, 2017-11-17)
  Remove the separation of t18x-specific code and fields and the associated ifdefs. We can now
  always build the T18x code in.
  Change-Id: I4e8eae9c30335632a2da48b418c6138193831b4f
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1595431
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add translation for NVGPU MM flags  (Alex Waterman, 2017-11-17)
  Add a translation layer to convert from the NVGPU_AS_* flags to the new set of NVGPU_VM_MAP_*
  and NVGPU_VM_AREA_ALLOC_* flags. This allows the common MM code to not depend on the UAPI
  header defined for Linux. In addition to this change, a couple of other small changes were
  made:
  1. Deprecate, print a warning for, and ignore usage of the
     NVGPU_AS_MAP_BUFFER_FLAGS_MAPPABLE_COMPBITS flag.
  2. Move the t19x IO coherence flag from the t19x UAPI header to the regular UAPI header.
  JIRA NVGPU-293
  Change-Id: I146402b0e8617294374e63e78f8826c57cd3b291
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1599802
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
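  A sketch of the kind of flag translation described above; apart from
  NVGPU_AS_MAP_BUFFER_FLAGS_MAPPABLE_COMPBITS and the NVGPU_AS_*/NVGPU_VM_MAP_* family names,
  the individual flags and their bit values are placeholders:

      #include <stdint.h>
      #include <stdio.h>

      /* Placeholder uapi flags (NVGPU_AS_*) and common flags (NVGPU_VM_MAP_*);
       * real values live in the Linux uapi header and the common MM headers. */
      #define NVGPU_AS_MAP_BUFFER_FLAGS_CACHEABLE             (1u << 2)
      #define NVGPU_AS_MAP_BUFFER_FLAGS_UNMAPPED_PTE          (1u << 5)
      #define NVGPU_AS_MAP_BUFFER_FLAGS_MAPPABLE_COMPBITS     (1u << 6)

      #define NVGPU_VM_MAP_CACHEABLE                          (1u << 0)
      #define NVGPU_VM_MAP_UNMAPPED_PTE                       (1u << 1)

      /* Linux ioctl layer -> common MM code: translate, and warn-and-drop the
       * deprecated compbits flag, so common code never sees uapi values. */
      static uint32_t nvgpu_vm_translate_as_flags(uint32_t as_flags)
      {
              uint32_t core_flags = 0;

              if (as_flags & NVGPU_AS_MAP_BUFFER_FLAGS_CACHEABLE)
                      core_flags |= NVGPU_VM_MAP_CACHEABLE;
              if (as_flags & NVGPU_AS_MAP_BUFFER_FLAGS_UNMAPPED_PTE)
                      core_flags |= NVGPU_VM_MAP_UNMAPPED_PTE;
              if (as_flags & NVGPU_AS_MAP_BUFFER_FLAGS_MAPPABLE_COMPBITS)
                      fprintf(stderr, "nvgpu: MAPPABLE_COMPBITS is deprecated, ignoring\n");

              return core_flags;
      }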
* gpu: nvgpu: move vgpu code to linux  (Deepak Nibade, 2017-11-17)
  Most of the VGPU code is Linux-specific but lies in common code. So until the VGPU code is
  properly abstracted and made OS-independent, move all of the VGPU code to the Linux-specific
  directory. Handle the corresponding Makefile changes, update all #includes to reflect the new
  paths, and add the GPL license to the newly added Linux files.
  Jira NVGPU-387
  Change-Id: Ic133e4c80e570bcc273f0dacf45283fefd678923
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1599472
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>, mobile promotions <svcmobile_promotions@nvidia.com>; GVS: Gerrit_Virtual_Submit
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>