path: root/drivers/gpu/nvgpu/vgpu/gp10b
...
* gpu: nvgpu: add HAL to set ppriv timeouts (Deepak Nibade, 2018-04-22)

Add new HAL gops.bus.set_ppriv_timeout_settings() to set platform-specific
ppriv timeouts. Set this HAL for all supported GPUs for now.

Jira NVGPUT-35

Change-Id: I88b438a7bf381d0216e0947a16cd267461d0e8d7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1699314
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
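A HAL op like this is just a function pointer in the per-chip gpu_ops
table, assigned once at HAL init. A minimal sketch of the pattern,
assuming a typical nvgpu-style ops layout (the stub body and the
chip_init_hal() name are illustrative, not the actual nvgpu code):

    struct gk20a;

    /* Hypothetical, heavily trimmed gpu_ops fragment. */
    struct gpu_ops {
            struct {
                    void (*set_ppriv_timeout_settings)(struct gk20a *g);
            } bus;
    };

    /* Chip-specific implementation. */
    static void bus_set_ppriv_timeout_settings(struct gk20a *g)
    {
            /* program platform-specific ppriv timeout registers */
    }

    /* Wired up during per-chip HAL initialization. */
    void chip_init_hal(struct gpu_ops *gops)
    {
            gops->bus.set_ppriv_timeout_settings =
                    bus_set_ppriv_timeout_settings;
    }

Callers then invoke g->ops.bus.set_ppriv_timeout_settings(g), which is
what lets each chip (and the vgpu variant) substitute its own behavior.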
* gpu: nvgpu: gv100: consider floorswept FBPA for getting unicast list (Deepak Nibade, 2018-04-16)

In gr_gv11b/gk20a_create_priv_addr_table() we do not consider floorswept
FBPAs and just calculate the unicast list assuming all FBPAs are present.
This generates an incorrect list of unicast addresses.

Fix this by introducing a new HAL ops.gr.split_fbpa_broadcast_addr.
Set gr_gv100_get_active_fpba_mask() for GV100.
Set gr_gk20a_split_fbpa_broadcast_addr() for the rest of the chips.

gr_gv100_get_active_fpba_mask() will first get the active FBPA mask and
generate the unicast list only for active FBPAs.

Bug 200398811
Jira NVGPU-556

Change-Id: Idd11d6e7ad7b6836525fe41509aeccf52038321f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1694444
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
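The core idea is to expand a broadcast FBPA register into one unicast
address per active FBPA, skipping floorswept units. A hedged sketch of
that expansion; the stride parameter, mask source, and helper name are
illustrative, not the actual gk20a/gv100 implementation:

    /* Expand one broadcast offset into per-FBPA unicast offsets. The
     * active mask would come from fuses on GV100. */
    static void split_fbpa_broadcast_addr(u32 addr, u32 active_fbpa_mask,
                                          u32 num_fbpas, u32 fbpa_stride,
                                          u32 *priv_addr_table, u32 *count)
    {
            u32 i;

            for (i = 0; i < num_fbpas; i++) {
                    if ((active_fbpa_mask & (1U << i)) == 0U)
                            continue; /* floorswept: no unicast entry */
                    priv_addr_table[(*count)++] = addr + i * fbpa_stride;
            }
    }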
* gpu: nvgpu: use HAL for chiplet offset (Deepak Nibade, 2018-04-10)

We currently use hard-coded values of NV_PERF_PMMGPC_CHIPLET_OFFSET and
NV_PMM_FBP_STRIDE, which are incorrect for Volta.

Add a new GR HAL get_pmm_per_chiplet_offset() to get the correct value
per chip.
Set gr_gm20b_get_pmm_per_chiplet_offset() for older chips.
Set gr_gv11b_get_pmm_per_chiplet_offset() for Volta.
Use the HAL instead of hard-coded values wherever required.

Bug 200398811
Jira NVGPU-556

Change-Id: I947e7febd4f84fae740a1bc74f99d72e1df523aa
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1690028
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
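Replacing a hard-coded constant with a HAL getter looks roughly like
this; the numeric values below are placeholders, not the real chip
strides:

    static u32 gr_gm20b_get_pmm_per_chiplet_offset(void)
    {
            return 0x1000; /* illustrative pre-Volta chiplet stride */
    }

    static u32 gr_gv11b_get_pmm_per_chiplet_offset(void)
    {
            return 0x0800; /* illustrative Volta chiplet stride */
    }

    /* Call sites then compute per-chiplet addresses instead of using
     * the old NV_PERF_PMMGPC_CHIPLET_OFFSET constant:
     *   addr = base + idx * g->ops.gr.get_pmm_per_chiplet_offset();
     */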
* gpu: nvgpu: add support to get unicast addresses on volta (Deepak Nibade, 2018-04-10)

We have new broadcast registers on Volta, and we need to generate correct
unicast addresses for them so that we can write those registers to the
context image.

Add a new GR HAL create_priv_addr_table() to do this conversion.
Set gr_gk20a_create_priv_addr_table() for older chips.
Set gr_gv11b_create_priv_addr_table() for Volta.

gr_gv11b_create_priv_addr_table() will use the broadcast flags and then
generate an appropriate list of unicast registers for each broadcast
register.

Bug 200398811
Jira NVGPU-556

Change-Id: Id53a9e56106d200fe560ffc93394cc0e976f455f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1690027
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add broadcast address decode support for volta (Deepak Nibade, 2018-04-10)

Volta has more broadcast registers than previous chips, and we don't
decode them right now in gr_gk20a_decode_priv_addr().

Add a new GR HAL decode_priv_addr() and set gr_gk20a_decode_priv_addr()
for all previous chips. Add and use gr_gv11b_decode_priv_addr() for
Volta. gr_gv11b_decode_priv_addr() will decode all the broadcast
registers and set the broadcast flags appropriately.

Define the new broadcast types below:
PRI_BROADCAST_FLAGS_PMMGPC
PRI_BROADCAST_FLAGS_PMM_GPCS
PRI_BROADCAST_FLAGS_PMM_GPCGS_GPCTPCA
PRI_BROADCAST_FLAGS_PMM_GPCGS_GPCTPCB
PRI_BROADCAST_FLAGS_PMMFBP
PRI_BROADCAST_FLAGS_PMM_FBPS
PRI_BROADCAST_FLAGS_PMM_FBPGS_LTC
PRI_BROADCAST_FLAGS_PMM_FBPGS_ROP

Bug 200398811
Jira NVGPU-556

Change-Id: Ic673b357a75b6af3d24a4c16bb5b6bc15974d5b7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1690026
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
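A decode HAL of this shape classifies a priv address by register range
and accumulates flag bits that create_priv_addr_table() later expands.
A sketch; the bit positions and the pri_is_*() range predicates are
assumed here, only the flag names come from the commit:

    #define PRI_BROADCAST_FLAGS_PMMGPC   (1U << 16)
    #define PRI_BROADCAST_FLAGS_PMM_GPCS (1U << 17)
    #define PRI_BROADCAST_FLAGS_PMMFBP   (1U << 18)
    #define PRI_BROADCAST_FLAGS_PMM_FBPS (1U << 19)

    static void gr_decode_priv_addr(u32 addr, u32 *broadcast_flags)
    {
            *broadcast_flags = 0;

            if (pri_is_pmmgpc_broadcast_addr(addr))
                    *broadcast_flags |= PRI_BROADCAST_FLAGS_PMMGPC;
            if (pri_is_pmmfbp_broadcast_addr(addr))
                    *broadcast_flags |= PRI_BROADCAST_FLAGS_PMMFBP;
            /* ...one range check per broadcast type... */
    }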
* gpu: nvgpu: vgpu: fix build errors on qnx (Richard Zhao, 2018-04-10)

- Declare global functions before reaching the implementation.
- Avoid using current (the current process).
- Assign ch->pid/tgid before using them.

Jira VFND-4870

Change-Id: I688a1b89ef4d5dcf046929eab11d7e523caba0a5
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1687142
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nirav Patel <nipatel@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Only use gr.create_gr_sysfs with CONFIG_SYSFS (Alex Waterman, 2018-04-05)

Only populate the create_gr_sysfs() functions when the system actually
has SYSFS (i.e. is compiling for the Linux kernel). This allows
non-Linux systems to compile.

JIRA NVGPU-525

Change-Id: I3bac34feff376d89c0b63259772c77f7b4a03adc
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1673824
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
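The usual shape of such a guard, sketched under the assumption that the
HAL pointer is simply left NULL when sysfs is unavailable (the function
name on the right is illustrative):

    /* Inside per-chip HAL init: */
    #ifdef CONFIG_SYSFS
            gops->gr.create_gr_sysfs = gr_chip_create_gr_sysfs;
    #else
            gops->gr.create_gr_sysfs = NULL;
    #endif

Callers must then null-check the op before invoking it, a common pattern
for optional hooks.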
* gpu: nvgpu: add gops.fifo.setup_sw (Sourab Gupta, 2018-03-29)

BAR1/userd setup is different for the RM server. Created a common
function gk20a_init_fifo_setup_sw_common for the shared portion.

Jira VQRM-3058

Change-Id: I655b54e21ed5f15dcb8e7b01bd9cd129b35ae7a3
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665691
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
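The common-plus-override split typically looks like this sketch; names
beyond gk20a_init_fifo_setup_sw_common and gops.fifo.setup_sw are
assumptions:

    /* Native variant: shared sw init, then BAR1/userd setup. */
    int gk20a_init_fifo_setup_sw(struct gk20a *g)
    {
            int err = gk20a_init_fifo_setup_sw_common(g);

            if (err != 0)
                    return err;

            /* native-only BAR1/userd programming goes here */
            return 0;
    }

    /* The RM server would install its own setup_sw that reuses the
     * common helper but skips the native BAR1/userd work, e.g.:
     *   gops->fifo.setup_sw = vgpu_init_fifo_setup_sw;
     */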
* gpu: nvgpu: add gops.fifo.set_error_notifier (Richard Zhao, 2018-03-29)

RM Server overrides it for handling stall interrupts.

Jira VQRM-3058

Change-Id: I8b14f073e952d19c808cb693958626b8d8aee8ca
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1679709
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gops.fifo.channel_suspend/channel_resume (Richard Zhao, 2018-03-29)

RM Server acts differently for channel suspend/resume.

Jira VQRM-3058

Change-Id: If41e3099164654db448d1157fd7f51dd00c5e201
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1679707
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gops.fifo.check_tsg_ctxsw_timeout/check_ch_ctxsw_timeout (Richard Zhao, 2018-03-29)

RM Server acts differently for the ctxsw timeout check. It won't check
GP_GET or accumulated timeouts; it notifies the guest and goes to
recovery instead.

Jira VQRM-3058

Change-Id: I428aea34dc517311eb7e73feb556145e916309fb
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1679706
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add gops.fifo.ch_abort_clean_up (Richard Zhao, 2018-03-29)

Channel abort clean-up is only needed by the native and vgpu drivers,
not the RM server: the RM server expects the guest to clean up after
itself, so it should not set the callback.

Jira VQRM-3058

Change-Id: I11b49b6f2d51c871e31de16955d487dca82609cb
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1679705
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
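Leaving the hook unset makes it optional, so call sites guard it; a
two-line sketch of the pattern:

    /* RM server leaves this NULL, so check before calling. */
    if (g->ops.fifo.ch_abort_clean_up != NULL)
            g->ops.fifo.ch_abort_clean_up(ch);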
* gpu: nvgpu: gv100: fix PMA list alignment in ctxsw buffer (Deepak Nibade, 2018-03-21)

The GV100 ucode changed so that it expects the LIST_nv_perf_pma_ctx_reg
list in the ctxsw buffer to be 256-byte aligned, but the same change was
not applied to the ucodes of other chips.

Add a new HAL (*add_ctxsw_reg_perf_pma) to configure the PMA register
list, and define a common HAL gr_gk20a_add_ctxsw_reg_perf_pma() for all
chips except GV100. Define a separate HAL for GV100,
gr_gv100_add_ctxsw_reg_perf_pma(), and fix the required alignment in
this function.

Bug 1998067

Change-Id: Ie172fe90e2cdbac2509f2ece953cd8552e66fc56
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1676655
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
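The alignment fix itself is essentially a one-liner: round the running
ctxsw-buffer offset up to a 256-byte boundary before emitting the PMA
list. A sketch with the round-up macro spelled out (macro and variable
names are illustrative):

    #define PMA_LIST_ALIGNMENT 256U
    #define ROUND_UP_P2(x, a)  (((x) + (a) - 1U) & ~((a) - 1U))

    /* GV100 variant aligns first, then adds the registers. */
    offset = ROUND_UP_P2(offset, PMA_LIST_ALIGNMENT);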
* gpu: nvgpu: gv100: fix num_fbpas while adding ctxsw buffer entries (Deepak Nibade, 2018-03-21)

For LIST_nv_pm_fbpa_ctx_regs, we currently call
add_ctxsw_buffer_map_entries_subunits() to add registers corresponding
to all the FBPAs. But while configuring the total number of registers,
we do not consider floorswept FBPAs, and that causes misalignment in
subsequent lists for GV100.

Fix this by reading disabled/floorswept FBPAs from fuse and considering
only those FBPAs which are active for GV100.

Add a new HAL (*add_ctxsw_reg_pm_fbpa) to support this setting and
define a common HAL gr_gk20a_add_ctxsw_reg_pm_fbpa() for all chips
except GV100. Define a GV100-specific gr_gv100_add_ctxsw_reg_pm_fbpa()
with the above-mentioned implementation to consider floorsweeping.

Bug 1998067

Change-Id: Id560551bb0b8142791c117b6d27864566c90b489
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1676654
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add placeholder for IPA to PA (Thomas Fleury, 2018-03-13)

Add an __nvgpu_sgl_phys function that can be used to implement IPA to PA
translation in a subsequent change. Adapt existing function prototypes
to add a pointer to the gpu context, as we will need it to check whether
IPA to PA translation is needed.

JIRA EVLR-2442
Bug 200392719

Change-Id: I5a734c958c8277d1bf673c020dafb31263f142d6
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1673142
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
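A plausible shape for the placeholder, assuming the Linux backend where
the opaque sgl wraps a struct scatterlist; the body and the cast are an
assumed sketch, only the function name and the extra gk20a context
argument come from the commit:

    u64 __nvgpu_sgl_phys(struct gk20a *g, struct nvgpu_sgl *sgl)
    {
            struct scatterlist *s = (struct scatterlist *)sgl;

            /* Later change: if g says we run virtualized, translate
             * the intermediate physical address (IPA) to PA here. */
            return sg_phys(s);
    }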
* gpu: nvgpu: hal for syncpt_incr_per_release (seshendra Gadagottu, 2018-03-12)

Create a HAL to indicate syncpt increments per release. Legacy chips use
2 syncpt increments per release; gv1xx onwards uses 1 syncpt increment
per release.

Bug 2066025

Change-Id: I5d6d0a5368ef561f8150fbb7120181f49f6e338b
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1669817
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
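Since the value is a per-chip constant, the HAL can be a trivial getter.
A sketch with assumed function names; the 2-vs-1 values come from the
commit text:

    static u32 gk20a_fifo_syncpt_incrs_per_release(void)
    {
            return 2; /* legacy chips */
    }

    static u32 gv11b_fifo_syncpt_incrs_per_release(void)
    {
            return 1; /* gv1xx onwards */
    }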
* gpu: nvgpu: add gops.semaphore_wakeup HAL (Richard Zhao, 2018-03-06)

vserver handles semaphores differently from native, so it needs its own
callback. Also created a common function mc_gk20a_handle_intr_nonstall
to handle all nonstall interrupts.

Jira VQRM-2982

Change-Id: I1b3821717a4005ca4bf2a4dac5dcd335872f48f1
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1656753
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add debugger.post_events HAL op (Aparna Das, 2018-03-06)

RM Server will need to set a specific HAL op and notify the vgpu client.

Jira VQRM-2982

Change-Id: I679565831635ff3fadf0bdc1af5fd7a8679b6fdd
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1660226
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add hal op to handle post event id (Aparna Das, 2018-03-06)

The vserver variant for gr post event id needs different functionality
to send an interrupt to the VM. Add a HAL operation to allow overriding
it for the vserver use case.

Jira VQRM-2982

Change-Id: I915d089ef751023968c1e8ab181c21afeec997a5
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1658382
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: introduce explicit nvgpu_sgl type (Konsta Holtta, 2018-03-01)

The operations in struct nvgpu_sgt_ops have a scatter-gather list (sgl)
argument which is a void pointer. Change the type signatures to take
struct nvgpu_sgl *, which is an opaque marker type that makes it more
difficult to pass around wrong arguments, as anything goes for void *.
Explicit types also add self-documentation to the code.

For some added safety, some explicit type casts are now required in
implementors of the nvgpu_sgt_ops interface when converting between the
general nvgpu_sgl type and implementation-specific types. This is not
purely a bad thing because the casts explain clearly where type
conversions are happening.

Jira NVGPU-30
Jira NVGPU-52
Jira NVGPU-305

Change-Id: Ic64eed6d2d39ca5786e62b172ddb7133af16817a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1643555
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
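The opaque-type trick relies on declaring, but never defining, the
marker struct. A condensed sketch of the idea; the ops signatures are
simplified relative to the real nvgpu_sgt_ops, and the Linux-backend
cast is illustrative:

    /* Never defined: values of this type cannot be created, only
     * pointed at, so arbitrary pointers no longer type-check. */
    struct nvgpu_sgl;

    struct nvgpu_sgt_ops {
            u64 (*sgl_phys)(struct nvgpu_sgl *sgl);
            struct nvgpu_sgl *(*sgl_next)(struct nvgpu_sgl *sgl);
    };

    /* An implementor converts explicitly at the boundary, which
     * documents exactly where type conversions happen: */
    static u64 linux_sgl_phys(struct nvgpu_sgl *sgl)
    {
            struct scatterlist *s = (struct scatterlist *)sgl;

            return sg_phys(s);
    }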
* gpu: nvgpu: vgpu: move common files out of linux folder (Richard Zhao, 2018-02-27)

Most of the files have been moved out of the linux folder. More code can
become common as HAL-ification goes on.

Jira EVLR-2364

Change-Id: Ia9dbdbc82f45ceefe5c788eac7517000cd455d5e
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649947
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: move vgpu code to linux (Deepak Nibade, 2017-11-17)

Most of the VGPU code is Linux-specific but lies in common code. So
until the VGPU code is properly abstracted and made OS-independent, move
all of it to the Linux-specific directory.

Handle the corresponding Makefile changes.
Update all #includes to reflect the new paths.
Add a GPL license to the newly added Linux files.

Jira NVGPU-387

Change-Id: Ic133e4c80e570bcc273f0dacf45283fefd678923
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1599472
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Do not include UAPI in gr_gk20a.h (Terje Bergstrom, 2017-11-16)

Remove the #include of <uapi/linux/nvgpu.h> from gr_gk20a.h.
vgpu_mm_gp10b.c uses UAPI definitions, so add an explicit #include
there.

JIRA NVGPU-363

Change-Id: Ieabd7240d62495d2719d7fdbc25cc238de13c75e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1598981
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: move to use is_valid_gfx/compute_class ops (Richard Zhao, 2017-11-16)

This makes the code applicable to gv11b too.

Jira EVLR-1671

Change-Id: I9a960fd1aaa9adc6bb39aa2c730049e75006fea7
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1597379
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: deprecate TSG/CHANNEL_SET_PRIORITY IOCTLs (Deepak Nibade, 2017-11-15)

The TSG/CHANNEL_SET_PRIORITY IOCTLs are deprecated; user space should be
using a combination of timeslice and interleave levels to decide
priority. Hence, remove the IOCTLs and all corresponding APIs.

Jira NVGPU-393

Change-Id: I7cf0785689269536eca0c278c774b0e9e74f8c2f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1598581
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: define preemption modes in common code (Deepak Nibade, 2017-11-14)

We use Linux-specific graphics/compute preemption modes, defined in the
uapi header and of the form below, all over common code:
NVGPU_GRAPHICS_PREEMPTION_MODE_*
NVGPU_COMPUTE_PREEMPTION_MODE_*

Since common code should be independent of Linux-specific code, define
new modes of the following form in common code and use them everywhere:
NVGPU_PREEMPTION_MODE_GRAPHICS_*
NVGPU_PREEMPTION_MODE_COMPUTE_*

Add the required parser functions to convert the two kinds of modes into
each other.

For the Linux IOCTL NVGPU_IOCTL_CHANNEL_SET_PREEMPTION_MODE, we need to
convert the Linux-specific modes into common modes before passing them
to common code. And to pass GPU characteristics to user space, we need
to convert the common modes into Linux-specific modes first.

Jira NVGPU-392

Change-Id: I8c62c6859bdc1baa5b44eb31c7020e42d2462c8c
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1596930
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
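A parser in the UAPI-to-common direction might look like this sketch;
the *_WFI and *_GFXP suffixes are assumptions for illustration, only the
NVGPU_GRAPHICS_PREEMPTION_MODE_* and NVGPU_PREEMPTION_MODE_GRAPHICS_*
prefixes come from the commit:

    static u32 nvgpu_get_common_gfx_preempt_mode(u32 linux_mode)
    {
            switch (linux_mode) {
            case NVGPU_GRAPHICS_PREEMPTION_MODE_WFI:
                    return NVGPU_PREEMPTION_MODE_GRAPHICS_WFI;
            case NVGPU_GRAPHICS_PREEMPTION_MODE_GFXP:
                    return NVGPU_PREEMPTION_MODE_GRAPHICS_GFXP;
            default:
                    return 0U; /* unknown/unsupported mode */
            }
    }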
* gpu: nvgpu: Do not assign GPU classes in vgpu HAL (Terje Bergstrom, 2017-11-13)

GPU class ids were moved to the get_litter_value API, but vgpu was not
updated to remove assigning them in HAL initialization. Remove the
duplicate assignments.

JIRA NVGPU-388

Change-Id: I65cf8f9cfcfc372c1c3b0d9239e55f19c9a02f46
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1596247
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: remove NVGPU_ALLOC_OBJ_FLAGS_* from common code (Deepak Nibade, 2017-11-10)

In gr_gp10b_alloc_gr_ctx(), we use the Linux-specific flags
NVGPU_ALLOC_OBJ_FLAGS_*.

Since common code should be independent of Linux-specific code, define
new flags NVGPU_OBJ_CTX_FLAGS_SUPPORT_* in common code and use them
wherever needed. Linux code will parse the user flags and send the
appropriate flags to g->ops.gr.alloc_obj_ctx().

Also remove the use of NVGPU_ALLOC_OBJ_FLAGS_LOCKBOOST_ZERO, since this
seems to be dead code anyway.

Jira NVGPU-382

Change-Id: Id82efe0d46ddc3e2c063610025ea57f283bc3510
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1594452
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove PTE kind logic (Sami Kiminki, 2017-11-10)

Since NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL was made mandatory, the
kernel does not need to know the details about the PTE kinds anymore.
Thus, we can remove the kind_gk20a.h header and the code related to kind
table setup, as well as simplify the buffer mapping code a bit.

Bug 1902982

Change-Id: Iaf798023c219a64fb0a84da09431c5ce4bc046eb
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560933
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Move fuse override DT handling (Terje Bergstrom, 2017-11-09)

Move fuse override DT handling to Linux code. All the chip-specific fuse
override functions did the same thing, so delete the HAL and call the
same function to read the DT overrides on all chips.

Also remove the fuse override functionality from dGPU. There are no DT
entries for PCIe devices, so it would've failed anyway.

JIRA NVGPU-259

Change-Id: Iba64a5d53bf4eb94198c0408a462620efc2ddde4
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593687
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove support for legacy mapping (Sami Kiminki, 2017-11-08)

Make NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL mandatory for all map
IOCTLs. We'll clean up the legacy kernel code in subsequent patches.

Remove support for NVGPU_AS_IOCTL_MAP_BUFFER. It has been superseded by
NVGPU_AS_IOCTL_MAP_BUFFER_EX. Remove the legacy definitions of
nvgpu_map_buffer_args and the related flags, and update the in-kernel
map calls accordingly by switching to the newer definitions.

Bug 1902982

Change-Id: Ie9a7f02b8d5d0ec7c3722c4481afab6d39b4fbd0
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560932
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: modify tsg enable sequence (Aparna Das, 2017-11-01)

The TSG enable sequence in native has been modified due to a hardware
bug that requires enabling all channels with NEXT and CTX_RELOAD set in
a TSG first, and then enabling the rest of the channels. However, it is
not possible to check whether NEXT and CTX_RELOAD are set in vgpu. Have
a separate implementation of the TSG enable sequence for vgpu until the
fix for the hardware bug is implemented for the virtualized
configuration.

Bug 200348087

Change-Id: I6bfc52138bc540c0ea0ad18a85155eeff6f9efa8
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1588740
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: unset verify status ctx reload (Aparna Das, 2017-10-29)

The native code for verifying TSG status on ctx reload is not possible
on vgpu. Unset the gops->fifo.tsg_verify_status_faulted operation for
vgpu for now. This needs to be implemented separately for vgpu later.

Bug 200348087

Change-Id: I73791401de1ce7b7f8644ea4f9ccae3fc51dc7aa
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1585783
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: halify size of patch buffer (David Nieto, 2017-10-26)

Allow per-chip calculation of the GR patch buffer size, and set the
default to match the HW default of 512 data-address pair entries (4K).

bug 200350539

Change-Id: I6010c9e0304332825cb02612d3f10523ef27d128
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1584033
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add max_css_buffer_size characteristic (Peter Daifuku, 2017-10-25)

Add max_css_buffer_size to the gpu characteristics. In the virtual case,
the size of the cycle stats snapshot buffer is constrained by the size
of the mempool shared between the guest OS and the RM server, so tools
need to find out what is the maximum size allowed.

In the native case, we return 0xffffffff to indicate that the buffer
size is unbounded (subject to memory availability); in the virtual case
we return the size of the mempool.

Also collapse the native init_cyclestats functions to a single version,
as each chip had identical versions of the code.

JIRA ESRM-54
Bug 200296210

Change-Id: I71764d32c6e71a0d101bd40f274eaa4bea3e5b11
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1578930
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: initialize czf_bypass only once (Deepak Nibade, 2017-10-25)

We currently initialize the czf_bypass value in
gr_gp10b_init_preemption_state(), which is run at every rail ungate. As
a result, any user-specified value set through sysfs gets lost after a
railgate.

To fix this, move the initialization of czf_bypass to
gk20a_init_gr_setup_sw() so that it gets initialized only once.

Add a new HAL g->ops.gr.init_czf_bypass to initialize it, and define the
HAL for gp10b/gp106/vgpu-gp10b.

Bug 2008262

Change-Id: I80a38ef527c86e32c6d64d0626b867239db9ea51
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1585224
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Refactoring nvgpu_vm functions (Alex Waterman, 2017-10-18)

Refactor the last nvgpu_vm functions from the mm_gk20a.c code. This
removes some usages of dma_buf from the mm_gk20a.c code, too, which
helps make mm_gk20a.c less Linux specific.

Also delete some header files that are no longer necessary in
gk20a/mm_gk20a.c which are Linux specific. The mm_gk20a.c code is now
quite close to being Linux free.

JIRA NVGPU-30
JIRA NVGPU-138

Change-Id: I72b370bd85a7b029768b0fb4827d6abba42007c3
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1566629
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Move rest of CDE structures to Linux (Terje Bergstrom, 2017-10-17)

Move the rest of the CDE structures to common/linux. This includes
moving the per-chip firmware file interpretation functions, and removing
the CDE ops from the HAL and adding them to nvgpu_os_linux.

JIRA NVGPU-259

Change-Id: I59d8f44bddadecef81ad3c455b363a14034c5e13
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1570403
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: flatten out vgpu hal (Peter Daifuku, 2017-10-13)

Instead of calling the native HAL init function and then adding multiple
layers of modification for VGPU, flatten out the sequence so that all
entry points are set statically and are visible in a single file.

JIRA ESRM-30

Change-Id: Ie424abb48bce5038874851d399baac5e4bb7d27c
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1574616
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Change license for common files to MIT (Terje Bergstrom, 2017-09-26)

Change the license of OS-independent source code files to MIT.

JIRA NVGPU-218

Change-Id: I1474065f4b552112786974a16cdf076c5179540e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1565880
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: SGL passthrough implementation (Sunny He, 2017-09-22)

The basic nvgpu_mem_sgl implementation provides support for OS-specific
scatter-gather list implementations by simply copying them node by node.
This is inefficient, taking extra time and memory.

This patch implements an nvgpu_mem_sgt struct to act as a header which
is inserted at the front of any scatter-gather list implementation. This
labels every struct with a set of ops which can be used to interact with
the attached scatter-gather list.

Since nvgpu common code only has to interact with these function
pointers, any sgl implementation can be used. Initialization only
requires the allocation of a single struct, removing the need to copy or
iterate through the sgl being converted.

Jira NVGPU-186

Change-Id: I2994f804a4a4cc141b702e987e9081d8560ba2e8
Signed-off-by: Sunny He <suhe@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1541426
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: nvgpu SGL implementation (Alex Waterman, 2017-09-22)

The last major item preventing the core MM code in the nvgpu driver from
being platform agnostic is the usage of Linux scatter-gather tables and
scatter-gather lists. These data structures are used throughout the
mapping code to handle discontiguous DMA allocations, and are also
overloaded to represent VIDMEM allocs.

The notion of a scatter-gather table is crucial to a HW device that can
handle discontiguous DMA. The GPU has an MMU which allows the GPU to do
page gathering and present a virtually contiguous buffer to the GPU HW.
As a result it makes sense for the GPU driver to use some sort of
scatter-gather concept to maximize memory usage efficiency.

To that end this patch keeps the notion of a scatter-gather list but
implements it in the nvgpu common code. It is based heavily on the Linux
SGL concept. It is a singly linked list of blocks, each representing a
chunk of memory. To map or use a DMA allocation SW must iterate over
each block in the SGL.

This patch implements the most basic level of support for this data
structure. There are certainly easy optimizations that could be done to
speed up the current implementation. However, this patch's goal is to
simply divest the core MM code from any last Linux'isms. Speed and
efficiency come next.

Change-Id: Icf44641db22d87fa1d003debbd9f71b605258e42
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530867
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
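A singly linked SGL of this kind reduces to a small struct plus a walk
loop. The field names below are an assumed sketch, not the actual nvgpu
definition:

    struct nvgpu_mem_sgl {
            struct nvgpu_mem_sgl *next;
            u64 phys;   /* physical address of this chunk      */
            u64 dma;    /* DMA (IOMMU) address, if mapped      */
            u64 length; /* chunk size in bytes                 */
    };

    /* Mapping code then walks the chain chunk by chunk, e.g.:
     *   for (sgl = head; sgl != NULL; sgl = sgl->next)
     *           map_one_chunk(vm, sgl->phys, sgl->length);
     */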
* gpu: nvgpu: add NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST (David Li, 2017-08-31)

NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST causes the host to expire
the current timeslice and reschedule from the front of the runlist. This
can be used with NVGPU_RUNLIST_INTERLEAVE_LEVEL_HIGH to make a channel
start sooner after submit, rather than waiting for natural timeslice
expiration or block/finish of the currently running channel.

Bug 1968813

Change-Id: I632e87c5f583a09ec8bf521dc73f595150abebb0
Signed-off-by: David Li <davli@nvidia.com>
Reviewed-on: http://git-master/r/#/c/1537198
Reviewed-on: https://git-master.nvidia.com/r/1537198
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
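From user space the flag is simply OR'd into the submit args. A hedged
sketch; the args-struct layout and fd handling are elided/illustrative,
while the flag and ioctl names follow the nvgpu uapi style:

    struct nvgpu_submit_gpfifo_args args = {0};

    /* ...fill in gpfifo entries, entry count, fence setup... */
    args.flags |= NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST;

    if (ioctl(channel_fd, NVGPU_IOCTL_CHANNEL_SUBMIT_GPFIFO, &args) != 0)
            perror("submit");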
* gpu: nvgpu: vgpu: get engines info from RM server (Richard Zhao, 2017-08-28)

- Get engines info from constants.
- Remove the corresponding HAL from gp10b vgpu.

Jira VFND-3797

Change-Id: If010e59c358ab0519cb0d8d6211c0bcc20fc3723
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1536179
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: vgpu: gp10b: Add map failure debugging (Alex Waterman, 2017-08-28)

The vGPU mapping code in gp10b gives very little debugging info when
there's a failure. This change adds much more detail to the error
printing so that these failures can more easily be debugged without
needing to run the virtual guest locally.

Change-Id: Ibb4412fd4ab322b366f0e08eaa399b7b90ea22c7
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1544506
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: move TEGRA_VGPU_ATTRIB_PREEMPT_CTX_SIZE to constants (Richard Zhao, 2017-06-09)

Also removed the deprecated TEGRA_VGPU_ATTRIB_* values, but left a
placeholder in case someone wants to use this command in the future.

Jira VFND-3796

Change-Id: Ic36a59db238d276b0e3dd68a9d8ec5834a04333d
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1457497
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: remove duplicate \n from log messages (Stephen Warren, 2017-05-26)

nvgpu_log/info/warn/err() internally add a \n to the end of the message.
Hence, callers should not include a \n at the end of the message. Doing
so results in duplicate \n being printed, which ends up creating empty
log messages. Remove the duplicate \n from all err/warn messages.

Bug 1928311

Change-Id: I99362c5327f36146f28ba63d4e68181589735c39
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-on: http://git-master/r/1487232
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
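The before/after is a one-character diff per call site; the message text
here is made up for illustration:

    /* Before: the trailing \n doubles the one the macro appends,
     * yielding an empty log line. */
    nvgpu_err(g, "map failed for ch %d\n", chid);

    /* After: let nvgpu_err() terminate the line itself. */
    nvgpu_err(g, "map failed for ch %d", chid);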
* gpu: nvgpu: Refactor gk20a_vm_alloc_va() (Alex Waterman, 2017-05-24)

This function is internal to the VM manager and allocates virtual memory
space in the GVA allocator. It is unfortunately used in the vGPU code,
though. In any event, this patch cleans up and moves the implementation
of these functions into the VM common code.

JIRA NVGPU-12
JIRA NVGPU-30

Change-Id: I24a3d29b5fcb12615df27d2ac82891d1bacfe541
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1477745
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add wrapper nvgpu/bug.h (Terje Bergstrom, 2017-04-13)

Add wrapper header file nvgpu/bug.h. It #includes <linux/bug.h> in
Linux.

JIRA NVGPU-13

Change-Id: I7bf02ba554333f7cbd79d72bd1cb423c81ebcb49
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1461545
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
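Wrapper headers like this keep OS includes out of common code: common
code includes <nvgpu/bug.h>, and only the wrapper knows about Linux. A
sketch of the Linux-side wrapper; the include-guard name is
illustrative:

    /* nvgpu/bug.h -- Linux variant */
    #ifndef NVGPU_BUG_H
    #define NVGPU_BUG_H

    #include <linux/bug.h>  /* provides BUG(), BUG_ON(), WARN_ON() */

    #endif /* NVGPU_BUG_H */

Other OSes would provide their own nvgpu/bug.h with equivalent macros,
so common code never includes <linux/bug.h> directly.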
* gpu: nvgpu: vgpu: Use new error macros (Terje Bergstrom, 2017-04-10)

gk20a_err() and gk20a_warn() require a struct device pointer, which is
not portable across operating systems. The new nvgpu_err() and
nvgpu_warn() macros take a struct gk20a pointer. Convert code to use the
more portable macros.

JIRA NVGPU-16

Change-Id: I071e8c50959bfa81730ca964d912bc69f9c7e6ad
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1457355
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>