path: root/drivers/gpu/nvgpu/vgpu/gp10b
Commit log (commit message, author, date):
* gpu: nvgpu: Do not assign GPU classes in vgpu HAL (Terje Bergstrom, 2017-11-13)
  GPU class ids were moved to get_litter_value API, but vgpu was not updated to remove assigning them in HAL initialization. Remove the duplicate assignments.

  JIRA NVGPU-388
  Change-Id: I65cf8f9cfcfc372c1c3b0d9239e55f19c9a02f46
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1596247
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: remove NVGPU_ALLOC_OBJ_FLAGS_* from common code (Deepak Nibade, 2017-11-10)
  In gr_gp10b_alloc_gr_ctx() we use the Linux-specific flags NVGPU_ALLOC_OBJ_FLAGS_*. Since common code should be independent of Linux-specific code, define new flags NVGPU_OBJ_CTX_FLAGS_SUPPORT_* in common code and use them wherever needed. Linux code will parse the user flags and send the appropriate flags to g->ops.gr.alloc_obj_ctx(). Also remove the use of NVGPU_ALLOC_OBJ_FLAGS_LOCKBOOST_ZERO, since it appears to be dead code anyway.

  Jira NVGPU-382
  Change-Id: Id82efe0d46ddc3e2c063610025ea57f283bc3510
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1594452
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
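  As a rough illustration of the boundary this commit draws (a sketch only; the flag suffixes, values, and helper name below are assumptions, not the driver's actual definitions), the Linux layer translates the user-visible allocation flags into the OS-independent ones before calling into g->ops.gr.alloc_obj_ctx():

    #include <stdint.h>

    /* Assumed values for illustration; only the flag name prefixes come from
     * the commit above. */
    #define NVGPU_ALLOC_OBJ_FLAGS_GFXP        (1u << 1)
    #define NVGPU_ALLOC_OBJ_FLAGS_CILP        (1u << 2)
    #define NVGPU_OBJ_CTX_FLAGS_SUPPORT_GFXP  (1u << 0)
    #define NVGPU_OBJ_CTX_FLAGS_SUPPORT_CILP  (1u << 1)

    /* Hypothetical helper: map Linux uapi flags onto the common-code flags
     * that g->ops.gr.alloc_obj_ctx() understands. */
    static uint32_t obj_ctx_user_to_common_flags(uint32_t user_flags)
    {
        uint32_t flags = 0;

        if (user_flags & NVGPU_ALLOC_OBJ_FLAGS_GFXP)
            flags |= NVGPU_OBJ_CTX_FLAGS_SUPPORT_GFXP;
        if (user_flags & NVGPU_ALLOC_OBJ_FLAGS_CILP)
            flags |= NVGPU_OBJ_CTX_FLAGS_SUPPORT_CILP;

        return flags;
    }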
* gpu: nvgpu: Remove PTE kind logic (Sami Kiminki, 2017-11-10)
  Since NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL was made mandatory, kernel does not need to know the details about the PTE kinds anymore. Thus, we can remove the kind_gk20a.h header and the code related to kind table setup, as well as simplify buffer mapping code a bit.

  Bug 1902982
  Change-Id: Iaf798023c219a64fb0a84da09431c5ce4bc046eb
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1560933
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Move fuse override DT handling (Terje Bergstrom, 2017-11-09)
  Move fuse override DT handling to Linux code. All the chip specific fuse override functions did the same thing, so delete the HAL and call the same function to read the DT overrides on all chips. Also remove the fuse override functionality from dGPU. There are no DT entries for PCIe devices, so it would've failed anyway.

  JIRA NVGPU-259
  Change-Id: Iba64a5d53bf4eb94198c0408a462620efc2ddde4
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1593687
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove support for legacy mapping (Sami Kiminki, 2017-11-08)
  Make NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL mandatory for all map IOCTLs. We'll clean up the legacy kernel code in subsequent patches. Remove support for NVGPU_AS_IOCTL_MAP_BUFFER; it has been superseded by NVGPU_AS_IOCTL_MAP_BUFFER_EX. Remove the legacy definitions of nvgpu_map_buffer_args and the related flags, and update the in-kernel map calls accordingly by switching to the newer definitions.

  Bug 1902982
  Change-Id: Ie9a7f02b8d5d0ec7c3722c4481afab6d39b4fbd0
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1560932
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: modify tsg enable sequence (Aparna Das, 2017-11-01)
  The TSG enable sequence in native code has been modified due to a hardware bug: all channels in a TSG with NEXT and CTX_RELOAD set must be enabled first, and only then the rest of the channels. However, it is not possible to check whether NEXT and CTX_RELOAD are set in vgpu. Keep a separate implementation of the TSG enable sequence for vgpu until the fix for the hardware bug is implemented for the virtualized configuration.

  Bug 200348087
  Change-Id: I6bfc52138bc540c0ea0ad18a85155eeff6f9efa8
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1588740
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
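  A minimal sketch of the vgpu-only sequence described above (type and function names are placeholders, not the driver's): since a guest cannot read the NEXT/CTX_RELOAD state, the virtual path simply enables every channel bound to the TSG in one pass.

    struct channel_sketch { struct channel_sketch *next; };
    struct tsg_sketch { struct channel_sketch *ch_list; };

    /* Placeholder for the per-channel enable call (an RM-server request in
     * the virtualized case). */
    static void enable_channel(struct channel_sketch *ch) { (void)ch; }

    static void vgpu_enable_tsg_sketch(struct tsg_sketch *tsg)
    {
        struct channel_sketch *ch;

        /* No NEXT/CTX_RELOAD ordering as in the native workaround:
         * enable all channels of the TSG unconditionally. */
        for (ch = tsg->ch_list; ch != NULL; ch = ch->next)
            enable_channel(ch);
    }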
* gpu: nvgpu: vgpu: unset verify status ctx reload (Aparna Das, 2017-10-29)
  The native code for verifying TSG status on ctx reload cannot be used on vgpu. Unset the gops->fifo.tsg_verify_status_faulted operation for vgpu for now. This needs to be implemented separately for vgpu later.

  Bug 200348087
  Change-Id: I73791401de1ce7b7f8644ea4f9ccae3fc51dc7aa
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1585783
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: halify size of patch buffer (David Nieto, 2017-10-26)
  Allow per chip calculation of gr patch buffer size and set default to match hw default of 512 data-address pair entries (4K).

  bug 200350539
  Change-Id: I6010c9e0304332825cb02612d3f10523ef27d128
  Signed-off-by: David Nieto <dmartineznie@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1584033
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add max_css_buffer_size characteristic (Peter Daifuku, 2017-10-25)
  Add max_css_buffer_size to the gpu characteristics. In the virtual case, the size of the cycle stats snapshot buffer is constrained by the size of the mempool shared between the guest OS and the RM server, so tools need a way to find out the maximum size allowed. In the native case we return 0xffffffff to indicate that the buffer size is unbounded (subject to memory availability); in the virtual case we return the size of the mempool. Also collapse the native init_cyclestats functions into a single version, as each chip had identical copies of the code.

  JIRA ESRM-54
  Bug 200296210
  Change-Id: I71764d32c6e71a0d101bd40f274eaa4bea3e5b11
  Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1578930
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
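  A sketch of the two behaviours described above (struct and function names are illustrative, not the driver's): native advertises an effectively unbounded snapshot buffer, while vgpu advertises the shared mempool size.

    #include <stdint.h>

    #define CSS_BUFFER_SIZE_UNBOUNDED 0xffffffffu

    struct gpu_characteristics_sketch { uint32_t max_css_buffer_size; };

    /* Native: the buffer is only limited by memory availability. */
    static void native_init_cyclestats(struct gpu_characteristics_sketch *c)
    {
        c->max_css_buffer_size = CSS_BUFFER_SIZE_UNBOUNDED;
    }

    /* Virtual: the snapshot buffer must fit in the guest/RM-server mempool. */
    static void vgpu_init_cyclestats(struct gpu_characteristics_sketch *c,
                                     uint32_t mempool_size)
    {
        c->max_css_buffer_size = mempool_size;
    }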
* gpu: nvgpu: initialize czf_bypass only once (Deepak Nibade, 2017-10-25)
  We currently initialize the czf_bypass value in gr_gp10b_init_preemption_state(), which runs at every rail ungate. As a result, any value the user sets through sysfs is lost after a railgate. To fix this, move the initialization of czf_bypass to gk20a_init_gr_setup_sw() so that it is initialized only once. Add a new HAL, g->ops.gr.init_czf_bypass, for this and define it for gp10b/gp106/vgpu-gp10b.

  Bug 2008262
  Change-Id: I80a38ef527c86e32c6d64d0626b867239db9ea51
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1585224
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
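  Roughly, the fix looks like the sketch below (names and the default value are assumptions): the chip-specific hook runs once from SW setup rather than on every ungate, so a later sysfs write is not overwritten.

    #include <stdint.h>

    #define CZF_BYPASS_DEFAULT 3u /* placeholder, not the actual HW default */

    struct gr_sketch { uint32_t czf_bypass; };
    struct gr_ops_sketch { void (*init_czf_bypass)(struct gr_sketch *gr); };

    static void gp10b_init_czf_bypass_sketch(struct gr_sketch *gr)
    {
        gr->czf_bypass = CZF_BYPASS_DEFAULT;
    }

    /* Called once from gr SW setup, not from the railgate/ungate path, so a
     * value written through sysfs afterwards survives power cycling. */
    static void init_gr_setup_sw_sketch(struct gr_sketch *gr,
                                        const struct gr_ops_sketch *ops)
    {
        if (ops->init_czf_bypass != NULL)
            ops->init_czf_bypass(gr);
    }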
* gpu: nvgpu: Refactoring nvgpu_vm functions (Alex Waterman, 2017-10-18)
  Refactor the last nvgpu_vm functions out of the mm_gk20a.c code. This also removes some usages of dma_buf from mm_gk20a.c, which helps make it less Linux-specific. Also delete some Linux-specific headers that are no longer necessary in gk20a/mm_gk20a.c. The mm_gk20a.c code is now quite close to being Linux-free.

  JIRA NVGPU-30
  JIRA NVGPU-138
  Change-Id: I72b370bd85a7b029768b0fb4827d6abba42007c3
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1566629
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Move rest of CDE structures to Linux (Terje Bergstrom, 2017-10-17)
  Move the rest of the CDE structures to common/linux. This includes moving the per-chip firmware file interpretation functions, removing the CDE ops from the HAL, and adding them to nvgpu_os_linux.

  JIRA NVGPU-259
  Change-Id: I59d8f44bddadecef81ad3c455b363a14034c5e13
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1570403
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: flatten out vgpu hal (Peter Daifuku, 2017-10-13)
  Instead of calling the native HAL init function then adding multiple layers of modification for VGPU, flatten out the sequence so that all entry points are set statically and visible in a single file.

  JIRA ESRM-30
  Change-Id: Ie424abb48bce5038874851d399baac5e4bb7d27c
  Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1574616
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
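  The shape of the result is something like this (member and function names are made up for illustration): one statically initialized ops table in a single vgpu file, rather than the native table patched layer by layer.

    struct gpu_ops_sketch {
        int  (*fifo_enable_tsg)(void *tsg);
        int  (*gr_alloc_obj_ctx)(void *ch, unsigned int flags);
        void (*gr_init_czf_bypass)(void *gr);
    };

    static int  vgpu_enable_tsg(void *tsg)                      { (void)tsg; return 0; }
    static int  vgpu_gr_alloc_obj_ctx(void *ch, unsigned int f) { (void)ch; (void)f; return 0; }
    static void vgpu_gr_init_czf_bypass(void *gr)               { (void)gr; }

    /* Every vgpu entry point is visible here; nothing is inherited from the
     * native HAL init and silently overridden later. */
    static const struct gpu_ops_sketch vgpu_gp10b_ops_sketch = {
        .fifo_enable_tsg    = vgpu_enable_tsg,
        .gr_alloc_obj_ctx   = vgpu_gr_alloc_obj_ctx,
        .gr_init_czf_bypass = vgpu_gr_init_czf_bypass,
    };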
* gpu: nvgpu: Change license for common files to MIT (Terje Bergstrom, 2017-09-26)
  Change license of OS independent source code files to MIT.

  JIRA NVGPU-218
  Change-Id: I1474065f4b552112786974a16cdf076c5179540e
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1565880
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: SGL passthrough implementation (Sunny He, 2017-09-22)
  The basic nvgpu_mem_sgl implementation provides support for OS specific scatter-gather list implementations by simply copying them node by node. This is inefficient, taking extra time and memory.

  This patch implements an nvgpu_mem_sgt struct to act as a header which is inserted at the front of any scatter-gather list implementation. This labels every struct with a set of ops which can be used to interact with the attached scatter gather list. Since nvgpu common code only has to interact with these function pointers, any sgl implementation can be used. Initialization only requires the allocation of a single struct, removing the need to copy or iterate through the sgl being converted.

  Jira NVGPU-186
  Change-Id: I2994f804a4a4cc141b702e987e9081d8560ba2e8
  Signed-off-by: Sunny He <suhe@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1541426
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: nvgpu SGL implementation (Alex Waterman, 2017-09-22)
  The last major item preventing the core MM code in the nvgpu driver from being platform agnostic is the usage of Linux scattergather tables and scattergather lists. These data structures are used throughout the mapping code to handle discontiguous DMA allocations and are also overloaded to represent VIDMEM allocs.

  The notion of a scatter gather table is crucial to a HW device that can handle discontiguous DMA. The GPU has an MMU which allows the GPU to do page gathering and present a virtually contiguous buffer to the GPU HW. As a result it makes sense for the GPU driver to use some sort of scatter gather concept to maximize memory usage efficiency.

  To that end this patch keeps the notion of a scatter gather list but implements it in the nvgpu common code. It is based heavily on the Linux SGL concept. It is a singly linked list of blocks, each representing a chunk of memory. To map or use a DMA allocation, SW must iterate over each block in the SGL.

  This patch implements the most basic level of support for this data structure. There are certainly easy optimizations that could be done to speed up the current implementation. However, this patch's goal is simply to divest the core MM code of any last Linuxisms. Speed and efficiency come next.

  Change-Id: Icf44641db22d87fa1d003debbd9f71b605258e42
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1530867
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
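  The data structure described by the two SGL commits above can be pictured roughly as follows (a sketch with assumed names, not the driver's actual declarations): a small header carries an ops table, and common code walks whatever OS-specific list sits behind it purely through those function pointers.

    #include <stdint.h>
    #include <stddef.h>

    struct nvgpu_sgl_sketch; /* opaque: one block/chunk of the underlying list */

    struct nvgpu_sgt_ops_sketch {
        struct nvgpu_sgl_sketch *(*next)(struct nvgpu_sgl_sketch *sgl);
        uint64_t (*phys)(struct nvgpu_sgl_sketch *sgl);
        uint64_t (*length)(struct nvgpu_sgl_sketch *sgl);
    };

    struct nvgpu_sgt_sketch {
        const struct nvgpu_sgt_ops_sketch *ops;
        struct nvgpu_sgl_sketch *sgl; /* head of the attached list */
    };

    /* Common code only ever iterates through the ops, so any OS's
     * scatter-gather implementation can be plugged in without copying it. */
    static uint64_t sgt_total_length_sketch(struct nvgpu_sgt_sketch *sgt)
    {
        struct nvgpu_sgl_sketch *sgl;
        uint64_t total = 0;

        for (sgl = sgt->sgl; sgl != NULL; sgl = sgt->ops->next(sgl))
            total += sgt->ops->length(sgl);

        return total;
    }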
* gpu: nvgpu: add NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST (David Li, 2017-08-31)
  NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST causes host to expire the current timeslice and reschedule from the front of the runlist. This can be used together with NVGPU_RUNLIST_INTERLEAVE_LEVEL_HIGH to make a channel start sooner after submit, rather than waiting for natural timeslice expiration or for the currently running channel to block or finish.

  Bug 1968813
  Change-Id: I632e87c5f583a09ec8bf521dc73f595150abebb0
  Signed-off-by: David Li <davli@nvidia.com>
  Reviewed-on: http://git-master/r/#/c/1537198
  Reviewed-on: https://git-master.nvidia.com/r/1537198
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
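  A user-space sketch of the intended use (the args struct and the flag value below are assumptions; only the flag name comes from the commit): a latency-sensitive channel sets the flag on submit so its work is picked up without waiting for the current timeslice to run out.

    #include <stdint.h>

    #define NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST (1u << 6) /* assumed value */

    struct submit_gpfifo_args_sketch {
        uint32_t flags;
        uint32_t num_entries;
        /* gpfifo entries, fence, etc. elided */
    };

    static void fill_low_latency_submit(struct submit_gpfifo_args_sketch *args,
                                        uint32_t num_entries)
    {
        args->num_entries = num_entries;
        /* Ask host to expire the current timeslice and reschedule from the
         * front of the runlist once this submit is queued. */
        args->flags = NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST;
    }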
* gpu: nvgpu: vgpu: get engines info from RM server (Richard Zhao, 2017-08-28)
  - get engines info from constants
  - remove the corresponding HAL from gp10b vgpu

  Jira VFND-3797
  Change-Id: If010e59c358ab0519cb0d8d6211c0bcc20fc3723
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1536179
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: vgpu: gp10b: Add map failure debugging (Alex Waterman, 2017-08-28)
  The vGPU mapping code in gp10b gives very little debugging info when there's a failure. This change adds much more detail to the error printing so that these failures can more easily be debugged without needing to run the virtual guest locally.

  Change-Id: Ibb4412fd4ab322b366f0e08eaa399b7b90ea22c7
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1544506
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: move TEGRA_VGPU_ATTRIB_PREEMPT_CTX_SIZE to constants (Richard Zhao, 2017-06-09)
  Also remove the deprecated TEGRA_VGPU_ATTRIB_* values, but leave a placeholder in case someone wants to use this command in the future.

  Jira VFND-3796
  Change-Id: Ic36a59db238d276b0e3dd68a9d8ec5834a04333d
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1457497
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: remove duplicate \n from log messages (Stephen Warren, 2017-05-26)
  nvgpu_log/info/warn/err() internally add a \n to the end of the message. Hence, callers should not include a \n at the end of the message. Doing so results in duplicate \n being printed, which ends up creating empty log messages. Remove the duplicate \n from all err/warn messages.

  Bug 1928311
  Change-Id: I99362c5327f36146f28ba63d4e68181589735c39
  Signed-off-by: Stephen Warren <swarren@nvidia.com>
  Reviewed-on: http://git-master/r/1487232
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
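  For illustration only (nvgpu_err() here is a local stub that merely mimics the relevant property, namely that the macro appends the newline itself):

    #include <stdio.h>

    /* Stub standing in for the real macro: it adds the trailing newline. */
    #define nvgpu_err(g, msg) ((void)(g), fprintf(stderr, "nvgpu: %s\n", (msg)))

    int main(void)
    {
        void *g = NULL;

        nvgpu_err(g, "gr ctx alloc failed\n"); /* before: prints an extra empty line */
        nvgpu_err(g, "gr ctx alloc failed");   /* after: one clean log line */
        return 0;
    }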
* gpu: nvgpu: Refactor gk20a_vm_alloc_va() (Alex Waterman, 2017-05-24)
  This function is an internal function to the VM manager that allocates virtual memory space in the GVA allocator. It is unfortunately used in the vGPU code, though. In any event, this patch cleans up and moves the implementation of these functions into the VM common code.

  JIRA NVGPU-12
  JIRA NVGPU-30
  Change-Id: I24a3d29b5fcb12615df27d2ac82891d1bacfe541
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1477745
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add wrapper nvgpu/bug.h (Terje Bergstrom, 2017-04-13)
  Add wrapper header file nvgpu/bug.h. It #includes <linux/bug.h> in Linux.

  JIRA NVGPU-13
  Change-Id: I7bf02ba554333f7cbd79d72bd1cb423c81ebcb49
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1461545
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: Use new error macros (Terje Bergstrom, 2017-04-10)
  gk20a_err() and gk20a_warn() require a struct device pointer, which is not portable across operating systems. The new nvgpu_err() and nvgpu_warn() macros take a struct gk20a pointer. Convert code to use the more portable macros.

  JIRA NVGPU-16
  Change-Id: I071e8c50959bfa81730ca964d912bc69f9c7e6ad
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1457355
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: merge tegra_vgpu_t18x.h to tegra_vgpu.h (Richard Zhao, 2017-04-07)
  No need to keep two vgpu headers anymore.

  Jira VFND-3796
  Change-Id: I400cbfa5b2c0e62963eff247adcd9483be975379
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1457480
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Rename nvgpu DMA APIs (Alex Waterman, 2017-04-06)
  Rename the nvgpu DMA APIs from gk20a_gmmu_alloc* to nvgpu_dma_alloc*. This better reflects the purpose of the APIs (to allocate DMA suitable memory) and avoids confusion with GMMU related code.

  JIRA NVGPU-12
  Change-Id: I673d607db56dd6e44f02008dc7b5293209ef67bf
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1325548
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Move DMA API to dma.h (Alex Waterman, 2017-04-06)
  Make an nvgpu DMA API include file so that the intricacies of the Linux DMA API can be hidden from the calling code. Also document the nvgpu DMA API.

  JIRA NVGPU-12
  Change-Id: I7578e4c726ad46344b7921179d95861858e9a27e
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1323326
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: rename mem_desc to nvgpu_mem (Alex Waterman, 2017-04-06)
  Renaming was done with the following command:

    $ find -type f | \
        xargs sed -i 's/struct mem_desc/struct nvgpu_mem/g'

  Also rename mem_desc.[ch] to nvgpu_mem.[ch].

  JIRA NVGPU-12
  Change-Id: I69395758c22a56aa01e3dffbcded70a729bf559a
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1325547
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Rename gk20a_mem_* functions (Alex Waterman, 2017-04-06)
  Rename the functions used for mem_desc access to nvgpu_mem_*.

  JIRA NVGPU-12
  Change-Id: Ibfdc1112d43f0a125e4487c250e3f977ffd2cd75
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1323325
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Use new kmem API functions (vgpu/*) (Alex Waterman, 2017-03-28)
  Use the new kmem API functions in vgpu/*. Also reshuffle the order of some allocs in the vgpu init code to allow usage of the nvgpu kmem APIs.

  Bug 1799159
  Bug 1823380
  Change-Id: I6c6dcff03b406a260dffbf89a59b368d31a4cb2c
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1318318
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: force gpu preemption policy (Aparna Das, 2017-03-14)
  Query the RM server to retrieve the gpu preemption policy of the guest based on the pct configuration. If the guest is not allowed to request wfi preemption mode, then set the context with either gfxp or cta preemption mode only.

  Jira VFND-3079
  Jira VFND-3081
  Change-Id: I60cbf121d6f0e2373568cf40b3dfdb4df76fe02d
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: http://git-master/r/1280903
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Sachit Kadle <skadle@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
* gpu: nvgpu: Remove nvgpu_gpuid_t18x.h (Alex Waterman, 2017-03-02)
  Remove nvgpu_gpuid_t18x.h since this file is now visible. Migrate the relevant definitions and defines into their expected places and make the code use the real defines. No longer is hiding t18x specific stuff necessary.

  Bug 1799159
  Change-Id: I47fa2392e46fdb7aacc70aeb0cc8c3f5ca0dc22f
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1300976
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: ignore set preempt mode to default (Thomas Fleury, 2017-02-24)
  In the native case, attempting to set the graphics/compute preempt mode to default is ignored if a preemption mode has already been set for the context. In the virtualized case an error is currently returned. Align the behaviour of the native and virtualized cases by ignoring such a request.

  Bug 200186530
  Change-Id: Ieb3a37107bdbd3284804ee9fd392786409224082
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1306506
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
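  The aligned behaviour amounts to an early-out along these lines (enum and field names are placeholders): a request for the default mode on a context that already has a mode set is silently ignored rather than failed.

    enum preempt_mode_sketch {
        PREEMPT_MODE_DEFAULT = 0,
        PREEMPT_MODE_GFXP,
        PREEMPT_MODE_CILP,
    };

    struct gr_ctx_sketch { enum preempt_mode_sketch graphics_preempt_mode; };

    static int set_preemption_mode_sketch(struct gr_ctx_sketch *ctx,
                                          enum preempt_mode_sketch requested)
    {
        /* Ignore "set to default" once a mode has already been chosen,
         * matching the native behaviour. */
        if (requested == PREEMPT_MODE_DEFAULT &&
            ctx->graphics_preempt_mode != PREEMPT_MODE_DEFAULT)
            return 0;

        ctx->graphics_preempt_mode = requested;
        /* ...forward the request to the RM server here in the vgpu case... */
        return 0;
    }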
* gpu: nvgpu: Organize semaphore_gk20a.[ch] (Alex Waterman, 2017-02-13)
  Move semaphore_gk20a.c to drivers/gpu/nvgpu/common/ since the semaphore code is common to all chips. Move the semaphore_gk20a.h header file to drivers/gpu/nvgpu/include/nvgpu and rename it to semaphore.h. Also update all places where the header is included to use the new path.

  This revealed an odd location for the enum gk20a_mem_rw_flag, which should be in the mm headers. As a result, many places that did not need anything semaphore related had to include the semaphore header file. Fixing this oddity allowed the semaphore include to be removed from many C files that did not need it.

  Bug 1799159
  Change-Id: Ie017219acf34c4c481747323b9f3ac33e76e064c
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1284627
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Move gp10b HW headers (Alex Waterman, 2017-01-11)
  Move the gp10b HW headers to a new directory specially for them:

    include/nvgpu/hw/gp10b

  And change the code to include like so:

    #include <nvgpu/hw/gp10b/hw_fb_gp10b.h>

  This is part of the process to restructure the nvgpu driver.

  Bug 1799159
  Change-Id: Ic80ea5b7f5c280839e502e2178a345181f7a7ef9
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1280326
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: no support for sparse mapping (Aparna Das, 2016-12-27)
  Currently sparse mapping is not supported for gp10b in virtualized environment. Modify gpu characteristics to reflect non-implementation of this functionality. Also fix return value in vgpu_gp10b_locked_gmmu_map() on error condition.

  Bug 200243373
  Change-Id: Ia367b923b87738a5cad0617cdb074f5a24fb1c81
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: http://git-master/r/1269710
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Sachit Kadle <skadle@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
* gpu: nvgpu: vgpu: fix va leak when calling gk20a_vm_free_va (Richard Zhao, 2016-12-27)
  The page size index needs to be set explicitly when calling gk20a_vm_free_va().

  Bug 200255799
  JIRA VFND-3033
  Change-Id: Ic23ea68905ea423173d1859fd100e7b2c82a1bcc
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1262590
  (cherry picked from commit 918aea147b395f7337db348d2616fb4b195dc53a)
  Reviewed-on: http://git-master/r/1263400
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
* gpu: nvgpu: vgpu: add set_preemption_mode (Richard Zhao, 2016-12-27)
  Implement HAL callback set_preemption_mode.

  Bug 200238497
  JIRA VFND-2683
  Change-Id: I8fca8e1ba112d8782ce18f0899eca38a1d12b512
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1236976
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: move to use vgpu_get_handle helper function (Richard Zhao, 2016-12-27)
  JIRA VFND-2103
  Change-Id: Ic11cff40e64849cb6abb193bec54d03857433416
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1185205
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
* gpu: nvgpu: gp10x: support in-kernel vidmem mappings (Konsta Holtta, 2016-12-27)
  Propagate the buffer aperture flag in gk20a_locked_gmmu_map up so that buffers represented as a mem_desc and present in vidmem can be mapped to gpu.

  JIRA DNVGPU-18
  JIRA DNVGPU-76
  Change-Id: Icd675e83e3c28836f0ed8880425748697713bb0a
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/1169296
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: Add CE engine to engine list (Terje Bergstrom, 2016-12-27)
  Initialize CE engine also for gp10b.

  Change-Id: Ibce2f80b523a09fb1345995c03c5430f3b20844f
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1170453
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Tested-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Lakshmanan M <lm@nvidia.com>
* gpu: nvgpu: vgpu: manage gr_ctx as independent resource (Richard Zhao, 2016-12-27)
  gr_ctx will be managed as an independent resource in the RM server, and vgpu can get a gr_ctx handle.

  Bug 1702773
  Change-Id: Ifceb44b7d9a1ba03fc2a4df847f4a78ac4c4a0d4
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1144934
  (cherry picked from commit 0da3101d9b59fe1f9a47ce7b70b30cb8919f35ac)
  Reviewed-on: http://git-master/r/1150707
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: API to set preemption mode (Deepak Nibade, 2016-12-27)
  Separate out a new API, gr_gp10b_set_ctxsw_preemption_mode(), which checks the requested preemption modes and takes the appropriate action for each one. This API also does some sanity checking for valid preemption modes and combinations.

  Define the API set_preemption_mode() for gp10b, which sets the preemption modes passed as arguments and then uses gr_gp10b_set_ctxsw_preemption_mode() and update_ctxsw_preemption_mode() to update the preemption mode.

  The legacy path from gr_gp10b_alloc_gr_ctx() converts the NVGPU_ALLOC_OBJ_FLAGS_* flags into the appropriate preemption modes and then calls gr_gp10b_set_ctxsw_preemption_mode(). The new API set_preemption_mode() uses the new NVGPU_GRAPHICS/COMPUTE_PREEMPTION_MODE_* flags to set and update the ctxsw preemption mode.

  In gr_gp10b_update_ctxsw_preemption_mode(), update the graphics context to set CTA preemption mode if NVGPU_COMPUTE_PREEMPTION_MODE_CTA is set.

  Also, define the preemption modes in nvgpu-t18x.h and use them everywhere. Remove the old definitions of the modes from gr_gp10b.h.

  Bug 1646259
  Change-Id: Ib4dc1fb9933b15d32f0122a9e52665b69402df18
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1131806
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: gp10b: Allow importing makefile via include (Terje Bergstrom, 2016-12-27)
  Refactor makefiles so that there is one makefile, and that file can be included in the main nvgpu build.

  Bug 1476801
  Change-Id: I23ac451d695fc64064de2300e83b9d9487c52743
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1028353
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: vgpu: fix sparse warnings (Richard Zhao, 2016-12-27)
  Bug 200088648
  Change-Id: I61be7b4787e9bc9bac310a8739977f43c38a67ee
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1000174
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: update interface to free GR ctx (Aingara Paramakuru, 2016-12-27)
  The server only releases ownership of the ctxsw buffer mappings after the GR ctx has been released. Update the sequence to account for this.

  JIRA VFND-1117
  Bug 1708163
  Change-Id: I3aed015805b4ca51433e7d37ad32de2f8353999f
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/922817
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
* gpu: nvgpu: gp10b: map GfxP buffers as GPU cacheable (Aingara Paramakuru, 2016-12-27)
  Some of the allocated buffers are used during normal graphics processing. Mark them as GPU cacheable to improve performance.

  Bug 1695718
  Change-Id: I71d5d1538516e966526abe5e38a557776321597f
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/827087
  (cherry picked from commit 60b40ac144c94e24a2c449c8be937edf8865e1ed)
  Reviewed-on: http://git-master/r/828493
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: set correct page size index for gp10b (Richard Zhao, 2016-12-27)
  The VM server only knows big pages and small pages, so convert gmmu_page_size_kernel to the corresponding page size index.

  JIRA VFND-890
  Change-Id: Id1f932752b8ca33d14635ac9d71019364aa89dc4
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/816359
  (cherry picked from commit 5bfc4a2a55889f5457bd34aa06861c042ee67421)
  Reviewed-on: http://git-master/r/827131
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
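  Conceptually the conversion is a small fold of the kernel-internal index onto the two indices the VM server understands (a sketch; the enum names and the selection rule are assumptions):

    enum gmmu_pgsz_sketch {
        GMMU_PAGE_SIZE_SMALL  = 0,
        GMMU_PAGE_SIZE_BIG    = 1,
        GMMU_PAGE_SIZE_KERNEL = 2, /* internal-only index */
    };

    static int vgpu_translate_pgsz_idx(enum gmmu_pgsz_sketch idx,
                                       int big_pages_enabled)
    {
        /* The VM server only knows small and big pages, so the kernel index
         * must be folded into one of those two before sending the map call. */
        if (idx == GMMU_PAGE_SIZE_KERNEL)
            return big_pages_enabled ? GMMU_PAGE_SIZE_BIG : GMMU_PAGE_SIZE_SMALL;

        return (int)idx;
    }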
* gpu: nvgpu: vgpu: add interface to alloc ctxsw buffers (Aingara Paramakuru, 2016-12-27)
  gp10b introduces support for preemption (GfxP and CILP). Add a new interface to allow allocating buffers needed to support this functionality.

  Bug 1677153
  Change-Id: I8578a7b0a4327f3496d852eeb8be5fc778e2c225
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/806963
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/817039
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: add gp10b support (Aingara Paramakuru, 2016-12-27)
  Add support for gp10b in a virtualized environment.

  Bug 1677153
  VFND-693
  Change-Id: I919ffa44c6773940a7a3411ee8bbc403a992b7cb
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/792556
  Reviewed-on: http://git-master/r/806193
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>