path: root/drivers/gpu/nvgpu/vgpu
* gpu: nvgpu: add per-channel refcounting (Konsta Holtta, 2015-06-09)

  Add reference counting for channels, and wait for the reference count to
  reach 0 in gk20a_channel_free() before actually freeing the channel.

  Also change free-channel tracking to use a list of free channels, which
  simplifies finding an available channel under reference counting.

  Each use of a channel must hold a reference, either taken before use or
  held by the caller. Taking a reference on a wild channel pointer may fail
  if the channel is not opened or is in the process of being closed. Also
  add safeguards against accidental use of closed channels, specifically by
  setting ch->g = NULL when the channel is freed; this makes it obvious if
  a freed channel is used.

  The last user of a channel might be the deferred interrupt handler, so
  wait for deferred interrupts to be processed twice in the channel free
  procedure: once to deliver the last notifications to the channel, and
  once to make sure no stale pointers are left behind after references to
  the channel have been denied.

  Finally, fix some races in the channel and TSG force-reset IOCTL path by
  pausing the channel scheduler in gk20a_fifo_recover_ch() and
  gk20a_fifo_recover_tsg() while the affected engines are identified, the
  appropriate MMU faults triggered, and the MMU faults handled. In this
  case, make sure that the MMU fault handling does not query the hardware
  for the failing channel or TSG ids. This should also make channel
  recovery safer in the regular (i.e., not interrupt handler) context.

  Bug 1530226
  Bug 1597493
  Bug 1625901
  Bug 200076344
  Bug 200071810

  Change-Id: Ib274876908e18219c64ea41e50ca443df81d957b
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: http://git-master/r/448463
  (cherry picked from commit 3f03aeae64ef2af4829e06f5f63062e8ebd21353)
  Reviewed-on: http://git-master/r/755147
  Reviewed-by: Automatic_Commit_Validation_User
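  A minimal sketch of the get/put pattern described above, assuming
  simplified struct and helper names; the real driver uses its own channel
  type and serializes the "get" check against the free path:

      #include <linux/atomic.h>
      #include <linux/wait.h>

      struct gk20a;

      struct channel {
          struct gk20a *g;              /* NULL once the channel is freed */
          atomic_t ref_count;
          wait_queue_head_t ref_count_wq;
      };

      /* Take a reference on a possibly-wild channel pointer; may fail. */
      static struct channel *channel_get(struct channel *ch)
      {
          if (!ch->g)                   /* not opened, or being closed */
              return NULL;
          atomic_inc(&ch->ref_count);
          return ch;
      }

      static void channel_put(struct channel *ch)
      {
          if (atomic_dec_and_test(&ch->ref_count))
              wake_up(&ch->ref_count_wq);
      }

      /* Free path: block new references, then wait for users to drain. */
      static void channel_free(struct channel *ch)
      {
          ch->g = NULL;                 /* makes use-after-free obvious */
          wait_event(ch->ref_count_wq, atomic_read(&ch->ref_count) == 0);
          /* ... actually release channel resources here ... */
      }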
* gpu: nvgpu: add zbc support to vgpu (Richard Zhao, 2015-06-06)

  Add callbacks in gr ops for both adding and querying a zbc entry. The
  native GPU driver (gk20a) and vgpu both hook in there. For vgpu, the zbc
  entry is added or queried from the RM server.

  Bug 1558561

  Change-Id: If8a4850ecfbff41d8592664f5f93ad8c25f6fbce
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/732775
  (cherry picked from commit a3787cf971128904c2712338087685b02673065d)
  Reviewed-on: http://git-master/r/737880
  (cherry picked from commit fca2a0457c968656dc29455608f35acab094d816)
  Reviewed-on: http://git-master/r/753278
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
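  A rough sketch of the callback hook-up described here, with illustrative
  struct and member names rather than the driver's exact gr ops layout:

      struct gk20a;
      struct gr_gk20a;
      struct zbc_entry;
      struct zbc_query_params;

      struct gr_ops {
          int (*zbc_set_table)(struct gk20a *g, struct gr_gk20a *gr,
                               struct zbc_entry *val);
          int (*zbc_query_table)(struct gk20a *g, struct gr_gk20a *gr,
                                 struct zbc_query_params *query);
      };

      /*
       * The native driver fills these with gk20a implementations that
       * program the hardware; vgpu points them at stubs that forward the
       * add/query request to the RM server.
       */
      void gk20a_init_gr_ops(struct gr_ops *ops);
      void vgpu_init_gr_ops(struct gr_ops *ops);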
* Revert "Revert "Revert "Revert "gpu: nvgpu: New allocator for VA space""""Bharat Nihalani2015-06-04
| | | | | | | | | | | | | | | This reverts commit 2e5803d0f2b7d7a1577a40f45ab9f3b22ef2df80 since the issue seen with bug 200106514 is fixed with change http://git-master/r/#/c/752080/. Bug 200112195 Change-Id: I588151c2a7ea74bd89dc3fd48bb81ff2c49f5a0a Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com> Reviewed-on: http://git-master/r/752503 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* Revert "Revert "Revert "gpu: nvgpu: New allocator for VA space"""Bharat Nihalani2015-06-02
| | | | | | | | | | | This reverts commit ce1cf06b9a8eb6314ba0ca294e8cb430e1e141c0 since it causes GPU pbdma interrupt to be generated. Bug 200106514 Change-Id: If3ed9a914c4e3e7f3f98c6609c6dbf57e1eb9aad Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com> Reviewed-on: http://git-master/r/749291
* gpu: nvgpu: update channel_setup_ramfc interface (Seshendra Gadagottu, 2015-06-02)

  Pass a flags parameter to channel_setup_ramfc to indicate
  nvgpu_alloc_gpfifo_args characteristics.

  Bug 1645628

  Change-Id: Ia40b37c5c7b208d459aa84f1b022036dd5e1b599
  Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: http://git-master/r/744526
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
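  A sketch of what the updated hook might look like; the exact prototype
  in the driver may differ, this only illustrates the added flags argument
  carrying the gpfifo allocation characteristics:

      #include <linux/types.h>

      struct channel_gk20a;

      int channel_setup_ramfc(struct channel_gk20a *c, u64 gpfifo_base,
                              u32 gpfifo_entries, u32 flags);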
* Revert "Revert "gpu: nvgpu: New allocator for VA space""Alex Waterman2015-05-19
| | | | | | | | | | | | | | This reverts commit 7eb42bc239dbd207208ff491c3fb65c3d83274d8. The original commit was actually fine. Change-Id: I564ce6530ac73fcfad17dcec9c53f0353b4f02d4 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: http://git-master/r/743300 (cherry picked from commit e99aa2485f8992eabe3556f3ebcb57bdc8ad91ff) Reviewed-on: http://git-master/r/743301 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* Revert "gpu: nvgpu: New allocator for VA space"Terje Bergstrom2015-05-12
| | | | | | | | | | | This reverts commit 2e235ac150fa4af8632c9abf0f109a10973a0bf5. Change-Id: I3aa745152124c2bc09c6c6dc5aeb1084ae7e08a4 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/741469 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Hiroshi Doyu <hdoyu@nvidia.com> Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
* gpu: nvgpu: New allocator for VA space (Alex Waterman, 2015-05-11)

  Implement a new buddy allocation scheme for the GPU's VA space. The
  bitmap allocator was using too much memory and is not a scalable
  solution as the GPU's address space keeps getting bigger. The buddy
  allocation scheme is much more memory efficient when the majority of the
  address space is not allocated.

  The buddy allocator is not constrained by the notion of a split address
  space. The bitmap allocator could only manage either small pages or
  large pages, but not both at the same time; thus the bottom of the
  address space was for small pages and the top for large pages. Although
  that split is not removed quite yet, the new allocator enables that to
  happen.

  The buddy allocator is also very scalable. It manages spaces ranging
  from the relatively small comptag space to the enormous GPU VA space,
  and everything in between. This is important since the GPU has lots of
  different-sized spaces that need managing.

  Currently there are certain limitations. For one, the allocator does not
  handle fixed allocations from CUDA very well. It can do so, but with
  certain caveats: the PTE page size is always set to small, which means
  the buddy allocator may place other small-page allocations in the
  buddies around the fixed allocation. It does this to avoid having large-
  and small-page allocations in the same PDE.

  Change-Id: I501cd15af03611536490137331d43761c402c7f9
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/740694
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
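  A minimal illustration of the buddy idea described above: a free block is
  repeatedly split in half ("buddies") until it matches the requested size.
  This is purely conceptual; the driver's allocator is far more involved
  (PDE-aware page sizes, fixed allocations, coalescing on free, etc.):

      #include <stdint.h>
      #include <stddef.h>

      struct buddy {
          uint64_t start;               /* offset in the managed space       */
          unsigned int order;           /* size is min_block_size << order   */
          struct buddy *left, *right;   /* children once split               */
          int allocated;
      };

      /* Split a free block into two buddies of half the size each. */
      static int buddy_split(struct buddy *b, struct buddy *l, struct buddy *r,
                             uint64_t min_block_size)
      {
          uint64_t half = (min_block_size << b->order) / 2;

          if (b->order == 0 || b->allocated || b->left)
              return -1;                /* too small, in use, or already split */

          l->start = b->start;
          r->start = b->start + half;
          l->order = r->order = b->order - 1;
          l->allocated = r->allocated = 0;
          l->left = l->right = r->left = r->right = NULL;
          b->left = l;
          b->right = r;
          return 0;
      }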
* gpu: nvgpu: Use common allocator for patch (Terje Bergstrom, 2015-05-05)

  Reduce the amount of duplicate code around memory allocation by using
  common helpers and a common data structure for storing the results of
  allocations.

  Bug 1605769

  Change-Id: Idf51831e8be9cabe1ab9122b18317137fde6339f
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/721030
  Reviewed-on: http://git-master/r/737530
  Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com>
  Tested-by: Alexander Van Brunt <avanbrunt@nvidia.com>
* gpu: nvgpu: Use common allocator for context (Terje Bergstrom, 2015-04-04)

  Reduce the amount of duplicate code around memory allocation by using
  common helpers and a common data structure for storing the results of
  allocations.

  Bug 1605769

  Change-Id: I10c226e2377aa867a5cf11be61d08a9d67206b1d
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/720507
* gpu: nvgpu: add platform specific get_iova_addr() (Deepak Nibade, 2015-04-04)

  Add a platform-specific API pointer (*get_iova_addr)(), which can be
  used to get the iova/physical address from a given scatterlist and
  flags. Use this API through g->ops.mm.get_iova_addr() instead of calling
  gk20a_mm_iova_addr() directly, which makes the call platform specific.

  Bug 1605653

  Change-Id: I798763db1501bd0b16e84daab68f6093a83caac2
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/713089
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
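  A sketch of the op pointer described here; the exact types in the driver
  may differ. Callers go through the ops table so each platform can plug
  in its own address translation:

      #include <linux/types.h>

      struct gk20a;
      struct scatterlist;

      struct mm_ops {
          u64 (*get_iova_addr)(struct gk20a *g, struct scatterlist *sgl,
                               u32 flags);
      };

      /* Call sites then use: addr = g->ops.mm.get_iova_addr(g, sgl, flags); */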
* gpu: nvgpu: vgpu: add new attributes (Aingara Paramakuru, 2015-04-04)

  Add support for reading the number of FBPs and the FBP enable mask.

  Bug 1621056

  Change-Id: I92ec1123373308ed280d4ffd30fe77ae6073ac45
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/715826
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Implement common allocator and mem_desc (Terje Bergstrom, 2015-04-04)

  Introduce mem_desc, which holds all information needed for a buffer.
  Implement helper functions for allocation and freeing that use this data
  type.

  Change-Id: I82c88595d058d4fb8c5c5fbf19d13269e48e422f
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/712699
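  A sketch of the kind of descriptor this introduces, with illustrative
  member and helper names (not the driver's exact struct mem_desc layout):

      #include <linux/types.h>

      struct vm_gk20a;
      struct sg_table;

      struct mem_desc {
          void *cpu_va;                 /* kernel mapping of the buffer       */
          struct sg_table *sgt;         /* backing pages for DMA/GMMU mapping */
          size_t size;                  /* buffer size in bytes               */
          u64 gpu_va;                   /* GPU virtual address once mapped    */
      };

      /*
       * Hypothetical helpers: allocate and map a buffer, filling in the
       * descriptor, or unmap and free it, clearing the descriptor.
       */
      int alloc_and_map_buf(struct vm_gk20a *vm, size_t size,
                            struct mem_desc *mem);
      void unmap_and_free_buf(struct vm_gk20a *vm, struct mem_desc *mem);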
* gpu: nvgpu: Add DT support for gpu power-domain (Sumit Singh, 2015-04-04)

  First, define a new structure to support the gk20a power domain. Then
  make the necessary modifications to add DT support for the gpu
  power-domain.

  Bug 200070810

  Change-Id: I29e1c24b181e14743d3969103abfd1882d171f07
  Signed-off-by: Sumit Singh <sumsingh@nvidia.com>
  Reviewed-on: http://git-master/r/668973
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
* gpu: nvgpu: vgpu: remove explicit TLB invalidate (Aingara Paramakuru, 2015-04-04)

  The server does an implicit TLB invalidate after map and unmap
  operations.

  Bug 1616964

  Change-Id: Ib6f4a23389f1e5d796d0f4b0be312f438c52927c
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/713221
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: implement to initialize gr->gpc_tpc_mask (Haley Teng, 2015-04-04)

  Bug 1509609

  Change-Id: Ia78bd49518b41bc9f59e3d47a1390b126c7a2230
  Signed-off-by: Haley Teng <hteng@nvidia.com>
  Reviewed-on: http://git-master/r/706861
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Jubeom Kim <jubeomk@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Refactor page mapping code (Terje Bergstrom, 2015-04-04)

  Always pass the directory structure to mm functions instead of pointers
  to its members. Also split update_gmmu_ptes_locked() into smaller
  functions, and turn the hard-coded MMU levels (PDE, PTE) into run-time
  parameters.

  Change-Id: I315ef7aebbea1e61156705361f2e2a63b5fb7bf1
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/672485
  Reviewed-by: Automatic_Commit_Validation_User
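  One way the "run-time MMU levels" idea can be expressed, sketched with
  illustrative names and example bit ranges rather than the driver's
  actual level tables:

      #include <linux/types.h>

      struct vm_gk20a;

      struct mmu_level {
          int hi_bit;                   /* highest VA bit decoded here   */
          int lo_bit;                   /* lowest VA bit decoded here    */
          size_t entry_size;            /* bytes per entry at this level */
          int (*update_entry)(struct vm_gk20a *vm, void *entry_mem, u32 idx,
                              u64 phys_addr, u32 kind, bool cacheable);
      };

      /* Example two-level walk: one PDE level followed by one PTE level. */
      static const struct mmu_level example_levels[] = {
          { .hi_bit = 37, .lo_bit = 26, .entry_size = 8 },
          { .hi_bit = 25, .lo_bit = 12, .entry_size = 8 },
          { 0 },                        /* sentinel */
      };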
* gpu: nvgpu: Unify PDE & PTE structs (Terje Bergstrom, 2015-04-04)

  Introduce a new struct gk20a_mm_entry. Allocate and store PDE and PTE
  arrays using the same structure, and pass a pointer to this struct
  between functions in the memory code whenever possible.

  Change-Id: Ia4a2a6abdac9ab7ba522dafbf73fc3a3d5355c5f
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/696414
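  A sketch of a unified container for PDE and PTE arrays like the one
  introduced here; the field list is illustrative, not the driver's exact
  definition:

      #include <linux/types.h>

      struct page;
      struct sg_table;

      struct gk20a_mm_entry {
          struct page *pages;           /* backing pages for the entry array */
          struct sg_table *sgt;
          size_t size;                  /* size of the array in bytes        */
          struct gk20a_mm_entry *entries; /* children: PTE arrays under a PDE,
                                             NULL for leaf (PTE) entries     */
          int num_entries;
      };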
* gpu: nvgpu: vgpu: fix AS split (Aingara Paramakuru, 2015-04-04)

  The GVA was increased to 128GB, but for vgpu the split was not updated
  to reflect the correct small- and large-page split (16GB for small
  pages, the rest for large pages).

  Bug 1606860

  Change-Id: Ieae056d6a6cfd2f2fc5066d33e1247d2a96a3616
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/681340
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: TLB invalidate after map/unmap (Terje Bergstrom, 2015-04-04)

  Always invalidate TLB after mapping or unmapping, and remove the delayed
  TLB invalidate.

  Change-Id: I6df3c5c1fcca59f0f9e3f911168cb2f913c42815
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/696413
  Reviewed-by: Automatic_Commit_Validation_User
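  A sketch of the behavior this describes, using hypothetical helpers
  (do_gmmu_map(), tlb_invalidate()); the point is only that the invalidate
  now happens immediately after the mapping change instead of being
  deferred:

      #include <linux/types.h>

      struct vm_gk20a;
      struct sg_table;

      u64 do_gmmu_map(struct vm_gk20a *vm, struct sg_table *sgt, u64 size);
      void tlb_invalidate(struct vm_gk20a *vm);

      static u64 vm_map_and_invalidate(struct vm_gk20a *vm,
                                       struct sg_table *sgt, u64 size)
      {
          u64 va = do_gmmu_map(vm, sgt, size);

          if (va)
              tlb_invalidate(vm);       /* no longer deferred */
          return va;
      }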
* gpu: nvgpu: Set compression page per SoC (Terje Bergstrom, 2015-04-04)

  Compression page size varies depending on architecture. Make it 129kB on
  gk20a and gm20b. Also export some common functions from gm20b.

  Bug 1592495

  Change-Id: Ifb1c5b15d25fa961dab097021080055fc385fecd
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/673790
* gpu: nvgpu: vgpu: handle fifo and gr exceptions (Aingara Paramakuru, 2015-04-04)

  Handle the gr and fifo exceptions delivered from the server and update
  the channel state as needed.

  Bug 1551865

  Change-Id: Ie19626c6e8a72f92ffd134983fe6d84e5c6c8736
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/670329
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: move debug dump to HAL (Aingara Paramakuru, 2015-04-04)

  Move the debug dump to HAL and add a stub for vgpu.

  Bug 1595164

  Change-Id: Ifdcdd8a8caca7a41919dad075fee1c87032f53b0
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/662722
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: fix comptag alloc failure (Aingara Paramakuru, 2015-04-04)

  setup_buffer_kind_and_compression() expects vm->big_page_size to be set,
  which was not done for the vgpu case.

  Bug 200064162

  Change-Id: I15af3600fda0161aad2185ec7a12b560044cc171
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/662721
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: remove devm_request_and_ioremap (Dan Willemsen, 2015-03-18)

  Signed-off-by: Dan Willemsen <dwillemsen@nvidia.com>
* Revert "gpu: nvgpu: Enable syncpt reclaim only on gm20b"Timo Alho2015-03-18
| | | | | | | | | | This reverts commit 8eefb93c21934b101d7f423c38d9ea384a45fad6. Bug 1585422 Change-Id: I217e0ffe6c230ee3c63d9aec1c48ce9c41770468 Signed-off-by: Timo Alho <talho@nvidia.com> Reviewed-on: http://git-master/r/659426
* gpu: nvgpu: Submit coverity fixes (Terje Bergstrom, 2015-03-18)

  Clear the ioctl buffer, and fix a double free and an error-case memory
  leak.

  Bug 200059216

  Change-Id: I21cc2b0f6a7e8fca09f72caf4c54d570b13f400b
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/655347
* gpu: nvgpu: Enable syncpt reclaim only on gm20b (Terje Bergstrom, 2015-03-18)

  gm20b has more channels than sync points. We use aggressive reclaim of
  sync points to offset that. Disable aggressive reclaim for gk20a because
  it is not needed there.

  Bug 1583849

  Change-Id: I2a74b0504150a54cb8a97016effe20c5d905ac95
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/657095
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
* gpu: nvgpu: Physical page bits to be per chip (Terje Bergstrom, 2015-03-18)

  Retrieve the number of physical page bits based on the chip.

  Bug 1567274

  Change-Id: I5a0f6a66be37f2cf720d66b5bdb2b704cd992234
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/601700
* gpu: nvgpu: Per-alloc alignment (Terje Bergstrom, 2015-03-18)

  Change-Id: I8b7e86afb68adf6dd33b05995d0978f42d57e7b7
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/554185
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Retrieve intr & reset id from HW (Terje Bergstrom, 2015-03-18)

  Query the interrupt number and reset id from HW, and use the number from
  HW when enabling and detecting interrupts.

  Bug 200036089
  Bug 1567274

  Change-Id: If9cb4db79a19dcb193ba7ad9db7081f4fe1ab433
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/600988
* gpu: nvgpu: vgpu: fix crash during init (Aingara Paramakuru, 2015-03-18)

  gops->gr.detect_sm_arch was not populated for vgpu. Also, populate some
  members of the PMU VM struct as they are used to report GPU
  characteristics to userspace.

  Bug 1576949

  Change-Id: I9ddc361d1418b942da97a82b553aac81f5f51182
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/601931
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add class numbers to characteristics (Terje Bergstrom, 2015-03-18)

  Some kernel APIs rely on user space knowing class numbers. Allow
  querying the numbers from the kernel.

  Bug 1567274

  Change-Id: Idec2fe8ee983ee74bcbf9dfc98f71bbcc1492cfb
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/594402
* gpu: nvgpu: vgpu: init vm->gmmu_page_sizes (Aingara Paramakuru, 2015-03-18)

  vm->gmmu_page_sizes was not initialized properly in the vgpu case,
  leading to gmmu map failures.

  Bug 1570878

  Change-Id: I16c371f65d884f59d9c9f60c7acd391b917d04ed
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
* gpu: nvgpu: vgpu: add PM domain support (Aingara Paramakuru, 2015-03-18)

  vgpu "poweron" and "poweroff" routines now get invoked using the PM
  domain callbacks, instead of the obsolete
  gk20a_get_client/gk20a_put_client routines.

  Bug 1570878

  Change-Id: I9a5254936904f75cb3c8a14c2bf5066f919b6588
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/590492
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* Revert "gpu: nvgpu: GR and LTC HAL to use const structs"Sam Payne2015-03-18
| | | | | | | | | | | This reverts commit 41b82e97164138f45fbdaef6ab6939d82ca9419e. Change-Id: Iabd01fcb124e0d22cd9be62151a6552cbb27fc94 Signed-off-by: Sam Payne <spayne@nvidia.com> Reviewed-on: http://git-master/r/592221 Tested-by: Hoang Pham <hopham@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Mitch Luban <mluban@nvidia.com>
* gpu: nvgpu: GR and LTC HAL to use const structs (Terje Bergstrom, 2015-03-18)

  Convert GR and LTC HALs to use const structs, and initialize them with
  macros.

  Bug 1567274

  Change-Id: Ia3f24a5eccb27578d9cba69755f636818d11275c
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/590371
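  A sketch of the const-struct-plus-macro pattern this refers to, with
  illustrative names rather than the driver's actual LTC HAL definition:

      struct gk20a;

      struct ltc_ops {
          int (*init_comptags)(struct gk20a *g);
          void (*flush)(struct gk20a *g);
      };

      int gm20b_ltc_init_comptags(struct gk20a *g);
      void gm20b_ltc_flush(struct gk20a *g);

      /* The macro expands to designated initializers for one chip's HAL. */
      #define GM20B_LTC_OPS                             \
          .init_comptags = gm20b_ltc_init_comptags,     \
          .flush = gm20b_ltc_flush

      static const struct ltc_ops gm20b_ltc_ops = {
          GM20B_LTC_OPS,
      };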
* gpu: nvgpu: allow building as a separate module (Konsta Holtta, 2015-03-18)

  Include object files of gk20a, gm20b and vgpu in the same composite
  object nvgpu.o in the top-level makefile, and remove the old makefiles.
  This helps in building the driver as a separate module.

  Bug 1476801

  Change-Id: I93531c0f1a20e46904a429e492f8ed32e4f0c4a1
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/557971
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: Fix vgpu mm code build break (Terje Bergstrom, 2015-03-18)

  Some fields were moved from global mm fields to vm-specific fields; fix
  vgpu's mm code to follow that. The zero page is never allocated in vgpu,
  so don't free it.

  Change-Id: Ieabb33f1f004c9ffeeceabf61029b5bafc391889
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/559818
  Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: vgpu: fix build break (Aingara Paramakuru, 2015-03-18)

  Switch struct definitions to use the nvgpu versions instead of the
  nvhost ones.

  Bug 1509608

  Change-Id: Id8c1b0c198536766f0399437bdf2c35c6a6bfe85
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/554027
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add no-op stubs for vgpu (Konsta Holtta, 2015-03-18)

  Implement empty or -ENOSYS functions for vgpu if
  CONFIG_TEGRA_GR_VIRTUALIZATION is not enabled, and remove ifdefs around
  the calling code.

  Change-Id: Idc75c9bc486d661786bc222bd9e0380aa7766e78
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/552898
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
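  A sketch of the stub pattern described here, for builds without
  CONFIG_TEGRA_GR_VIRTUALIZATION; the function name is illustrative:

      #include <linux/errno.h>

      struct platform_device;

      #ifdef CONFIG_TEGRA_GR_VIRTUALIZATION
      int vgpu_probe(struct platform_device *pdev);
      #else
      static inline int vgpu_probe(struct platform_device *pdev)
      {
          return -ENOSYS;               /* virtualization not built in */
      }
      #endif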
* gpu: nvgpu: vgpu: fix build break (Aingara Paramakuru, 2015-03-18)

  runlist_wq has been removed.

  Bug 1535380

  Change-Id: I830037232d6767993dc88a79f540f89239b0334d
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/552567
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: support gk20a virtualization (Aingara Paramakuru, 2015-03-18)

  The nvgpu driver now supports using the Tegra graphics virtualization
  interfaces to support gk20a in a virtualized environment.

  Bug 1509608

  Change-Id: I6ede15ee7bf0b0ad8a13e8eb5f557c3516ead676
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/440122
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
The nvgpu driver now supports using the Tegra graphics virtualization interfaces to support gk20a in a virtualized environment. Bug 1509608 Change-Id: I6ede15ee7bf0b0ad8a13e8eb5f557c3516ead676 Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com> Reviewed-on: http://git-master/r/440122 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>