path: root/drivers/gpu/nvgpu/vgpu
Commit message (Author, Date)
* gpu: nvgpu: vgpu: add regops support (Richard Zhao, 2016-01-10)

  Added new RM Server command for regops.

  JIRA VFND-1128
  Bug 1700139

  Change-Id: Ia1cc63e993c29c91f87440c241077fa91edb9e53
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/923235
  (cherry picked from commit 7de22e42cfd2e419ad64178b9f1f1ee16273bd03)
  Reviewed-on: http://git-master/r/841330
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
* gpu: nvgpu: vgpu: add SM exception support (Richard Zhao, 2016-01-10)

  When TEGRA_VGPU_GR_INTR_SM_EXCEPTION arrives, post a debugger event.

  Bug 1594604
  JIRA VFND-1120

  Change-Id: I7229c3994220a7c6f117d38a1af2e766187a47c6
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/923234
  (cherry picked from commit bdd414d9366133380a202d88b1a50038b70c068d)
  Reviewed-on: http://git-master/r/840646
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
* gpu: nvgpu: vgpu: add set sm debug mode support (Richard Zhao, 2016-01-10)

  JIRA VFND-1006
  Bug 1594604

  Change-Id: If6eb7ae22b5b0557faddd3d68deb791abb24bec4
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/923233
  (cherry picked from commit 9e14ca393c3044be702c50524a9ef3a2c3a6270c)
  Reviewed-on: http://git-master/r/841866
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
* gpu: nvgpu: preempt before adjusting fences (Deepak Nibade, 2015-12-10)

  The current sequence in gk20a_disable_channel() is:
  - disable channel in gk20a_channel_abort()
  - adjust pending fence in gk20a_channel_abort()
  - preempt channel

  But this leads to scenarios where the syncpoint has min > max.

  Hence, to fix this, change the sequence in gk20a_disable_channel() to:
  - disable channel in gk20a_channel_abort()
  - preempt channel in gk20a_channel_abort()
  - adjust pending fence in gk20a_channel_abort()

  If gk20a_channel_abort() is called from another API where preemption
  is not needed, use the channel_preempt flag and do not preempt the
  channel in those cases.

  Bug 1683059

  Change-Id: I4d46d4294cf8597ae5f05f79dfe1b95c4187f2e3
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/921290
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
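  A minimal C sketch of the reordered teardown path described above; the
  struct layout and helper names here are illustrative stand-ins, not the
  actual nvgpu code:

      #include <stdbool.h>
      #include <stdio.h>

      struct channel {
          int hw_chid;
          bool enabled;
      };

      /* Illustrative stand-ins for the real hardware operations. */
      static void hw_disable(struct channel *ch)
      {
          ch->enabled = false;
      }

      static void hw_preempt(struct channel *ch)
      {
          printf("preempt ch %d\n", ch->hw_chid);
      }

      static void adjust_pending_fences(struct channel *ch)
      {
          printf("fix up syncpt min/max for ch %d\n", ch->hw_chid);
      }

      /*
       * New order: disable, then preempt, then adjust fences, so the
       * syncpoint min cannot end up past max because of in-flight work.
       * Callers that do not need preemption pass channel_preempt = false.
       */
      static void channel_abort(struct channel *ch, bool channel_preempt)
      {
          hw_disable(ch);
          if (channel_preempt)
              hw_preempt(ch);
          adjust_pending_fences(ch);
      }

      int main(void)
      {
          struct channel ch = { .hw_chid = 3, .enabled = true };

          channel_abort(&ch, true);
          return 0;
      }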
* gpu: nvgpu: Immediate channel release (Terje Bergstrom, 2015-12-10)

  When closing channel, disable and preempt it immediately instead of
  waiting for it to finish all work.

  Bug 1683059

  Change-Id: Ia5f5fc6a072dc3ddb1e9bf63534814ff0a60b5b4
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/836746
* gpu: nvgpu: vgpu: add set mmu debug mode support (Richard Zhao, 2015-12-04)

  JIRA VFND-1005
  Bug 1594604

  Change-Id: Ic159a1aff9cee508194f1f5dff7a16eb0e47ad64
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/833498
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: set aggressive_sync_destroy at runtime (Deepak Nibade, 2015-11-23)

  We currently set the "aggressive_destroy" flag to destroy sync objects
  statically and for each sync object.

  Move this flag to the per-platform structure so that it can be set
  per-platform for all the sync objects.

  Also, set the default value of this flag to "false", and set it to
  "true" once we have more than 64 channels in use.

  Bug 200141116

  Change-Id: I1bc271df4f468a4087a06a27c7289ee0ec3ef29c
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/822041
  (cherry picked from commit 98741e7e88066648f4f14490c76b61dbff745103)
  Reviewed-on: http://git-master/r/835800
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
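  A minimal sketch of the runtime decision described above, with an
  illustrative platform structure and threshold field (the names here are
  assumptions, not the actual nvgpu definitions):

      #include <stdbool.h>
      #include <stdio.h>

      /* Illustrative per-platform data; the driver keeps such flags in
       * its platform structure so each SoC can tune them. */
      struct platform_data {
          bool aggressive_sync_destroy;       /* default: false */
          int aggressive_sync_destroy_thresh; /* e.g. 64 channels */
      };

      struct gpu {
          struct platform_data *platform;
          int channels_in_use;
      };

      /* Re-evaluated whenever a channel is opened or closed. */
      static void update_sync_destroy_policy(struct gpu *g)
      {
          struct platform_data *p = g->platform;

          /* Flip to aggressive destruction only once channel usage is
           * high, instead of deciding statically per sync object. */
          p->aggressive_sync_destroy =
              g->channels_in_use > p->aggressive_sync_destroy_thresh;
      }

      int main(void)
      {
          struct platform_data plat = { false, 64 };
          struct gpu g = { &plat, 65 };

          update_sync_destroy_policy(&g);
          printf("aggressive destroy: %d\n", plat.aggressive_sync_destroy);
          return 0;
      }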
* gpu: nvgpu: User-space managed address space support (Sami Kiminki, 2015-11-18)

  Implement NVGPU_GPU_IOCTL_ALLOC_AS_FLAGS_USERSPACE_MANAGED, which
  enables creating userspace-managed GPU address spaces.

  When an address space is marked as userspace-managed, the following
  changes are in effect:

  - Only fixed-address mappings are allowed.
  - VA space allocation for fixed-address mappings is not required,
    except to mark space as sparse.
  - Maps and unmaps are always immediate. In particular, the mapping
    ref increments at kickoffs and decrements at job completion are
    skipped.

  Bug 1614735
  Bug 1623949
  Bug 1660392

  Change-Id: I834fe19b3f65e9b02c268952383eddee0e465759
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: http://git-master/r/738558
  Reviewed-on: http://git-master/r/833253
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
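  A minimal sketch of how such a flag could gate mapping behaviour; the
  flag semantics mirror the commit message, but the structures and the
  vm_map() helper are illustrative assumptions:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      struct vm {
          bool userspace_managed;
      };

      /* In a userspace-managed address space only fixed-address maps are
       * accepted, and they take effect immediately (no kickoff-time ref
       * counting on the mapping). */
      static int vm_map(struct vm *vm, uint64_t fixed_offset, uint64_t size)
      {
          if (vm->userspace_managed && fixed_offset == 0)
              return -1; /* only fixed-address mappings are allowed */

          printf("map %llu bytes at 0x%llx\n",
                 (unsigned long long)size,
                 (unsigned long long)fixed_offset);
          return 0;
      }

      int main(void)
      {
          struct vm vm = { .userspace_managed = true };

          vm_map(&vm, 0x100000000ull, 1 << 20);      /* accepted */
          printf("non-fixed map rc=%d\n",
                 vm_map(&vm, 0, 1 << 20));           /* rejected */
          return 0;
      }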
* gpu: nvgpu: vgpu: add tsg initialization (Richard Zhao, 2015-11-13)

  This fixes a kernel dump when running the CUDA L0 test.

  Bug 1594604

  Change-Id: Ic986b34629052e915f4ccc5a5b6df198afaf2ff9
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/831391
  (cherry picked from commit 43d4ba4d6ffc6043e8425dc40967975afe3a95f1)
  Reviewed-on: http://git-master/r/832416
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
* gpu: nvgpu: vgpu: Alloc kernel address space (Terje Bergstrom, 2015-10-22)

  JIRA VFND-890

  Change-Id: I8eba041b663cead94f2cc3d75d6458d472f1a755
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/815378
  (cherry picked from commit 4b52329e955758ec4368abcb463ce4e3a2653237)
  Reviewed-on: http://git-master/r/820499
* gpu: nvgpu: vgpu: fix notification handling (Aingara Paramakuru, 2015-10-22)

  Take a channel ref when handling a notification from the server, to
  prevent the channel from being closed. Also, mark the channel as
  faulted before calling gk20a_channel_abort, to keep the semantics the
  same as the native driver.

  Bug 1653186

  Change-Id: I0cb8ce7bad22a4d508eade6ff63a412296a02fc9
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/811885
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/817021
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: re-factor gr ctx management (Aingara Paramakuru, 2015-10-22)

  Move the gr ctx management to the GPU HAL. Also, add support for a
  new interface to allocate gr ctxsw buffers.

  Bug 1677153

  Change-Id: I5a7980acf4de0de7dbd94b7dd20f91a6196dc989
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/806961
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/817009
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: T18x support (Aingara Paramakuru, 2015-09-29)

  Add vgpu framework and build for T18x.

  Bug 1677153
  JIRA VFND-693

  Change-Id: Icf9fd8e0b5769228aee59c54f9b000b992e5fcca
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/792559
  Reviewed-on: http://git-master/r/806178
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Separate kernel and user GPU VA regions (Sami Kiminki, 2015-09-07)

  Separate the kernel and userspace regions in the GPU virtual address
  space. Do this by reserving the last part of the GPU VA aperture for
  the kernel, and extending the GPU VA aperture accordingly for regular
  address spaces. This prevents the kernel from polluting the
  userspace-visible GPU VA regions and thus makes the success of
  fixed-address mapping more predictable.

  Bug 200077571

  Change-Id: I63f0e73d4c815a4a9fa4a9ce568709974690ef0f
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: http://git-master/r/747191
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
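  A small sketch of the kind of split described above; the sizes and the
  split_va() helper are illustrative assumptions, not the driver's actual
  layout:

      #include <stdint.h>
      #include <stdio.h>

      #define GB (1ULL << 30)

      struct va_layout {
          uint64_t user_start, user_size;
          uint64_t kernel_start, kernel_size;
      };

      /* Userspace gets the bottom of the aperture; the kernel region is
       * carved out of the very top, so user-visible VA stays clean and
       * fixed-address mappings are more likely to succeed. */
      static struct va_layout split_va(uint64_t base, uint64_t user_size,
                                       uint64_t kernel_size)
      {
          struct va_layout l;

          l.user_start   = base;
          l.user_size    = user_size;
          l.kernel_start = base + user_size;
          l.kernel_size  = kernel_size;
          return l;
      }

      int main(void)
      {
          struct va_layout l = split_va(0, 128 * GB, 4 * GB);

          printf("user:   [0x%llx, 0x%llx)\n",
                 (unsigned long long)l.user_start,
                 (unsigned long long)(l.user_start + l.user_size));
          printf("kernel: [0x%llx, 0x%llx)\n",
                 (unsigned long long)l.kernel_start,
                 (unsigned long long)(l.kernel_start + l.kernel_size));
          return 0;
      }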
* gpu: nvgpu: vgpu: add t210 gm20b support (Richard Zhao, 2015-08-19)

  - add hal initialization
  - create folders vgpu/gk20a and vgpu/gm20b for specific code

  Bug 1653185

  Change-Id: If94d45e22a1d73d2e4916673736cc29751be4e40
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/774148
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Ken Adams <kadams@nvidia.com>
* gpu: nvgpu: Implement priv pages (Terje Bergstrom, 2015-07-03)

  Implement support for privileged pages. Use them for kernel allocated
  buffers.

  Change-Id: I720fc441008077b8e2ed218a7a685b8aab2258f0
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/761919
* gpu: nvgpu: Initial MAP_BUFFER_BATCH implementation (Sami Kiminki, 2015-06-30)

  Add batch support for mapping and unmapping. Batching essentially
  helps transform some per-map/unmap overhead to per-batch overhead,
  namely gk20a_busy()/gk20a_idle() calls, GPU L2 flushes, and GPU TLB
  invalidates. Batching with size 64 has been measured to yield >20x
  speed-up in low-level fixed-address mapping microbenchmarks.

  Bug 1614735
  Bug 1623949

  Change-Id: Ie22b9caea5a7c3fc68a968d1b7f8488dfce72085
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: http://git-master/r/733231
  (cherry picked from commit de4a7cfb93e8228a4a0c6a2815755a8df4531c91)
  Reviewed-on: http://git-master/r/763812
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
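  A minimal sketch of the batching idea: defer the expensive per-map work
  and pay it once per batch. The structures and helpers are illustrative
  stand-ins; only gk20a_busy()/gk20a_idle() are names taken from the
  commit message:

      #include <stdbool.h>
      #include <stdio.h>

      /* Remembers which expensive operations are still owed. */
      struct mapping_batch {
          bool need_l2_flush;
          bool need_tlb_invalidate;
      };

      static void gpu_busy(void)       { puts("gk20a_busy()"); }
      static void gpu_idle(void)       { puts("gk20a_idle()"); }
      static void l2_flush(void)       { puts("L2 flush"); }
      static void tlb_invalidate(void) { puts("TLB invalidate"); }

      static void map_one(struct mapping_batch *batch, int buf)
      {
          printf("map buffer %d\n", buf);
          if (batch) {
              /* Batched: just note that the flush/invalidate is owed. */
              batch->need_l2_flush = true;
              batch->need_tlb_invalidate = true;
          } else {
              /* Unbatched: pay the per-map overhead immediately. */
              l2_flush();
              tlb_invalidate();
          }
      }

      int main(void)
      {
          struct mapping_batch batch = { false, false };

          gpu_busy();                /* one busy/idle pair per batch */
          for (int i = 0; i < 64; i++)
              map_one(&batch, i);
          if (batch.need_l2_flush)
              l2_flush();            /* once for the whole batch */
          if (batch.need_tlb_invalidate)
              tlb_invalidate();
          gpu_idle();
          return 0;
      }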
* gpu: nvgpu: vgpu: support additional notifications (Aingara Paramakuru, 2015-06-22)

  Client notification support is now added for the following:
  - stalling and non-stalling GR sema release
  - non-stalling FIFO channel intr
  - non-stalling CE2 nonblockpipe intr

  Bug 200097077

  Change-Id: Icd3c076d7880e1c9ef1fcc0fc58eed9f23f39277
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/736064
  (cherry picked from commit 0585d1f14d5a5ae1ccde8ccb7b7daa5593b3d1bc)
  Reviewed-on: http://git-master/r/759824
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add per-channel refcounting (Konsta Holtta, 2015-06-09)

  Add reference counting for channels, and wait for the reference count
  to reach 0 in gk20a_channel_free() before actually freeing the
  channel. Also, change free channel tracking a bit by employing a list
  of free channels, which simplifies the procedure of finding available
  channels with reference counting.

  Each use of a channel must have a reference taken before use or held
  by the caller. Taking a reference of a wild channel pointer may fail,
  if the channel is either not opened or in the process of being
  closed. Also, add safeguards for protecting against accidental use of
  closed channels, specifically, by setting ch->g = NULL in channel
  free. This will make it obvious if a freed channel is used.

  The last user of a channel might be the deferred interrupt handler,
  so wait for deferred interrupts to be processed twice in the channel
  free procedure: once for providing last notifications to the channel
  and once to make sure there are no stale pointers left after
  referencing the channel has been denied.

  Finally, fix some races in the channel and TSG force reset IOCTL
  path, by pausing the channel scheduler in gk20a_fifo_recover_ch() and
  gk20a_fifo_recover_tsg(), while the affected engines have been
  identified, the appropriate MMU faults triggered, and the MMU faults
  handled. In this case, make sure that the MMU fault does not attempt
  to query the hardware about the failing channel or TSG ids. This
  should make channel recovery safer also in the regular (i.e., not in
  the interrupt handler) context.

  Bug 1530226
  Bug 1597493
  Bug 1625901
  Bug 200076344
  Bug 200071810

  Change-Id: Ib274876908e18219c64ea41e50ca443df81d957b
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: http://git-master/r/448463
  (cherry picked from commit 3f03aeae64ef2af4829e06f5f63062e8ebd21353)
  Reviewed-on: http://git-master/r/755147
  Reviewed-by: Automatic_Commit_Validation_User
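  A minimal sketch of the get/put pattern described above; the fields and
  helpers are simplified assumptions (the real driver uses atomics and
  waits for the count to reach zero before freeing):

      #include <stdbool.h>
      #include <stdio.h>

      struct gpu; /* opaque here */

      struct channel {
          struct gpu *g;  /* NULL once the channel has been freed */
          int refcount;
          bool closing;
      };

      /* Taking a ref on a "wild" channel pointer may fail if the channel
       * is not open or is being closed. */
      static struct channel *channel_get(struct channel *ch)
      {
          if (!ch->g || ch->closing)
              return NULL;
          ch->refcount++;
          return ch;
      }

      static void channel_put(struct channel *ch)
      {
          if (--ch->refcount == 0 && ch->closing)
              ch->g = NULL;  /* make any stale use fail loudly */
      }

      int main(void)
      {
          struct gpu *g = (struct gpu *)0x1; /* placeholder */
          struct channel ch = { g, 0, false };
          struct channel *ref = channel_get(&ch);

          if (ref) {
              puts("got channel ref, safe to use");
              channel_put(ref);
          }

          ch.closing = true;
          puts(channel_get(&ch) ? "unexpected" : "ref denied while closing");
          return 0;
      }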
* gpu: nvgpu: add zbc support to vgpu (Richard Zhao, 2015-06-06)

  For both adding and querying a zbc entry, add callbacks in gr ops.
  The native GPU driver (gk20a) and vgpu both hook into these
  callbacks. For vgpu, zbc entries are added to or queried from the RM
  server.

  Bug 1558561

  Change-Id: If8a4850ecfbff41d8592664f5f93ad8c25f6fbce
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/732775
  (cherry picked from commit a3787cf971128904c2712338087685b02673065d)
  Reviewed-on: http://git-master/r/737880
  (cherry picked from commit fca2a0457c968656dc29455608f35acab094d816)
  Reviewed-on: http://git-master/r/753278
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* Revert "Revert "Revert "Revert "gpu: nvgpu: New allocator for VA space""""Bharat Nihalani2015-06-04
| | | | | | | | | | | | | | | This reverts commit 2e5803d0f2b7d7a1577a40f45ab9f3b22ef2df80 since the issue seen with bug 200106514 is fixed with change http://git-master/r/#/c/752080/. Bug 200112195 Change-Id: I588151c2a7ea74bd89dc3fd48bb81ff2c49f5a0a Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com> Reviewed-on: http://git-master/r/752503 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* Revert "Revert "Revert "gpu: nvgpu: New allocator for VA space"""Bharat Nihalani2015-06-02
| | | | | | | | | | | This reverts commit ce1cf06b9a8eb6314ba0ca294e8cb430e1e141c0 since it causes GPU pbdma interrupt to be generated. Bug 200106514 Change-Id: If3ed9a914c4e3e7f3f98c6609c6dbf57e1eb9aad Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com> Reviewed-on: http://git-master/r/749291
* gpu:nvgpu: update channel_setup_ramfc interface (Seshendra Gadagottu, 2015-06-02)

  Pass flags parameter to channel_setup_ramfc for indicating
  nvgpu_alloc_gpfifo_args characteristics.

  Bug 1645628

  Change-Id: Ia40b37c5c7b208d459aa84f1b022036dd5e1b599
  Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: http://git-master/r/744526
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* Revert "Revert "gpu: nvgpu: New allocator for VA space""Alex Waterman2015-05-19
| | | | | | | | | | | | | | This reverts commit 7eb42bc239dbd207208ff491c3fb65c3d83274d8. The original commit was actually fine. Change-Id: I564ce6530ac73fcfad17dcec9c53f0353b4f02d4 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: http://git-master/r/743300 (cherry picked from commit e99aa2485f8992eabe3556f3ebcb57bdc8ad91ff) Reviewed-on: http://git-master/r/743301 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* Revert "gpu: nvgpu: New allocator for VA space"Terje Bergstrom2015-05-12
| | | | | | | | | | | This reverts commit 2e235ac150fa4af8632c9abf0f109a10973a0bf5. Change-Id: I3aa745152124c2bc09c6c6dc5aeb1084ae7e08a4 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/741469 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Hiroshi Doyu <hdoyu@nvidia.com> Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
* gpu: nvgpu: New allocator for VA space (Alex Waterman, 2015-05-11)

  Implement a new buddy allocation scheme for the GPU's VA space. The
  bitmap allocator was using too much memory and is not a scalable
  solution as the GPU's address space keeps getting bigger. The buddy
  allocation scheme is much more memory efficient when the majority of
  the address space is not allocated.

  The buddy allocator is not constrained by the notion of a split
  address space. The bitmap allocator could only manage either small
  pages or large pages but not both at the same time. Thus the bottom
  of the address space was for small pages, the top for large pages.
  Although that split is not removed quite yet, the new allocator
  enables that to happen.

  The buddy allocator is also very scalable. It manages the relatively
  small comptag space to the enormous GPU VA space and everything in
  between. This is important since the GPU has lots of different sized
  spaces that need managing.

  Currently there are certain limitations. For one, the allocator does
  not handle fixed allocations from CUDA very well. It can do so but
  with certain caveats. The PTE page size is always set to small. This
  means the buddy allocator may place other small page allocations in
  the buddies around the fixed allocation. It does this to avoid having
  large and small page allocations in the same PDE.

  Change-Id: I501cd15af03611536490137331d43761c402c7f9
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/740694
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
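  A toy illustration of the buddy idea (blocks are power-of-two multiples
  of a minimum block size, and a block's buddy differs from it by exactly
  one address bit); this is not the nvgpu allocator itself, and the
  minimum block size is an assumption:

      #include <stdint.h>
      #include <stdio.h>

      #define MIN_BLOCK_SHIFT 16 /* 64 KiB minimum block (example) */

      /* Smallest power-of-two order whose block covers the request. */
      static unsigned int order_for_size(uint64_t size)
      {
          unsigned int order = 0;
          uint64_t block = 1ULL << MIN_BLOCK_SHIFT;

          while (block < size) {
              block <<= 1;
              order++;
          }
          return order;
      }

      /* For an order-aligned offset, the buddy is found by flipping the
       * bit that distinguishes the two halves of the parent block. */
      static uint64_t buddy_of(uint64_t offset, unsigned int order)
      {
          return offset ^ (1ULL << (MIN_BLOCK_SHIFT + order));
      }

      int main(void)
      {
          uint64_t size = 3 * 1024 * 1024;           /* 3 MiB request */
          unsigned int order = order_for_size(size); /* -> 4 MiB block */
          uint64_t offset = 0x40000000ULL;           /* aligned placement */

          printf("order %u block, buddy at 0x%llx\n",
                 order, (unsigned long long)buddy_of(offset, order));
          return 0;
      }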
* gpu: nvgpu: Use common allocator for patch (Terje Bergstrom, 2015-05-05)

  Reduce amount of duplicate code around memory allocation by using
  common helpers, and common data structure for storing results of
  allocations.

  Bug 1605769

  Change-Id: Idf51831e8be9cabe1ab9122b18317137fde6339f
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/721030
  Reviewed-on: http://git-master/r/737530
  Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com>
  Tested-by: Alexander Van Brunt <avanbrunt@nvidia.com>
* gpu: nvgpu: Use common allocator for context (Terje Bergstrom, 2015-04-04)

  Reduce amount of duplicate code around memory allocation by using
  common helpers, and common data structure for storing results of
  allocations.

  Bug 1605769

  Change-Id: I10c226e2377aa867a5cf11be61d08a9d67206b1d
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/720507
* gpu: nvgpu: add platform specific get_iova_addr() (Deepak Nibade, 2015-04-04)

  Add a platform specific API pointer (*get_iova_addr)() which can be
  used to get the iova/physical address from a given scatterlist and
  flags.

  Use this API as g->ops.mm.get_iova_addr() instead of calling
  gk20a_mm_iova_addr(), which makes it platform specific.

  Bug 1605653

  Change-Id: I798763db1501bd0b16e84daab68f6093a83caac2
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/713089
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
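  A minimal sketch of the ops-pointer indirection described above; the
  scatterlist and ops structures are simplified assumptions, with only
  the g->ops.mm.get_iova_addr() call shape taken from the commit message:

      #include <stdint.h>
      #include <stdio.h>

      struct scatterlist { uint64_t dma_address; };

      struct mm_ops {
          uint64_t (*get_iova_addr)(struct scatterlist *sgl, uint32_t flags);
      };

      struct gpu_ops { struct mm_ops mm; };
      struct gpu     { struct gpu_ops ops; };

      /* One possible per-platform implementation. */
      static uint64_t plain_iova(struct scatterlist *sgl, uint32_t flags)
      {
          (void)flags;
          return sgl->dma_address;
      }

      int main(void)
      {
          struct gpu g = { .ops.mm.get_iova_addr = plain_iova };
          struct scatterlist sgl = { .dma_address = 0x80000000ULL };

          /* Callers go through the hook instead of a hard-coded helper,
           * so each platform can supply its own address translation. */
          printf("iova = 0x%llx\n",
                 (unsigned long long)g.ops.mm.get_iova_addr(&sgl, 0));
          return 0;
      }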
* gpu: nvgpu: vgpu: add new attributes (Aingara Paramakuru, 2015-04-04)

  Add support for reading num FBPs and FBP enable mask.

  Bug 1621056

  Change-Id: I92ec1123373308ed280d4ffd30fe77ae6073ac45
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/715826
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Implement common allocator and mem_desc (Terje Bergstrom, 2015-04-04)

  Introduce mem_desc, which holds all information needed for a buffer.
  Implement helper functions for allocation and freeing that use this
  data type.

  Change-Id: I82c88595d058d4fb8c5c5fbf19d13269e48e422f
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/712699
* gpu: nvgpu: Add DT support for gpu power-domain (Sumit Singh, 2015-04-04)

  First, define a new structure to support the gk20a power domain. Then
  make the necessary modifications to add DT support for the gpu
  power-domain.

  Bug 200070810

  Change-Id: I29e1c24b181e14743d3969103abfd1882d171f07
  Signed-off-by: Sumit Singh <sumsingh@nvidia.com>
  Reviewed-on: http://git-master/r/668973
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
* gpu: nvgpu: vgpu: remove explicit TLB invalidate (Aingara Paramakuru, 2015-04-04)

  The server does an implicit TLB invalidate after map and unmap
  operations.

  Bug 1616964

  Change-Id: Ib6f4a23389f1e5d796d0f4b0be312f438c52927c
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/713221
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: implement to initialize gr->gpc_tpc_mask (Haley Teng, 2015-04-04)

  Bug 1509609

  Change-Id: Ia78bd49518b41bc9f59e3d47a1390b126c7a2230
  Signed-off-by: Haley Teng <hteng@nvidia.com>
  Reviewed-on: http://git-master/r/706861
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Jubeom Kim <jubeomk@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Refactor page mapping code (Terje Bergstrom, 2015-04-04)

  Always pass the directory structure to mm functions instead of
  pointers to its members. Also split update_gmmu_ptes_locked() into
  smaller functions, and turn the hard-coded MMU levels (PDE, PTE) into
  run-time parameters.

  Change-Id: I315ef7aebbea1e61156705361f2e2a63b5fb7bf1
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/672485
  Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: Unify PDE & PTE structs (Terje Bergstrom, 2015-04-04)

  Introduce a new struct gk20a_mm_entry. Allocate and store PDE and PTE
  arrays using the same structure. Always pass pointer to this struct
  when possible between functions in memory code.

  Change-Id: Ia4a2a6abdac9ab7ba522dafbf73fc3a3d5355c5f
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/696414
* gpu: nvgpu: vgpu: fix AS split (Aingara Paramakuru, 2015-04-04)

  The GVA was increased to 128GB but for vgpu, the split was not
  updated to reflect the correct small and large page split (16GB for
  small pages, rest for large pages).

  Bug 1606860

  Change-Id: Ieae056d6a6cfd2f2fc5066d33e1247d2a96a3616
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/681340
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: TLB invalidate after map/unmap (Terje Bergstrom, 2015-04-04)

  Always invalidate TLB after mapping or unmapping, and remove the
  delayed TLB invalidate.

  Change-Id: I6df3c5c1fcca59f0f9e3f911168cb2f913c42815
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/696413
  Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: Set compression page per SoC (Terje Bergstrom, 2015-04-04)

  Compression page size varies depending on architecture. Make it 128kB
  on gk20a and gm20b. Also export some common functions from gm20b.

  Bug 1592495

  Change-Id: Ifb1c5b15d25fa961dab097021080055fc385fecd
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/673790
* gpu: nvgpu: vgpu: handle fifo and gr exceptions (Aingara Paramakuru, 2015-04-04)

  Handle the gr and fifo exceptions delivered from the server and
  update the channel state as needed.

  Bug 1551865

  Change-Id: Ie19626c6e8a72f92ffd134983fe6d84e5c6c8736
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/670329
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: move debug dump to HAL (Aingara Paramakuru, 2015-04-04)

  Move the debug dump to HAL and add a stub for vgpu.

  Bug 1595164

  Change-Id: Ifdcdd8a8caca7a41919dad075fee1c87032f53b0
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/662722
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: fix comptag alloc failure (Aingara Paramakuru, 2015-04-04)

  setup_buffer_kind_and_compression() expects vm->big_page_size to be
  set, which was not done for the vgpu case.

  Bug 200064162

  Change-Id: I15af3600fda0161aad2185ec7a12b560044cc171
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/662721
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: remove devm_request_and_ioremap (Dan Willemsen, 2015-03-18)

  Signed-off-by: Dan Willemsen <dwillemsen@nvidia.com>
* Revert "gpu: nvgpu: Enable syncpt reclaim only on gm20b"Timo Alho2015-03-18
| | | | | | | | | | This reverts commit 8eefb93c21934b101d7f423c38d9ea384a45fad6. Bug 1585422 Change-Id: I217e0ffe6c230ee3c63d9aec1c48ce9c41770468 Signed-off-by: Timo Alho <talho@nvidia.com> Reviewed-on: http://git-master/r/659426
* gpu: nvgpu: Submit coverity fixes (Terje Bergstrom, 2015-03-18)

  Clear the ioctl buffer, fix a double free, and fix an error-case
  memory leak.

  Bug 200059216

  Change-Id: I21cc2b0f6a7e8fca09f72caf4c54d570b13f400b
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/655347
* gpu: nvgpu: Enable syncpt reclaim only on gm20b (Terje Bergstrom, 2015-03-18)

  gm20b has more channels than sync points. We use aggressive reclaim
  of sync points to offset that. Disable aggressive reclaim for gk20a
  because it is not needed there.

  Bug 1583849

  Change-Id: I2a74b0504150a54cb8a97016effe20c5d905ac95
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/657095
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
* gpu: nvgpu: Physical page bits to be per chip (Terje Bergstrom, 2015-03-18)

  Retrieve number of physical page bits based on chip.

  Bug 1567274

  Change-Id: I5a0f6a66be37f2cf720d66b5bdb2b704cd992234
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/601700
* gpu: nvgpu: Per-alloc alignment (Terje Bergstrom, 2015-03-18)

  Change-Id: I8b7e86afb68adf6dd33b05995d0978f42d57e7b7
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/554185
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Retrieve intr & reset id from HW (Terje Bergstrom, 2015-03-18)

  Query interrupt number and reset id from HW. Use the number from HW
  when enabling and detecting interrupts.

  Bug 200036089
  Bug 1567274

  Change-Id: If9cb4db79a19dcb193ba7ad9db7081f4fe1ab433
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/600988
* gpu: nvgpu: vgpu: fix crash during init (Aingara Paramakuru, 2015-03-18)

  gops->gr.detect_sm_arch was not populated for vgpu. Also, populate
  some members of the PMU VM struct as they are used to report GPU
  characteristics to userspace.

  Bug 1576949

  Change-Id: I9ddc361d1418b942da97a82b553aac81f5f51182
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/601931
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>