path: root/drivers/gpu/nvgpu/gk20a
* gpu: nvgpu: gk20a: Use NULL instead of integer '0'  [Amit Sharma (SW-TEGRA), 2015-06-19]
    Fixed the following sparse warning by using the proper 'NULL' instead of '0':
      - fifo_gk20a.c: warning: Using plain integer as NULL pointer
    Bug 200067946
    Bug 200088648
    Change-Id: I316b119e87b7203450ce85232398b43384ee16cc
    Signed-off-by: Amit Sharma (SW-TEGRA) <amisharma@nvidia.com>
    Reviewed-on: http://git-master/r/755348
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: Sachin Nikam <snikam@nvidia.com>
    Reviewed-on: http://git-master/r/757050
* gpu: nvgpu: remove excessive allow_all checks  [Leonid Moiseichuk, 2015-06-12]
    The allow_all checks are not required for mode-E snapshot buffer operations.
    Bug 1573150
    Change-Id: I570e70d7ae94b8c9bf2d3e55996442bfe5f71410
    Signed-off-by: Leonid Moiseichuk <lmoiseichuk@nvidia.com>
    Reviewed-on: http://git-master/r/754413
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/755494
* gpu: nvgpu: Check for split_order > max_order  [Alex Waterman, 2015-06-11]
    When choosing an order of buddy to start splitting from (happens when no
    buddies of the requested alloc order exist), don't sit in the while loop
    past max_order. This makes no sense and hangs the system.
    Bug 1647902
    Change-Id: I6900597d24944d3170bc76cd75f33794b07707d1
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/756591
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
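    For illustration, a minimal C sketch of the loop bound described above; the
    struct and function names (buddy_allocator, balloc_find_split_order) are ours,
    not the actual nvgpu allocator API:

        #include <linux/errno.h>
        #include <linux/list.h>

        #define BALLOC_MAX_ORDER 31

        struct buddy_allocator {                        /* illustrative only */
            unsigned int max_order;
            struct list_head buddy_list[BALLOC_MAX_ORDER + 1];
        };

        /* Walk up from the requested order looking for a non-empty buddy list,
         * but never past max_order; looping past it is what hung the system. */
        static int balloc_find_split_order(struct buddy_allocator *a,
                                           unsigned int order)
        {
            unsigned int split_order = order;

            while (split_order <= a->max_order &&
                   list_empty(&a->buddy_list[split_order]))
                split_order++;

            if (split_order > a->max_order)
                return -ENOMEM;

            return split_order;
        }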
* gpu: nvgpu: Swap order of free/rb_erase  [Alex Waterman, 2015-06-11]
    If rb_erase() is called after __balloc_do_free_fixed() then the rb_tree code
    crashes when trying to dereference the possibly changed (or poisoned in the
    case of debugging) data in the rb_node.
    Change-Id: I4a4456a5ec453fd9ab117c804dc19b2c048a61d4
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/755646
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
    Reviewed-by: Ian Stewart <istewart@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/755816
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
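    A minimal sketch of the ordering fix described above, with an illustrative
    struct in place of the real allocator node type:

        #include <linux/rbtree.h>
        #include <linux/slab.h>

        struct fixed_alloc {                    /* illustrative only */
            struct rb_node node;
            unsigned long base;
        };

        /* Erase from the tree first, then free: rb_erase() still reads and
         * rewrites the rb_node, so freeing (and possibly poisoning) the
         * containing object first leads to a crash. */
        static void remove_fixed_alloc(struct rb_root *root, struct fixed_alloc *fa)
        {
            rb_erase(&fa->node, root);
            kfree(fa);
        }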
* gpu: nvgpu: Remove simulation WAR  [Alex Waterman, 2015-06-11]
    The WAR put into simulation to avoid a simulator crash can now be removed
    (c85be1a0968de813fe9b99ebd5c261dcb0ca8875). The first issue with the failing
    test was found to be GPFIFO entries that were not valid. Other issues are
    still present with the test and are fixed in a later commit.
    Change-Id: I7d3def2e384eede82cfc82b961f09ca23b239d30
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/753378
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/755815
    Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: WAR for bad GPFIFO entries from userspace  [Alex Waterman, 2015-06-11]
    Userspace sometimes sends GPFIFO entries with a zero length. This is a
    problem when the address of the zero-length pushbuffer is larger than 32
    bits: the high bits are interpreted as an opcode and either trigger an
    operation that should not happen or are treated as invalid. Oddly, this WAR
    is only necessary on simulation.
    Change-Id: I8be007c50f46d3e35c6a0e8512be88a8e68ee739
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/753379
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/755814
    Reviewed-by: Automatic_Commit_Validation_User
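    A sketch of the kind of filtering such a WAR implies; the length-field macro
    and the helper are illustrative assumptions (the real driver uses its
    generated pbdma_gp_entry1_*() accessors for the actual bit layout):

        #include <linux/types.h>

        struct nvgpu_gpfifo { u32 entry0, entry1; };    /* illustrative layout */

        /* Assumed field position, for illustration only. */
        #define GP_ENTRY1_LENGTH(e1)    (((e1) >> 10) & 0x1fffff)

        /* Drop zero-length entries from userspace so that high pushbuffer
         * address bits are never decoded as an opcode. */
        static u32 filter_gpfifo_entries(struct nvgpu_gpfifo *dst,
                                         const struct nvgpu_gpfifo *src, u32 num)
        {
            u32 i, out = 0;

            for (i = 0; i < num; i++) {
                if (GP_ENTRY1_LENGTH(src[i].entry1) == 0)
                    continue;
                dst[out++] = src[i];
            }
            return out;
        }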
* gpu:nvgpu: correct name for unmapped ptes flags  [Seshendra Gadagottu, 2015-06-10]
    Bug 1587825
    Change-Id: I66f2988b7f1884b53bb8f3cd09ad1ead1652ffda
    Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: http://git-master/r/751484
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Do not pre-allocate cmdbuf entries  [Terje Bergstrom, 2015-06-09]
    Do not preallocate cmdbuf tracking entries. Allocate them only when needed.
    Bug 200104160
    Change-Id: I12f8392723c301a368af1e280893ff993480477f
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/743953
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/755148
* gpu: nvgpu: add per-channel refcounting  [Konsta Holtta, 2015-06-09]
    Add reference counting for channels, and wait for the reference count to
    reach 0 in gk20a_channel_free() before actually freeing the channel.
    Also, change free channel tracking a bit by employing a list of free
    channels, which simplifies the procedure of finding available channels with
    reference counting.
    Each use of a channel must have a reference taken before use or held by the
    caller. Taking a reference of a wild channel pointer may fail if the channel
    is either not open or in the process of being closed. Also, add safeguards
    to protect against accidental use of closed channels, specifically by
    setting ch->g = NULL in channel free. This makes it obvious if a freed
    channel is attempted to be used.
    The last user of a channel might be the deferred interrupt handler, so wait
    for deferred interrupts to be processed twice in the channel free procedure:
    once to provide last notifications to the channel and once to make sure
    there are no stale pointers left after new references to the channel have
    been denied.
    Finally, fix some races in the channel and TSG force reset IOCTL path by
    pausing the channel scheduler in gk20a_fifo_recover_ch() and
    gk20a_fifo_recover_tsg() while the affected engines are identified, the
    appropriate MMU faults triggered, and the MMU faults handled. In this case,
    make sure that the MMU fault handling does not attempt to query the hardware
    about the failing channel or TSG ids. This should make channel recovery
    safer also in the regular (i.e., not in the interrupt handler) context.
    Bug 1530226
    Bug 1597493
    Bug 1625901
    Bug 200076344
    Bug 200071810
    Change-Id: Ib274876908e18219c64ea41e50ca443df81d957b
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: http://git-master/r/448463
    (cherry picked from commit 3f03aeae64ef2af4829e06f5f63062e8ebd21353)
    Reviewed-on: http://git-master/r/755147
    Reviewed-by: Automatic_Commit_Validation_User
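    A condensed sketch of the get/put pattern this commit describes; the struct
    and helpers here are illustrative, not the actual channel_gk20a API:

        #include <linux/kref.h>
        #include <linux/spinlock.h>
        #include <linux/types.h>

        struct channel_sketch {                 /* illustrative only */
            struct kref ref;
            bool referenceable;                 /* cleared when close starts */
            spinlock_t ref_lock;
        };

        /* Taking a reference on a "wild" channel pointer may fail if the
         * channel is not open or is in the process of being closed. */
        static bool channel_get(struct channel_sketch *ch)
        {
            bool ok;

            spin_lock(&ch->ref_lock);
            ok = ch->referenceable;
            if (ok)
                kref_get(&ch->ref);
            spin_unlock(&ch->ref_lock);
            return ok;
        }

        static void channel_release(struct kref *ref)
        {
            /* Last reference gone: now it is safe to actually free. */
        }

        static void channel_put(struct channel_sketch *ch)
        {
            kref_put(&ch->ref, channel_release);
        }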
* gpu: nvgpu: use 64K hole for PMU VM  [Vijayakumar, 2015-06-06]
    With a 4K hole, the T186 PMU does not boot in NS mode. T186 has a 64-bit DMA
    base; we subtract the IMEM offset from the GPUVA for the PMU boot DMABASE
    setup, and the result goes above 4GB because of that. So use a hole which is
    bigger than the IMEM size.
    Change-Id: Ib87c39881299a4f5b14e28415195e00800250c46
    Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
    Reviewed-on: http://git-master/r/740656
    (cherry picked from commit 6504934d5f90719a5d564174aeb92da90aafbd5b)
    Reviewed-on: http://git-master/r/747742
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Fix build problem by including slab.h  [Terje Bergstrom, 2015-06-06]
    Change-Id: I2346ed48bf82c032194264bab999097bf86404e2
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/745885
    (cherry picked from commit aba5f1ce08c2813c8bd570f36a5d15aa2a1aeaf8)
    Reviewed-on: http://git-master/r/753286
    Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: restore 50ms unmap wait for fixed bufs  [Konsta Holtta, 2015-06-06]
    Increase the sync-unmap wait time from 5 ms to 50 ms. Commit
    6ccac11b4dd1a4eaf9c914fd567cdf7922184e28 decreased the wait tenfold, so this
    puts it back.
    Bug 1650025
    Bug 200078514
    Change-Id: I53a4ea115536ca2ff5d6aa701547c7477ac6e4ea
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/748224
    (cherry picked from commit 7c22a24817f0880941e6f4343059fa303ec9eff5)
    Reviewed-on: http://git-master/r/753285
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Infinite wait for fixed alloc unmap  [Terje Bergstrom, 2015-06-06]
    In non-silicon, wait infinitely for all jobs to complete before unmapping a
    fixed allocation.
    Bug 200078514
    Change-Id: I9196afb1d3c5f0c999113a4a17ada2989ac55707
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/744067
    (cherry picked from commit 6ccac11b4dd1a4eaf9c914fd567cdf7922184e28)
    Reviewed-on: http://git-master/r/753284
    Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: Disable channel when updating SMPC WAR  [Terje Bergstrom, 2015-06-06]
    When updating the SMPC WAR for a channel, the channel needs to be kicked
    out. This ensures that the updated information is re-read from the context
    header.
    Bug 1579548
    Change-Id: Ia65bdb638cec7125021a8e60c365b83085efe0d4
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/741322
    Reviewed-on: http://git-master/r/743859
    (cherry picked from commit dd6cd54b41d63ae94d066b8d98a40c6f6a2196e5)
    Reviewed-on: http://git-master/r/753283
    Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: Correction in allow_all flag usage  [sujeet baranwal, 2015-06-06]
    The allow_all flag is used to avoid any kind of register offset being
    validated when called through regops, but the current implementation was
    flawed: it printed error messages and set the status of each operation to
    invalid even when allow_all was set.
    Change-Id: Ie5a70a3cdc2368715731cf1c9cd771fdcf6b0d57
    Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
    Reviewed-on: http://git-master/r/723830
    (cherry picked from commit 4483ef020ff5f0fabc83b1226376b02d42bd1d75)
    Reviewed-on: http://git-master/r/753282
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Fix compbits mapping  [Sami Kiminki, 2015-06-06]
    Commit e99aa2485f8992eabe3556f3ebcb57bdc8ad91ff broke compbits mapping. So,
    let's fix it.
    Bug 200077571
    Change-Id: I02dc150fbcb4cd59660f510adde9f029290efdfb
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: http://git-master/r/745001
    (cherry picked from commit 86fc7ec9a05999bea8de320840b962db3ee11410)
    Reviewed-on: http://git-master/r/753281
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Fix OOM case for buddy allocator  [Alex Waterman, 2015-06-06]
    The allocator could attempt to use a buddy list for an order larger than the
    max order when all the valid buddy lists were empty. This patch ensures
    that, when looking for a buddy of a particular order, the passed order is
    valid.
    Also handle the prints in the no-mem case a little differently: only print
    and update alloc info when there was a successful allocation.
    Lastly, print hex numbers in the allocator stats printing function.
    Change-Id: If289f3e8925e236e3b7d84206a75bd45a14082a1
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/745071
    (cherry picked from commit f3548e67f435975238b55ac152871dcd60a1a907)
    Reviewed-on: http://git-master/r/753280
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: sysfs queries  [Anders Kugler, 2015-06-06]
    Export the GPU's safe fmax_at_vmin frequency so it can be queried from
    userspace (e.g. PHS).
    Bug 1566108
    Change-Id: I47326588ebd443f189a6051edbf95b35b35636d1
    Signed-off-by: Anders Kugler <akugler@nvidia.com>
    Reviewed-on: http://git-master/r/743501
    (cherry picked from commit a977495878a486ca45c7de969582fd9ea949b0f0)
    Reviewed-on: http://git-master/r/753279
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add zbc support to vgpu  [Richard Zhao, 2015-06-06]
    For both adding and querying zbc entries, add callbacks in gr ops. The
    native GPU driver (gk20a) and vgpu both hook in there. For vgpu, zbc entries
    are added to or queried from the RM server.
    Bug 1558561
    Change-Id: If8a4850ecfbff41d8592664f5f93ad8c25f6fbce
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: http://git-master/r/732775
    (cherry picked from commit a3787cf971128904c2712338087685b02673065d)
    Reviewed-on: http://git-master/r/737880
    (cherry picked from commit fca2a0457c968656dc29455608f35acab094d816)
    Reviewed-on: http://git-master/r/753278
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Alignment of data for 64 bit read  [sujeet baranwal, 2015-06-06]
    Packing a register's value into a 64-bit variable requires reversing the
    order of the 32-bit words.
    Bug 200083334
    Change-Id: Id938f2a2fcffc90ef135ae963ae288c9a655069a
    Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
    Reviewed-on: http://git-master/r/744455
    (cherry picked from commit dfd3a752ea6a0943be499410010a176756221593)
    Reviewed-on: http://git-master/r/753277
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Use vmalloc only when size >4K  [Terje Bergstrom, 2015-06-06]
    When the allocation size is 4k or below, we should use kmalloc; vmalloc
    should be used only for larger allocations. Introduce nvgpu_alloc, which
    checks the size and decides which API to use.
    Change-Id: I593110467cd319851b27e57d1bfe8d228d3f2909
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/743974
    (cherry picked from commit 7f56aa1f0ecafbfde7286353b60e25e494674d26)
    Reviewed-on: http://git-master/r/753276
    Reviewed-by: Automatic_Commit_Validation_User
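    A sketch of the size-based selection described above; nvgpu_alloc_sketch()
    and nvgpu_free_sketch() are stand-ins, not the exact nvgpu helpers:

        #include <linux/mm.h>
        #include <linux/slab.h>
        #include <linux/vmalloc.h>

        /* Pick the allocator by size: kmalloc wants physically contiguous
         * memory and suits small requests; vmalloc is reserved for anything
         * above one page (4 KB on typical configs). */
        static void *nvgpu_alloc_sketch(size_t size)
        {
            if (size <= PAGE_SIZE)
                return kzalloc(size, GFP_KERNEL);
            return vzalloc(size);
        }

        static void nvgpu_free_sketch(void *p)
        {
            if (is_vmalloc_addr(p))
                vfree(p);
            else
                kfree(p);
        }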
* gpu: nvgpu: minor cleanups in cyclestats snapshots  [Leonid Moiseichuk, 2015-06-06]
    Two minor cleanups of the cyclestats snapshots code are implemented:
    1. An unacceptably small buffer passed as a cyclestats snapshot buffer
       caused a kernel panic during list element removal:
         NvRmGpuTest_Channel_Cyclestats_Snapshot_Gen for 1 clients, each has 4 KB mappings and 1 perfmons
         [ 304.533073] Unable to handle kernel NULL .... address 00000008
         [ 304.541825] pgd = ffffffc04fc9f000
         [ 304.545277] [00000008] *pgd=0000000000000000
         [ 304.549554] Internal error: Oops: 96000045 [#1] PREEMPT SMP
         ....
         [ 304.584978] PC is at css_gr_free_client_data+0x28/0xe4
         [ 304.590105] LR is at gr_gk20a_css_attach+0x6e0/0x700
    2. Improved allocation of perfmon IDs is implemented.
    Bug 1573150
    Change-Id: I58b753434141bf573463563fdd699c11ea914943
    Signed-off-by: Leonid Moiseichuk <lmoiseichuk@nvidia.com>
    Reviewed-on: http://git-master/r/751385
    (cherry picked from commit e9314c29df3fb708a20fff58cfa64c2ead857b0f)
    Reviewed-on: http://git-master/r/753275
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: cyclestats mode E snapshots support  [Leonid Moiseichuk, 2015-06-06]
    This is the kernel supporting code for cyclestats mode E. Cyclestats mode E
    is implemented following the Windows design in user space and requires the
    following operations to be implemented:
      - attach a client to the device's shared hardware buffer
      - detach a client from the shared hardware buffer
      - flush, i.e. copy available data from the hardware buffer to private
        client buffers according to the perfmon IDs assigned to clients
      - perfmon ID management for user-space clients
      - a NVGPU_GPU_FLAGS_SUPPORT_CYCLE_STATS_SNAPSHOT capability added
    Bug 1573150
    Change-Id: I9e09f0fbb2be5a95c47e6d80a2e23fa839b46f9a
    Signed-off-by: Leonid Moiseichuk <lmoiseichuk@nvidia.com>
    Reviewed-on: http://git-master/r/740653
    (cherry picked from commit 79fe89fd4cea39d8ab9dbef0558cd806ddfda87f)
    Reviewed-on: http://git-master/r/753274
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: use updated APIs to get/put syncpoint  [Deepak Nibade, 2015-06-05]
    Pass the host1x device pointer to the APIs below:
      nvhost_get_syncpt_host_managed()
      nvhost_syncpt_put_ref_ext()
    Also, compose a name for syncpoints in the GPU driver itself. The name is
    created as a combination of the device name and the channel index.
    Bug 1611482
    Change-Id: Id1ddd0e87e0272ddb0758713d6b6c2544bc36bf4
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/751908
    Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
    Tested-by: Arto Merilainen <amerilainen@nvidia.com>
* Revert "Revert "Revert "Revert "gpu: nvgpu: New allocator for VA space""""Bharat Nihalani2015-06-04
| | | | | | | | | | | | | | | This reverts commit 2e5803d0f2b7d7a1577a40f45ab9f3b22ef2df80 since the issue seen with bug 200106514 is fixed with change http://git-master/r/#/c/752080/. Bug 200112195 Change-Id: I588151c2a7ea74bd89dc3fd48bb81ff2c49f5a0a Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com> Reviewed-on: http://git-master/r/752503 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Fix invalid GPFIFO entries  [Alex Waterman, 2015-06-04]
    With the addition of the buddy allocator, push buffers are often allocated
    by the kernel in high GVA memory regions. These addresses, when written into
    a GPFIFO entry, have bits set in entry1 of the GPFIFO command. As a result,
    if no length is set, these address bits will be interpreted as opcodes by
    the GPU.
    The bug fixed by this patch was caused by a wait_cmd being inserted into the
    GPFIFO with a pushbuffer address above 4GB and a zero length. This occurred
    because the code that creates the wait_cmd was able to return what appeared
    to be a valid priv_cmd_entry even though there was nothing in that command.
    This bug did not appear before the buddy allocator because the FFF allocator
    always starts allocating from low addresses. As such, when a channel's
    GPFIFO is allocated it gets an address below 32 bits. Then, because no
    higher address bits are set, entry1 of the GPFIFO is simply 0 and the GPU
    treats the command as a no-op.
    Change-Id: I9c1e600c368b55626e99f6f712f1821148bbb76d
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/752080
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
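    A sketch of the guard this fix implies; the struct fields and helper name
    are illustrative, not the actual gk20a submit path:

        #include <linux/errno.h>
        #include <linux/types.h>

        struct priv_cmd_entry {                 /* illustrative only */
            u64 gva;                            /* GPU VA of the command buffer */
            u32 size;                           /* size in words; 0 == empty */
            bool valid;
        };

        /* Only emit a GPFIFO entry for a wait_cmd that actually contains
         * something; a zero-length entry whose GVA is above 4GB would have
         * its high address bits decoded as an opcode. */
        static int append_wait_cmd(struct priv_cmd_entry *wait_cmd)
        {
            if (!wait_cmd->valid || wait_cmd->size == 0)
                return -EINVAL;

            /* ... pack wait_cmd->gva and wait_cmd->size into entry0/entry1 ... */
            return 0;
        }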
* Revert "Revert "Revert "gpu: nvgpu: New allocator for VA space"""Bharat Nihalani2015-06-02
| | | | | | | | | | | This reverts commit ce1cf06b9a8eb6314ba0ca294e8cb430e1e141c0 since it causes GPU pbdma interrupt to be generated. Bug 200106514 Change-Id: If3ed9a914c4e3e7f3f98c6609c6dbf57e1eb9aad Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com> Reviewed-on: http://git-master/r/749291
* gpu:nvgpu: update channel_setup_ramfc interface  [Seshendra Gadagottu, 2015-06-02]
    Pass a flags parameter to channel_setup_ramfc for indicating
    nvgpu_alloc_gpfifo_args characteristics.
    Bug 1645628
    Change-Id: Ia40b37c5c7b208d459aa84f1b022036dd5e1b599
    Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: http://git-master/r/744526
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Use HAL for waiting for GR quiet  [Terje Bergstrom, 2015-06-01]
    Create a HAL for waiting for GR to become quiet. Use it for all cases where
    we require GR to be quiet, but where it does not need to be idle.
    Bug 1640378
    Change-Id: Ic0222d595a2d049e0fa8864b069ab94a97fac143
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/745640
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
* gpu: nvgpu: Support >32bit addresses in simulation  [Terje Bergstrom, 2015-06-01]
    Change-Id: I96282b4e047ba8b5369dac039f0f51856c69235b
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/747935
    (cherry-picked from commit 0bb090745b4122fc4149b1bd6026138a1b9a32bc)
    Reviewed-on: http://git-master/r/749235
* gpu: nvgpu: drop syncpoint refcount instead of direct free  [Deepak Nibade, 2015-06-01]
    Drop the host1x syncpoint refcount with nvhost_syncpt_put_ref_ext() instead
    of freeing it directly.
    Bug 1646883
    Change-Id: Ib213e58031a9302e683f8d13ebb4e1f913206464
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/747150
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
* gpu: nvgpu: fix channel leak with immediate close  [Deepak Nibade, 2015-05-19]
    If a GPU channel is closed immediately after opening, without performing any
    operation on it, we leak that channel. E.g. the command below leaks a channel:
      echo > /dev/nvhost-gpu
    Fix this leak by releasing the channel before returning.
    Change-Id: I2598e3cabec6996cb1cf8066a1e6d7d5864ae02b
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/743235
    (cherry picked from commit 428771509b4431ebe88e38061b495cabc5192327)
    Reviewed-on: http://git-master/r/744279
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Sachin Nikam <snikam@nvidia.com>
* Revert "Revert "gpu: nvgpu: New allocator for VA space""Alex Waterman2015-05-19
| | | | | | | | | | | | | | This reverts commit 7eb42bc239dbd207208ff491c3fb65c3d83274d8. The original commit was actually fine. Change-Id: I564ce6530ac73fcfad17dcec9c53f0353b4f02d4 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: http://git-master/r/743300 (cherry picked from commit e99aa2485f8992eabe3556f3ebcb57bdc8ad91ff) Reviewed-on: http://git-master/r/743301 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: wait for running jobs to finish before shutdown  [Deepak Nibade, 2015-05-18]
    In gk20a_pm_shutdown(), we currently call __pm_runtime_disable(), which
    prevents h/w access for new requests made after the shutdown() call. Also,
    once gk20a_pm_shutdown() completes, platform code will just rail gate the
    GPU.
    But it is possible that some other thread is already accessing h/w while
    shutdown() was triggered, and this can result in a hang. Hence, wait until
    all currently executing jobs are finished before returning from
    gk20a_pm_shutdown().
    Also, we need to wait for the GPU's usage count to become 1, since platform
    code will increase the usage count and then call shutdown(); a usage count
    of 1 therefore indicates that the GPU is idle.
    Bug 200099940
    Change-Id: I1f2457829e2737c07302d13f355353a30c3b4e67
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/734920
    Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
    Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
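    A rough sketch of the shutdown wait described above; all_jobs_finished() is
    a placeholder for the driver's actual channel/job bookkeeping, and the
    5-second timeout is an arbitrary example:

        #include <linux/atomic.h>
        #include <linux/delay.h>
        #include <linux/device.h>
        #include <linux/jiffies.h>

        /* Poll until all jobs have drained and the runtime-PM usage count
         * drops to 1 (the reference held by the shutdown caller), or give
         * up after a timeout and proceed with shutdown anyway. */
        static void wait_for_idle_on_shutdown(struct device *dev,
                                              bool (*all_jobs_finished)(void))
        {
            unsigned long end = jiffies + msecs_to_jiffies(5000);

            while (time_before(jiffies, end)) {
                if (all_jobs_finished() &&
                    atomic_read(&dev->power.usage_count) == 1)
                    return;
                msleep(20);
            }
        }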
* gpu: nvgpu: add secure gpccs boot support  [Vijayakumar, 2015-05-18]
    Bug 200080684
    Keep it disabled by default. Also trim the code by removing a redundant
    variable used to check recovery. PMU quick wait now checks only for IRQs
    which are serviced by the kernel. Request the PMU to bit-bang the gpccs
    ucode.
    Change-Id: I12ef23d6d59b507e86a129b69eab65b21d0438c6
    Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
    Reviewed-on: http://git-master/r/729622
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Optimize validate_fixed_buffer  [Sami Kiminki, 2015-05-18]
    Function validate_fixed_buffer used to do a linear search for collision
    detection of already mapped buffers. Optimize this by doing a nice
    logarithmic search instead.
    Change-Id: Ifbf2ec015741d44883da27bc6f8cc090c48da145
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: http://git-master/r/739682
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
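    For illustration, the logarithmic collision check amounts to an rb-tree walk
    over mapped ranges; the struct here is a stand-in for the driver's
    mapped-buffer node, not the real type:

        #include <linux/rbtree.h>
        #include <linux/types.h>

        struct mapped_buf {                     /* illustrative only */
            struct rb_node node;
            u64 addr;
            u64 size;
        };

        /* Walk an rb-tree of mapped buffers keyed by GPU VA and test whether
         * [addr, addr + size) overlaps any existing mapping. */
        static bool range_collides(struct rb_root *root, u64 addr, u64 size)
        {
            struct rb_node *n = root->rb_node;

            while (n) {
                struct mapped_buf *m = rb_entry(n, struct mapped_buf, node);

                if (addr + size <= m->addr)
                    n = n->rb_left;
                else if (addr >= m->addr + m->size)
                    n = n->rb_right;
                else
                    return true;                /* ranges overlap */
            }
            return false;
        }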
* Revert "gpu: nvgpu: Fix gk20a shutdown issue"Deepak Nibade2015-05-18
| | | | | | | | | | | | | | | | | | This reverts commit c18edd3686115ca0b7d8bb08b35f23264f865358. Proper fixes for shutdown issue are being added with below changes http://git-master/r/#/c/738509/ http://git-master/r/#/c/734920/ Hence revert this workaround Bug 200099940 Change-Id: I74b29c804af2bdb9d95c6b93c5308a323575ae57 Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: http://git-master/r/739082 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: jump to fail path if pm_runtime_get_sync() fails  [Deepak Nibade, 2015-05-18]
    Currently we execute pm_runtime_get_sync() and then gk20a_scale_notify_busy()
    without checking the return value of pm_runtime_get_sync(). If GPU shutdown
    has already been initiated, we get a hard hang due to this, per the sequence
    below:
      - one thread invokes GPU shutdown and then forcibly rail gates the GPU
      - another thread (unaware of shutdown) calls gk20a_busy()
      - since runtime PM is disabled in the shutdown path, pm_runtime_get_sync()
        fails
      - but we still go on running gk20a_scale_notify_busy(), which tries to
        access some GPU registers and hangs
    Fix this by jumping to the failure path in case pm_runtime_get_sync() fails.
    Bug 200099940
    Change-Id: I022f2dfa9408f640fb44e6f4b10a437688779c0a
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/738509
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
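    A minimal sketch of the failure-path handling described above; busy_sketch()
    and the scale_notify_busy callback are stand-ins for gk20a_busy() and
    gk20a_scale_notify_busy(), and dropping the usage count on failure is our
    assumption about the cleanup needed:

        #include <linux/device.h>
        #include <linux/pm_runtime.h>

        static int busy_sketch(struct device *dev,
                               void (*scale_notify_busy)(struct device *))
        {
            int ret;

            ret = pm_runtime_get_sync(dev);
            if (ret < 0) {
                /* Drop the usage count taken even on failure and bail out
                 * before touching any GPU registers. */
                pm_runtime_put_noidle(dev);
                return ret;
            }

            scale_notify_busy(dev);
            return 0;
        }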
* gpu: nvgpu: use vzalloc for mm entries  [Kerwin Wan, 2015-05-18]
    When the system is low on memory, kzalloc will fail if the kernel requests a
    contiguous memory block larger than PAGE_SIZE.
    Bug 200096099
    Change-Id: I44e217ffa6aa6c453a4d4afba45a8ee3b5756cc1
    Signed-off-by: Kerwin Wan <kerwinw@nvidia.com>
    Reviewed-on: http://git-master/r/732197
    (cherry picked from commit 62861976421415f93e98a0a9f977ac1f66046714)
    Reviewed-on: http://git-master/r/737057
    Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
    Tested-by: Krishna Reddy <vdumpa@nvidia.com>
* gpu: nvgpu: protect missing sgl in gk20a_mem_phys  [Konsta Holtta, 2015-05-18]
    Return zero for a missing sgl (sgt is already checked) instead of attempting
    to dereference NULL. Those NULL conditions should be almost nonexistent, and
    zero is not normally used. When reading gk20a_mem_phys() in
    gk20a_gr_get_chid_from_ctx() from an ISR, the mem desc may race with channel
    deletion and get suddenly zeroed, even if the channel's in_use flag is set.
    Plain zero results in the expected behaviour.
    Change-Id: I7033979091951cba3e3004ddc7550cd327ad0baf
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/737759
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
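    A sketch of the guard; mem_desc_sketch is an illustrative stand-in for the
    driver's mem_desc type:

        #include <linux/scatterlist.h>
        #include <linux/types.h>

        struct mem_desc_sketch {                /* illustrative only */
            struct sg_table *sgt;
        };

        /* Return 0 for a missing sgt/sgl instead of dereferencing NULL; the
         * caller treats a zero physical address as "no valid mapping". */
        static u64 mem_phys_sketch(struct mem_desc_sketch *mem)
        {
            if (!mem->sgt || !mem->sgt->sgl)
                return 0;
            return sg_phys(mem->sgt->sgl);
        }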
* gpu: nvgpu: Implement compbits mapping  [Sami Kiminki, 2015-05-18]
    Implement NVGPU_AS_IOCTL_GET_BUFFER_COMPBITS_INFO for requesting info on
    compbits-mappable buffers, and NVGPU_AS_IOCTL_MAP_BUFFER_COMPBITS, which
    enables mapping compbits to the GPU address space of said buffers. This,
    subsequently, enables moving comptag swizzling from GPU to CDEH/CDEV formats
    to userspace.
    Compbits mapping is conservative and may map more than what is strictly
    needed. This is for two reasons: 1) mapping must be done on small page
    alignment (4kB), and 2) GPU comptags are swizzled all around the aggregate
    cache line, which means that the whole cache line must be visible even if
    only some comptag lines are required from it. The cache line size is not
    necessarily a multiple of the small page size.
    Bug 200077571
    Change-Id: I5ae88fe6b616e5ea37d3bff0dff46c07e9c9267e
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: http://git-master/r/719710
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
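    For illustration, a sketch of how such a conservative mapping window can be
    computed; the function and parameter names are ours, and the real driver
    derives the cache line size from its L2/comptag configuration:

        #include <linux/kernel.h>
        #include <linux/types.h>

        #define SMALL_PAGE_SIZE 4096ULL         /* small page granularity */

        /* Expand the requested byte range to whole aggregate cache lines,
         * then round the result out to small-page boundaries; the window may
         * therefore cover more than strictly needed. */
        static void compbits_map_window(u64 offset, u64 len, u64 cacheline_size,
                                        u64 *map_offset, u64 *map_size)
        {
            u64 start = rounddown(offset, cacheline_size);
            u64 end   = roundup(offset + len, cacheline_size);

            start = rounddown(start, SMALL_PAGE_SIZE);
            end   = roundup(end, SMALL_PAGE_SIZE);

            *map_offset = start;
            *map_size   = end - start;
        }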
* gpu: nvgpu: tegra gpu to emc frequency mapping  [Anders Kugler, 2015-05-18]
    o emc clock scaling (bug fix): Take the GPU load into account for GPU
      frequencies less than or equal to fmax @ Vmin.
    Bug 1591643
    Change-Id: I0298adfdd4b7111557907c3bd6022fd6005355f0
    Signed-off-by: Anders Kugler <akugler@nvidia.com>
    Reviewed-on: http://git-master/r/735846
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Dynamic betacb size  [Terje Bergstrom, 2015-05-18]
    Allow querying and setting the default betacb size via debugfs. For global
    buffers the value takes effect upon first boot of the GPU, and has no effect
    after that.
    Bug 1628352
    Change-Id: Ib63f4299249c41eab1b36cc501b525cc54211195
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/733328
* gpu: nvgpu: Fix gk20a shutdown issue  [Sumit Singh, 2015-05-18]
    With CONFIG_PM_GENERIC_DOMAINS_OF enabled, device reboot was getting hung
    while shutting down gk20a. It was happening because genpd_dev_pm_detach()
    was railgating gk20a while another thread was still accessing it. So, assign
    NULL to dev->pm_domain->detach for gk20a, so that genpd_dev_pm_detach() is
    not called during gk20a shutdown and does not railgate it.
    This patch will be reverted once we have a clean shutdown for gk20a.
    Bug 200070810
    Bug 200099940
    Change-Id: Ie2e89ea01a98a9d4f2f68a3ab07b6923ffa374f6
    Signed-off-by: Sumit Singh <sumsingh@nvidia.com>
    Reviewed-on: http://git-master/r/735455
    Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
    Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
* gpu: nvgpu: Combine delays with GK20A parameters  [Alex Frid, 2015-05-18]
    Specified locking timeout and IDDQ exit delay as GK20A PLL parameters, and
    used this data instead of hard-coded numbers.
    Change-Id: I59e16ed11fdba6911f2751195d182e68aed96851
    Signed-off-by: Alex Frid <afrid@nvidia.com>
    Reviewed-on: http://git-master/r/735481
    Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
* gpu: nvgpu: fix setting gr_pd_ab_dist_cfg1_r()  [David Li, 2015-05-18]
    gr_*__set_alpha_circular_buffer_size() left the max_batches field of
    gr_pd_ab_dist_cfg1_r as 0, which results in too many alpha-beta transitions
    and poor performance when tessellation or geometry shaders are used.
    Change-Id: If18feb1119e9672005455155dc56337cd444a1f1
    Signed-off-by: David Li <davli@nvidia.com>
    Reviewed-on: http://git-master/r/735476
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: dbg level for per-write ctx patch msg  [Konsta Holtta, 2015-05-18]
    The message "per-write ctx patch begin?" is a legacy message for warning
    about probably inefficient code, but it's written at error loglevel. Silence
    it a bit by using gk20a_dbg_info(). The inefficient paths can be fixed
    later.
    Bug 200075565
    Change-Id: Idae821aef3001ea5016de22a1a87fec747c42d31
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/734248
* gpu: nvgpu: check sync existence in channel update  [Konsta Holtta, 2015-05-18]
    The channel sync object can get deleted before all channel updates have
    finished if the channel is freed before them, so work around a NULL
    dereference by testing whether the sync exists. Channel and/or c->sync
    refcounting would be necessary for a proper fix.
    Bug 200076344
    Change-Id: Ica8ef2df9cd95cfa593cd4f41768dbb6641357b2
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/734266
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: updated gpmu interface data struct.  [Mahantesh Kumbar, 2015-05-18]
    - pmu version 19494277 is from CL 19495746
    - updated gpmu interface data struct with respect to latest pmu ucode
      interface headers:
        gpmuifpg.h      - 19199047
        gpmuifperfmon.h - 18238819
        gpmuifpmu.h     - 19199047
        gpmuifacr.h     - 19343196
        gpmuifcmn.h     - 19264862
        rmflcnbl.h      - 19317152
    Bug 200085428
    Change-Id: I7db56dcf5a3038b40da37a69e8723a2e9a652e4b
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    Reviewed-on: http://git-master/r/728461
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: zbc: disable activity only from ioctl  [Terje Bergstrom, 2015-05-18]
    Move the fifo engine activity disabling and wait-for-idle from the
    lowest-level functions higher, into the ioctl path of zbc operations, so
    that the sw initialization path wouldn't call them. During the init path,
    the disable isn't necessary, and the code path could result in a deadlock on
    the fifo runlist mutex.
    Change-Id: Icf5c270ba29bc1c7f88874fba2d176d68e11278a
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/733668