path: root/drivers
* gpu: nvgpu: make default vidmem page size 64k (Deepak Nibade, 2016-09-01)

  Allocate 64k pages for vidmem by default. Also make sure that the base
  address of vidmem is aligned to the page size.

  Jira DNVGPU-20

  Change-Id: Ie2e5111f942467754db5b45f1518d72c925d3d19
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1206405
  (cherry picked from commit 542ebf7f571ba6dc631466e562f7d8e05df4a9a6)
  Reviewed-on: http://git-master/r/1210958
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: use vidmem for page tables if available (Konsta Holtta, 2016-09-01)

  Use the common gk20a_gmmu_alloc() that tries vidmem too.

  Jira DNVGPU-20

  Change-Id: I4ea02bc4962d299c6f71444048d4a2a22bd80f55
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1206404
  (cherry picked from commit 7297727cce8c5c7b26f82afe98cc5428135b4777)
  Reviewed-on: http://git-master/r/1178831
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add new API to get base address for sysmem/vidmem buffers (Deepak Nibade, 2016-09-01)

  Add new API gk20a_mem_get_base_addr(), which returns the vidmem base
  address in case of vidmem and the IOVA address in case of sysmem.

  Even though vidmem allocations are non-contiguous, this API is useful
  (and should only be used) for allocations with one chunk (e.g. page
  tables).

  Also, since page tables can reside either in sysmem or vidmem, use
  this API to get the address of page tables.

  Jira DNVGPU-20

  Change-Id: Ie04af9ca7bfccfec1a8a8e4be2c507cef5cef8e1
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1206403
  (cherry picked from commit a8c74dc188878f2948fa1e0e47bf1837fba6c5e0)
  Reviewed-on: http://git-master/r/1210957
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

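  A minimal sketch of the aperture dispatch such an API implies; the
  mem_desc fields shown are assumptions, and vidmem_single_chunk_base()
  is a hypothetical helper standing in for the single-chunk base lookup:

      #include <linux/scatterlist.h>

      static u64 mem_get_base_addr_sketch(struct mem_desc *mem)
      {
              if (mem->aperture == APERTURE_VIDMEM)
                      /* vidmem: base of the (single) chunk */
                      return vidmem_single_chunk_base(mem); /* hypothetical */

              /* sysmem: IOVA of the first (only) entry in the SG list */
              return sg_dma_address(mem->sgt->sgl);
      }
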
* gpu: nvgpu: allocate blob space early (Deepak Nibade, 2016-09-01)

  Allocating blob space for the PMU might need a fixed-address
  allocation in vidmem during boot. But if some page tables are
  allocated before the blob space, the blob space allocation could
  fail.

  Fix this by allocating blob space early during boot.

  Jira DNVGPU-20

  Change-Id: I30eca1023c8f8f8be101bb7e160ba57a7040911a
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1206402
  (cherry picked from commit fad4309ce345ed3879f497bda27f2eceb1084dbb)
  Reviewed-on: http://git-master/r/1210956
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Add proper timeout handling for vidmem clear operations (Lakshmanan M, 2016-09-01)

  The gk20a_fence_wait() API may be interrupted by a signal before its
  timeout has actually elapsed. This CL adds a retry mechanism
  (on -ERESTARTSYS) for when gk20a_fence_wait() returns before its
  timeout has elapsed.

  Bug 200230544

  Change-Id: I347ed2004935a8b9413f95dcb6fca2b74bf49f2a
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: http://git-master/r/1206265
  (cherry picked from commit d3ef533942487785d84d109f985ae648eb3c2434)
  Reviewed-on: http://git-master/r/1210955
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

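  A minimal sketch of the retry pattern the message describes; the
  gk20a_fence_wait() argument list is an assumption, and a real version
  would also recompute the remaining timeout across retries:

      #include <linux/errno.h>

      static int fence_wait_retry_sketch(struct gk20a_fence *f,
                                         unsigned long timeout_ms)
      {
              int err;

              do {
                      err = gk20a_fence_wait(f, timeout_ms); /* assumed args */
                      /* -ERESTARTSYS means a signal woke us, not a timeout */
              } while (err == -ERESTARTSYS);

              return err;
      }
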
* gpu: nvgpu: use vidmem for gpfifos if available (Konsta Holtta, 2016-09-01)

  Use the common gk20a_gmmu_alloc() that tries vidmem too.

  Jira DNVGPU-21

  Change-Id: Ie22cb0f5ed70ec71567fc85d348b3526c9a32b02
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1204304
  (cherry picked from commit 07cb99baeb10194c520addd77517841a6f99df93)
  Reviewed-on: http://git-master/r/1169310
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: support pramin access for non-contiguous vidmem (Deepak Nibade, 2016-09-01)

  API pramin_access_batched() currently only supports contiguous
  allocations. Modify this API to support non-contiguous allocations
  from the page allocator as well.

  Update gk20a_mem_wr32() and gk20a_mem_rd32() to reuse
  pramin_access_batched().

  Use gk20a_memset() in gk20a_gmmu_free_attr_vid() to clear vidmem
  pages for kernel buffers.

  Jira DNVGPU-30

  Change-Id: I43630912f4837d8ebc6b9c58f4f427218ef9725b
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1204303
  (cherry picked from commit 2f84f141d02fd2f641cb18a48896fb3ae5f7e51f)
  Reviewed-on: http://git-master/r/1210954
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: track allocator and user for each mem (Deepak Nibade, 2016-09-01)

  Store an allocator pointer in each mem_desc. This pointer should be
  used while freeing the mem, instead of assuming a common allocator.

  Add a user_mem flag to mem_desc which is set only for User vidmem
  allocations. We delay the free of a mem in the worker only if this
  flag is set on it; otherwise we free it immediately. This is needed
  so that all kernel allocations can work with both sysmem and vidmem.

  Jira DNVGPU-84

  Change-Id: Ib9a9209b164bc56b7880448f86bd6d42b324cc86
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1203099
  (cherry picked from commit 8f0b0122f36a0b6f1932fa9a98d7eb03b1f623d1)
  Reviewed-on: http://git-master/r/1210953
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: fix memory leak in case of failure (Deepak Nibade, 2016-09-01)

  In __gk20a_alloc_pages(), if we fail to allocate a chunk, we free the
  previously allocated chunks in the error path. But we do not free the
  memory reserved in those chunks, which could lead to OOM situations.

  Fix this by calling gk20a_free() for each chunk in the error path.

  Jira DNVGPU-96

  Change-Id: I68aa18d68a5282405016e688c790ccbc0c2a0d69
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1203098
  (cherry picked from commit f096bd1675600f4e2fc2d686f2911bb945fbbf0b)
  Reviewed-on: http://git-master/r/1210952
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

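  A sketch of what the corrected error path looks like, assuming an
  illustrative chunk descriptor; the struct and field names are
  stand-ins, not the real nvgpu definitions:

      #include <linux/list.h>
      #include <linux/slab.h>

      struct chunk_sketch {              /* illustrative bookkeeping */
              struct list_head list_entry;
              u64 base;
      };

      static void free_partial_alloc_sketch(struct gk20a_allocator *a,
                                            struct list_head *chunks)
      {
              struct chunk_sketch *chunk, *tmp;

              list_for_each_entry_safe(chunk, tmp, chunks, list_entry) {
                      list_del(&chunk->list_entry);
                      gk20a_free(a, chunk->base); /* release reserved memory */
                      kfree(chunk);               /* release the descriptor */
              }
      }
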
* gpu: nvgpu: disable sync_fence for CE jobs (Deepak Nibade, 2016-09-01)

  We do not need a sync_fence for CE jobs submitted in
  gk20a_ce_execute_ops(), since all the waiters of the fence are in
  kernel space only.

  Jira DNVGPU-84

  Change-Id: Idad6c40abcefb86e60a5327bbbff6827b1ca33cc
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1201347
  (cherry picked from commit e294b2d37cf79182bb9a255adb188eb6afa47c27)
  Reviewed-on: http://git-master/r/1210951
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: clear vidmem buffers in worker (Deepak Nibade, 2016-09-01)

  We clear vidmem buffers in the buffer free path. But to clear the
  buffers we need to submit CE jobs, and this could cause issues/races
  if free is called from a critical path.

  Hence solve this by moving the buffer clear/free to a worker:
  gk20a_gmmu_free_attr_vid() now just puts the mem_desc onto a list and
  schedules a worker, and the worker thread traverses the list and
  clears/frees the allocations.

  In struct gk20a_vidmem_buf, the mem variable was statically
  allocated. But since we delay the free of the mem, convert this
  variable into a pointer and allocate it dynamically.

  Since we delay the free of vidmem memory, it is now possible to face
  OOM conditions during allocation. Hence, while allocating, block
  until we have sufficient memory available, with an upper limit of 1s.

  Jira DNVGPU-84

  Change-Id: I7925590644afae50b6fc04c6e1e43bbaa1c220fd
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1201346
  (cherry picked from commit b4dec4a30de2431369d677acca00e420f8e581a5)
  Reviewed-on: http://git-master/r/1210950
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

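  A rough sketch of the deferred-free scheme just described; the
  container, the clear_list_entry field, and the helpers
  vidmem_dequeue_sketch() and vidmem_clear_and_free_sketch() are
  assumptions for illustration:

      #include <linux/list.h>
      #include <linux/slab.h>
      #include <linux/spinlock.h>
      #include <linux/workqueue.h>

      struct vidmem_sketch {
              struct list_head clear_list;    /* mem_descs awaiting free */
              spinlock_t clear_list_lock;
              struct work_struct clear_work;
      };

      /* worker: drain the list, clearing each buffer via CE, then free */
      static void vidmem_clear_worker_sketch(struct work_struct *work)
      {
              struct vidmem_sketch *v =
                      container_of(work, struct vidmem_sketch, clear_work);
              struct mem_desc *mem;

              while ((mem = vidmem_dequeue_sketch(v)) != NULL) {
                      vidmem_clear_and_free_sketch(v, mem); /* hypothetical */
                      kfree(mem); /* mem is now dynamically allocated */
              }
      }

      /* free path: just enqueue the descriptor and kick the worker */
      static void vidmem_defer_free_sketch(struct vidmem_sketch *v,
                                           struct mem_desc *mem)
      {
              spin_lock(&v->clear_list_lock);
              list_add_tail(&mem->clear_list_entry, &v->clear_list);
              spin_unlock(&v->clear_list_lock);
              schedule_work(&v->clear_work);
      }
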
* gpu: nvgpu: clear whole vidmem on first allocation (Deepak Nibade, 2016-09-01)

  We currently clear vidmem pages in gk20a_gmmu_alloc_attr_vid_at(),
  i.e. in the allocation path for each buffer. But since the buffer
  allocation path could be latency critical, instead clear the whole of
  vidmem once, before the first User allocation, in
  gk20a_vidmem_buf_alloc(), and then clear buffer pages while releasing
  each buffer. In this way, we can ensure that vidmem pages are already
  cleared during the buffer allocation path. At a later stage, clearing
  of pages can be removed from the free path and moved to a separate
  worker as well.

  At this point, the first allocation has the overhead of clearing the
  whole of vidmem, which takes about 380 ms, and this should improve
  once clocks are raised. Also, this is a one-time latency, and
  subsequent allocations have no clearing overhead at all.

  Add API gk20a_vidmem_clear_all() to clear the whole of vidmem.

  We have WPR buffers allocated during boot at fixed addresses in
  vidmem. To prevent overwriting these buffers in
  gk20a_vidmem_clear_all(), clear the whole of vidmem except for the
  bootstrap allocator carveout.

  Add a new API gk20a_gmmu_clear_vidmem_mem() to clear one mem_desc.

  Jira DNVGPU-84

  Change-Id: I5661700585c6241a6a1ddeb5b7c068d3d2aed4b3
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1194301
  (cherry picked from commit 950ab61a04290ea405968d8b0d03e3bd044ce83d)
  Reviewed-on: http://git-master/r/1193158
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Add a bootstrap vidmem allocator (Alex Waterman, 2016-09-01)

  Add an allocator for allocating vidmem before the CE has had a chance
  to be initialized (and clear the rest of vidmem).

  Jira DNVGPU-84

  Change-Id: I5166607a712b3a6eb4c2906b8c7d002c68a6567b
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1197204
  (cherry picked from commit b4e68e84eedd952637b2332d8dc73a9090d6d62e)
  Reviewed-on: http://git-master/r/1210949
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: support GMMU mappings for vidmem page allocator (Deepak Nibade, 2016-09-01)

  Switch to the page allocator for vidmem.

  Support GMMU mappings for pages from the (non-contiguous) page
  allocator in update_gmmu_ptes_locked(): if the aperture is VIDMEM,
  traverse each chunk in an allocation and map it to GPU VA separately.

  Fix CE page clearing to support the page allocator.

  Fix gk20a_pramin_enter() to get the base address from the new
  allocator.

  Define API gk20a_mem_get_vidmem_addr() to get the base address of an
  allocation. Note that this API should not be used if we have more
  than 1 chunk.

  Jira DNVGPU-96

  Change-Id: I725422f3538aeb477ca4220ba57ef8b3c53db703
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1199177
  (cherry picked from commit 1afae6ee6529ab88cedd5bcbe458fbdc0d4b1fd8)
  Reviewed-on: http://git-master/r/1197647
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

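  A minimal sketch of the chunk-by-chunk mapping idea: each physically
  contiguous chunk gets its own PTE run at a running GPU VA offset. The
  chunk struct and write_ptes_sketch() are illustrative assumptions,
  not the real update_gmmu_ptes_locked() internals:

      #include <linux/list.h>

      struct chunk_sketch {             /* one physically contiguous run */
              struct list_head list_entry;
              u64 base, length;
      };

      static void map_vidmem_chunks_sketch(struct vm_gk20a *vm, u64 gpu_va,
                                           struct list_head *chunks)
      {
              struct chunk_sketch *chunk;
              u64 va = gpu_va;

              list_for_each_entry(chunk, chunks, list_entry) {
                      /* one PTE run per contiguous chunk */
                      write_ptes_sketch(vm, va, chunk->base, chunk->length);
                      va += chunk->length;
              }
      }
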
* gpu: nvgpu: send only one event to the debugger (Cory Perry, 2016-09-01)

  Event notifications on TSGs should only be sent to the channel that
  caused the event in the first place, not every channel in the TSG.
  Any more, and the debugger will not be able to tell which channel
  actually got the event. Worse yet, if all the channels in a TSG are
  bound to the same debug session (as is the case with cuda-gdb), then
  multiple nvgpu events for the same gpu event will be triggered,
  causing events to be buffered and the client to get out of sync.

  One gpu exception, one nvgpu event per TSG.

  Bug 1793988

  Signed-off-by: Cory Perry <cperry@nvidia.com>
  Change-Id: I4efb83b0593bd1af38f2342c80793d9db56e42b1
  Reviewed-on: http://git-master/r/1194203
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Fix error handling of __semaphore_bitmap_alloc() (Alex Waterman, 2016-08-31)

  The return value of __semaphore_bitmap_alloc() is an int, for which a
  negative value indicates a failure. That return value was being cast
  directly to an unsigned int before being checked for a negative error
  code. This obviously isn't a good idea.

  Coverity ID 38754

  Change-Id: I50c0478e5504988b059e69b929e9c2e465df7cc0
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1210317
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

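  A sketch of the corrected pattern: keep the return value signed until
  the error check is done, and only then hand it to the unsigned
  consumer. The wrapper and argument list are assumptions:

      #include <linux/errno.h>

      static int sema_alloc_fixed_sketch(unsigned long *bitmap, int max,
                                         unsigned int *out_idx)
      {
              /* keep the result signed so a negative error is visible */
              int ret = __semaphore_bitmap_alloc(bitmap, max); /* assumed */

              if (ret < 0)
                      return ret;

              *out_idx = (unsigned int)ret; /* safe: ret is a valid index */
              return 0;
      }
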
* gpu: nvgpu: Fix possible overflow in buddy allocator (Alex Waterman, 2016-08-31)

  Fix a possible overflow in the buddy allocator's initialization code.
  In practice it should never happen that the pde size is greater than
  32 bits, but this makes coverity happy.

  Coverity ID 54964

  Change-Id: I886fd962bb3e9e328f7305bdcf69827979a39a21
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1210316
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Sachit Kadle <skadle@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

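  A sketch of the overflow class being fixed here (the variable names
  are illustrative, not the real buddy allocator code):

      #include <linux/types.h>

      static u64 pde_size_sketch(unsigned int pde_shift)
      {
              /*
               * "1 << pde_shift" is an int shift and silently overflows
               * for shifts >= 31; promoting to 64 bits before shifting
               * cannot overflow for any plausible pde_shift.
               */
              return 1ULL << pde_shift;
      }
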
* gpu: nvgpu: gk20a: Use spin_lock for jobs_lock (Bharat Nihalani, 2016-08-31)

  This is done to boost the performance of the GPU submit path, which
  is critical for compute use-cases.

  Bug 200215465
  Bug 1804898

  Conflicts:
          drivers/gpu/nvgpu/gk20a/channel_gk20a.c

  Change-Id: Ic4884ee4eac910b92b84a47fdc1b2e9f26b2f1f0
  Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com>
  Reviewed-on: http://git-master/r/1199860
  Reviewed-on: http://git-master/r/1209834
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

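  A sketch of the mutex-to-spinlock swap, under the assumption (implied
  by the change) that the jobs list is only touched in short,
  non-sleeping critical sections; struct and field names are
  illustrative:

      #include <linux/list.h>
      #include <linux/spinlock.h>

      struct job_sketch {
              struct list_head list;
      };

      struct ch_sketch {
              spinlock_t jobs_lock;     /* was: struct mutex jobs_lock */
              struct list_head jobs;
      };

      static void add_job_sketch(struct ch_sketch *c, struct job_sketch *job)
      {
              /*
               * A spinlock avoids the scheduler round-trips a contended
               * mutex can incur on the latency-critical submit path.
               */
              spin_lock(&c->jobs_lock);
              list_add_tail(&job->list, &c->jobs);
              spin_unlock(&c->jobs_lock);
      }
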
* gpu: nvgpu: fix build error when CONFIG_DEBUG_FS=n (David Pu, 2016-08-31)

  Add an 'ifdef CONFIG_DEBUG_FS' check to fix the following compilation
  error when CONFIG_DEBUG_FS=n (which is used for the Android
  'production' build):

      mm_gk20a.c: In function 'gk20a_mm_debugfs_init':
      mm_gk20a.c:4824:2: error: implicit declaration of function
      'debugfs_create_x64' [-Werror=implicit-function-declaration]

  Bug 1778001

  Change-Id: I785288a37b96c391b84925d5971d2691cf80206e
  Signed-off-by: David Pu <dpu@nvidia.com>
  Reviewed-on: http://git-master/r/1210393
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

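  A minimal sketch of the guard pattern: compile the debugfs hookup
  only when debugfs itself is built in, so CONFIG_DEBUG_FS=n kernels
  still build. The function name and the debugfs file name here are
  assumptions:

      #include <linux/debugfs.h>

      #ifdef CONFIG_DEBUG_FS
      static void mm_debugfs_init_sketch(struct dentry *root, u64 *val)
      {
              debugfs_create_x64("some_counter", 0644, root, val);
      }
      #else
      static void mm_debugfs_init_sketch(struct dentry *root, u64 *val)
      {
              /* no-op when debugfs is compiled out */
      }
      #endif
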
* gpu: nvgpu: Turn the debug macro back to pr_info (Alex Waterman, 2016-08-30)

  Instead of the debug prints from the allocators being warnings, they
  should be just regular prints.

  Bug 1799159

  Change-Id: Ic6e3c38fa286c4acd6fcba51dc59158dc2d655fc
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1201372
  (cherry picked from commit 107caf4ce68a7c76023ee1e66a98c5570f401059)
  Reviewed-on: http://git-master/r/1208478
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Clarify comment in allocator code (Alex Waterman, 2016-08-30)

  One of the flags defined for allocators has not yet been implemented.
  This clarifies the comment and explains why the flag has been defined
  even though it is not yet implemented.

  Bug 1799159

  Change-Id: I1e84439d63ca391941cee8e5362ffd9cc959744b
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1201371
  (cherry picked from commit 8e6566b173f17d9c169a9fa0f6104f4bbf608dc1)
  Reviewed-on: http://git-master/r/1208477
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Add checking in allocator functions (Alex Waterman, 2016-08-30)

  Add checks to make sure function pointers are valid before attempting
  to call them. Also, ensure that any allocator created defines the
  following 3 functions at minimum:

      alloc()
      free()
      fini()

  Bug 1799159

  Change-Id: I4cd3d5746ccb721c723a161c9487564846027572
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1200059
  (cherry picked from commit e26557a49d7ca6629ada24f12a3be396b0ae22cd)
  Reviewed-on: http://git-master/r/1208476
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

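  A sketch of the minimum-ops check at allocator creation time; the ops
  struct mirrors the three functions the message names, but the exact
  signatures are assumptions:

      #include <linux/errno.h>
      #include <linux/types.h>

      struct allocator_ops_sketch {
              u64  (*alloc)(void *a, u64 len);
              void (*free)(void *a, u64 addr);
              void (*fini)(void *a);
      };

      static int allocator_check_ops_sketch(const struct allocator_ops_sketch *ops)
      {
              /* refuse to create an allocator missing any mandatory op */
              if (!ops || !ops->alloc || !ops->free || !ops->fini)
                      return -EINVAL;
              return 0;
      }
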
* gpu: nvgpu: Add debugging to the semaphore code (Alex Waterman, 2016-08-30)

  Add GPU debugging to the semaphore code.

  Bug 1732449
  JIRA DNVGPU-12

  Change-Id: I98466570cf8d234b49a7f85d88c834648ddaaaee
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1198594
  (cherry picked from commit 420809cc31fcdddde32b8e59721676c67b45f592)
  Reviewed-on: http://git-master/r/1153671
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Add gpu_dbg_map_v message type (Alex Waterman, 2016-08-30)

  Add a new debug message type: gpu_dbg_map_v. This is used for mapping
  messages that are not specifically memory map operations.

  Also clean up the memory mapping debugging a bit, since there was one
  duplicate print and the memory map print was difficult to parse
  visually. As a result, the message has been modified to put the most
  important information first, in an easily readable format.

  Bug 1732449
  JIRA DNVGPU-12

  Change-Id: Ib19c9371ee958009ab5a2d89b9610e699d070ee2
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1198593
  (cherry picked from commit 51dba53b06ca171cdb13d1707f2d026b0ce29f07)
  Reviewed-on: http://git-master/r/1147670
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Add semaphore debugging info (Alex Waterman, 2016-08-30)

  Add semaphore debugging information to the gk20a channel state debug
  dump.

  Bug 1732449
  JIRA DNVGPU-12

  Change-Id: I7caafd4f6420e1c478be22e236513603c315ce5e
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1198592
  (cherry picked from commit 3fa247adf5fdd8c9b16a24fec00903fdc3abc90a)
  Reviewed-on: http://git-master/r/1133793
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Implement a vidmem allocator (Alex Waterman, 2016-08-30)

  Implement an allocator suitable for managing the video memory on
  dGPUs. It works by allocating chunks from an underlying buddy
  allocator and collating the chunks together (similar to what an sgt
  does in the wider Linux kernel). This makes it possible to get large
  buffers out of potentially fragmented memory. The GMMU can then map
  the physical vidmem into contiguous GVA spaces.

  Jira DNVGPU-96

  Change-Id: Ic1d7800b033a170b77790aa23fad6858443d0e89
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1197203
  (cherry picked from commit fa44684a843956ae384fef6d7a79b9cbbd04f73e)
  Reviewed-on: http://git-master/r/1185231
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

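  A sketch of the collated-chunk data structure the message describes:
  one descriptor, many physically contiguous runs, much like an sgt.
  The struct and field names are illustrative assumptions:

      #include <linux/list.h>
      #include <linux/types.h>

      struct page_alloc_chunk_sketch {
              struct list_head list_entry; /* link in the chunk list */
              u64 base;                    /* physical vidmem base */
              u64 length;                  /* size of this contiguous run */
      };

      struct page_alloc_sketch {
              struct list_head alloc_chunks; /* all chunks, in GVA order */
              u64 nr_chunks;
              u64 length;                    /* total bytes across chunks */
      };

  Because the buddy allocator only has to produce the individual runs,
  large requests can still be satisfied from fragmented vidmem.
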
* gpu: nvgpu: do not flush FECS record on engine reset (Thomas Fleury, 2016-08-29)

  The flush-timestamp-record method can fail when FECS is not
  processing the main method queue. In particular, this occurs in case
  of a ctxsw timeout, where we process fifo sched interrupts from the
  host, but FECS is still waiting for idle (grWFI). In such a scenario,
  this adds a huge delay to the fifo recovery procedure (timeout on the
  FECS method).

  Since flushing the last (incomplete) record from FECS would only be
  useful in that case (context switch ongoing), remove the flush
  operation on engine reset. Note that an explicit ENGINE_RESET event
  (with pid) is inserted in the user-facing ctxsw buffer on engine
  reset.

  Bug 200228310

  Change-Id: I885525f8f197f81266b50db161bb511867fc74f4
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1207305
  (cherry picked from commit 44391b6204fd648949295f90481b0c424d9a5ddf)
  Reviewed-on: http://git-master/r/1208414
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: fix reported interleave for TSGs (Thomas Fleury, 2016-08-29)

  If a channel is part of a TSG, report the TSG's interleave in debugfs
  for the sched parameters.

  Bug 200228310

  Change-Id: I2eeee7aacfa92f9d5fc367225a23a663ca6ac593
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1207304
  (cherry picked from commit 1950ae679f112dcf24a7f3c695d4ab098de10326)
  Reviewed-on: http://git-master/r/1208413
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: fix ctxsw timeout handling for TSGs (Thomas Fleury, 2016-08-29)

  While collecting failing engine data, the id type (is_tsg) was not
  set for the ctxsw and save engine states. This could result in some
  ctxsw timeout interrupts being ignored (id reported with the wrong
  is_tsg).

  For TSGs, check whether we made some progress on any of the channels
  before kicking off fifo recovery.

  Bug 200228310
  Jira EVLR-597

  Change-Id: I231549ae68317919532de0f87effb78ee9c119c6
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1204035
  (cherry picked from commit 7221d256fd7e9b418f7789b3d81eede8faa16f0b)
  Reviewed-on: http://git-master/r/1204037
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: vgpu: fix channel abort (Richard Zhao, 2016-08-23)

  Set a timeout for channel force reset, to prevent submitting
  immediately.

  Bug 1778448
  JIRA VFND-2097

  Change-Id: I729597a68cbdc5ed3f6c878955a10ae7c4659fa4
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1203298
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: vgpu: add channel wdt support (Richard Zhao, 2016-08-18)

  - avoid dumping gr registers for vgpu
  - init the wdt lock

  Bug 1776876
  JIRA VFND-2151

  Change-Id: I73293e0d23b614129c763cb22b09156a8e1432cc
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1202256
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: use force_reset_ch in ch wdt handler (Richard Zhao, 2016-08-18)

  - let force_reset_ch pass down the error code
  - the force_reset_ch callback can cover vgpu too

  Bug 1776876
  JIRA VFND-2151

  Change-Id: I48f7890294c6455247198e0cab5f21f83f61f0e1
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1202255
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: vgpu: get constants of gpc_tpc_count/mask arrays (Richard Zhao, 2016-08-15)

  It'll cover multiple GPCs.

  JIRA VFND-2103

  Change-Id: Ie82bdaad360294696c5a679d694f6f0e2364ca2e
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1194631
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>

* gpu: nvgpu: vgpu: add getting gr constants (Richard Zhao, 2016-08-15)

  Move the attributes below to constants:

      TEGRA_VGPU_ATTRIB_GPC_COUNT
      TEGRA_VGPU_ATTRIB_MAX_TPC_PER_GPC_COUNT
      TEGRA_VGPU_ATTRIB_MAX_TPC_COUNT
      TEGRA_VGPU_ATTRIB_NUM_FBPS
      TEGRA_VGPU_ATTRIB_FBP_EN_MASK
      TEGRA_VGPU_ATTRIB_MAX_LTC_PER_FBP
      TEGRA_VGPU_ATTRIB_MAX_LTS_PER_LTC

  JIRA VFND-2103

  Change-Id: Ic2ac14a0f8a1cf19a996bcef20bef0003d3b9a3b
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1194630
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>

* gpu: nvgpu: vgpu: add getting sm version constants (Richard Zhao, 2016-08-15)

  Move the attribute below to constants:

      TEGRA_VGPU_ATTRIB_GPC0_TPC0_SM_ARCH

  JIRA VFND-2103

  Change-Id: I5d6aa8f4a49e65307989ef02d223c3ee31fcdeed
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1190481
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>

* gpu: nvgpu: vgpu: add getting ltc constants (Richard Zhao, 2016-08-15)

  Move the attributes below to constants:

      TEGRA_VGPU_ATTRIB_COMPTAG_LINES
      TEGRA_VGPU_ATTRIB_L2_SIZE
      TEGRA_VGPU_ATTRIB_CACHELINE_SIZE
      TEGRA_VGPU_ATTRIB_COMPTAGS_PER_CACHELINE
      TEGRA_VGPU_ATTRIB_SLICES_PER_LTC
      TEGRA_VGPU_ATTRIB_LTC_COUNT

  JIRA VFND-2103

  Change-Id: Iecf9717ee553a16ffe8de445be5bfe5a99c3a094
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1190480
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>

* gpu: nvgpu: vgpu: add getting constants golden img size and zcull ctx size (Richard Zhao, 2016-08-15)

  JIRA VFND-2103

  Change-Id: I180324b36a1c6b39300b92a2cff6448d7665679b
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1190479
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>

* gpu: nvgpu: vgpu: add cmd to get RM server constants (Richard Zhao, 2016-08-15)

  Move the fetching of constant attributes into one cmd, which will be
  called only once. This patch adds the basic infrastructure plus gpu
  arch info, max_freq and num_channels support.

  JIRA VFND-2103

  Change-Id: I100599b49f29c99966f9e90ea381b1f3c09177a3
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1189832
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>

* gpu: nvgpu: vgpu: add vgpu private data and helper functions (Richard Zhao, 2016-08-15)

  Move vgpu private data to a dedicated structure and allocate it at
  probe time. Also add a virt_handle helper function, which is used
  everywhere.

  JIRA VFND-2103

  Change-Id: I125911420be72ca9be948125d8357fa85d1d3afd
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1185206
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>

* gpu: nvgpu: fix deferred engine reset sequence (Deepak Nibade, 2016-08-12)

  We currently store the fault_id into fifo.deferred_fault_engines and
  use that in gk20a_fifo_reset_engine(), which is incorrect. Also, in
  the deferred engine reset path during channel close, we do not check
  whether the channel is loaded on the engine or not.

  Fix this as below (see the sketch after this entry):
  - store engine_id bits into fifo.deferred_fault_engines
  - define a new API gk20a_fifo_deferred_reset() to perform the
    deferred engine reset
  - get all engines on which the channel is loaded with
    gk20a_fifo_engines_on_id()
  - for each set bit/engine_id in fifo.deferred_fault_engines, check
    whether the channel is loaded on that engine, and if so, reset the
    engine

  Bug 1791696

  Change-Id: I1b8b1a9e3aa538fe6903a352aa732b47c95ec7d5
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1195087
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

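  A sketch of the deferred-reset walk described above; the
  channel_loaded_on_engine_sketch() helper and the exact
  gk20a_fifo_reset_engine() signature are assumptions:

      #include <linux/bitops.h>

      static void deferred_reset_sketch(struct gk20a *g,
                                        struct channel_gk20a *ch)
      {
              unsigned long engines = g->fifo.deferred_fault_engines;
              unsigned long engine_id;

              for_each_set_bit(engine_id, &engines, BITS_PER_LONG) {
                      /* only reset engines the channel is loaded on */
                      if (channel_loaded_on_engine_sketch(g, ch, engine_id))
                              gk20a_fifo_reset_engine(g, engine_id);
              }
      }
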
* gpu: nvgpu: Disable rail gating for dGPU (Lakshmanan M, 2016-08-12)

  Bug 200224907

  Change-Id: I515bba1987e54fb5ab78efda85916b6f0d3a4297
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: http://git-master/r/1201233
  (cherry picked from commit aa45908e77d451376728860cebe3938e765c4388)
  Reviewed-on: http://git-master/r/1201654
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Exclude first page from vidmem size (Terje Bergstrom, 2016-08-10)

  We initialized the vidmem allocator with base=4K and a size of 4GB.
  This caused the allocator to hand out addresses between 4K and
  4GB+4K, causing a physical MMU fault.

  Bug 1793810

  Change-Id: I554f62aeee4080acd86ef2c8011089ec9b8120df
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1196300
  (cherry picked from commit 41a860e21c6da3f8fda58ceb56e78316f6987f53)
  Reviewed-on: http://git-master/r/1200712
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit

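  The arithmetic behind the fix, as a sketch: if the allocator starts
  at 4K, its managed size must shrink by 4K so that base + size stays
  within vidmem. The function wrapper is illustrative:

      #include <linux/sizes.h>
      #include <linux/types.h>

      static void vidmem_allocator_bounds_sketch(u64 vidmem_size,
                                                 u64 *base, u64 *size)
      {
              *base = SZ_4K;               /* skip page 0 */
              *size = vidmem_size - SZ_4K; /* highest address handed out is
                                              now base + size == vidmem_size */
      }
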
* gpu: nvgpu: Set PCI device node permissions (Terje Bergstrom, 2016-08-10)

  Device nodes for PCI devices need to be RW for everybody.

  Bug 200225622

  Change-Id: I14de9d17f76ca45ba525d0c4f5e8d448bbfda98b
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1198556
  (cherry picked from commit 60bf9118715b61b8cd3f2379479caf280ae4e35c)
  Reviewed-on: http://git-master/r/1200713
  Reviewed-by: Automatic_Commit_Validation_User

* gpu: nvgpu: Free the channel's used semaphore index (Lakshmanan M, 2016-08-10)

  Free the semaphore index used by the channel during
  gk20a_free_channel().

  Bug 1793819

  Change-Id: I4215d05f7f3ba0636e2abb1803011711c8a38301
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: http://git-master/r/1196877
  (cherry picked from commit 2c5720de506caac29629f6a1c578e6da80b1a135)
  Reviewed-on: http://git-master/r/1198883
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: post bpt events after processing (Deepak Nibade, 2016-08-10)

  We currently post bpt events (bpt.int and bpt.pause) even before we
  process and clear the interrupts, and this could cause races with the
  UMD.

  Fix this by posting bpt events only after we are done processing the
  interrupts.

  Bug 200209410

  Change-Id: Ic3ff7148189fccb796cb6175d6d22ac25a4097fb
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1184109
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: initialize local variable (Deepak Nibade, 2016-08-04)

  Initialize the character array buf in gk20a_channel_ioctl() to zero.

  Keeping it uninitialized can result in leaking kernel stack info to
  user space, since we pass this buffer to the UMD.

  Bug 1793398

  Change-Id: Iffd654dbaca3b4e3c8fd2ac270d0febd01c165b8
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1195862
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>

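  A minimal sketch of the leak class and its fix; the buffer size and
  handler shape are assumptions, not the real gk20a_channel_ioctl():

      #include <linux/errno.h>
      #include <linux/uaccess.h>

      static long ioctl_sketch(void __user *uarg, size_t arg_size)
      {
              u8 buf[128] = {0}; /* zero-init: no stale stack bytes leak */

              if (arg_size > sizeof(buf))
                      return -EINVAL;

              /* ... handler fills only some fields of buf ... */

              if (copy_to_user(uarg, buf, arg_size))
                      return -EFAULT;
              return 0;
      }
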
* gpu: nvgpu: Remove early exit from mmu fault (Cory Perry, 2016-08-01)

  Disabling/enabling of PFIFO must stay inside the isr. It cannot be
  held disabled outside the isr -- this causes any kind of preemption
  mechanism to fail in the presence of an MMU fault, until the channel
  resets the engine.

  Bug 1791696

  Change-Id: I16600a8571f6555262a75deb305c1d67eb29581a
  Signed-off-by: Cory Perry <cperry@nvidia.com>
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1191026
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: move dbg_session_ops to gops (Peter Daifuku, 2016-07-30)

  Move dbg_session_ops to gops for better code consistency.

  JIRA VFND-1905

  Change-Id: I04a11d77dd8c26d9922e80e556822f80dd2bc36d
  Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
  Reviewed-on: http://git-master/r/1192641
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>

* gpu: nvgpu: Add preemption mode support for gp10x (Lakshmanan M, 2016-07-29)

  Added preemption mode (WFI, GFXP, CTA and CILP) support for the
  gp10x-family gr classes (PASCAL_B and PASCAL_COMPUTE_B).

  Bug 200221149

  Change-Id: I859a4d2db518bca0ffeb0d85a6bb271f6b15db87
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: http://git-master/r/1193207
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>

* gpu: nvgpu: add check for is_fmodel (Seema Khowala, 2016-07-27)

  The is_fmodel flag will be set in gk20a_probe(). Update the code to
  check is_fmodel, instead of checking for the supported simulated
  platforms.

  Bug 1735760

  Change-Id: I7cbac2196130fe5ce4c1a910504879e6948c13da
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/1177869
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-by: Adeel Raza <araza@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User