path: root/drivers/gpu/nvgpu/gk20a/channel_gk20a.h
* gpu: nvgpu: APIs to enable/disable TSG (Deepak Nibade, 2016-01-13)

  Export the following APIs for TSGs:

    gk20a_enable_tsg()          - enable only the TSG
    gk20a_disable_tsg()         - disable only the TSG
    gk20a_enable_channel_tsg()  - if the channel is part of a TSG, enable the
                                  TSG; otherwise enable the channel
    gk20a_disable_channel_tsg() - if the channel is part of a TSG, disable the
                                  TSG; otherwise disable the channel

  Bug 200156699

  Change-Id: Icdaca35235c3f323687f839fe32c6c5fe964b230
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/927131
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
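  A minimal sketch of how the channel-level wrappers can dispatch to the TSG
  variants. The four exported API names come from the commit above; the
  lookup helper, struct fields, and per-channel helper are assumptions for
  illustration only.

    /* Illustrative sketch: dispatch to the TSG when the channel is bound
     * to one.  gk20a_is_channel_marked_as_tsg() and the fifo.tsg/tsgid
     * fields are assumed names. */
    int gk20a_enable_channel_tsg(struct gk20a *g, struct channel_gk20a *ch)
    {
        if (gk20a_is_channel_marked_as_tsg(ch)) {
            struct tsg_gk20a *tsg = &g->fifo.tsg[ch->tsgid];

            gk20a_enable_tsg(tsg);
        } else {
            gk20a_enable_channel(ch);   /* assumed per-channel helper */
        }
        return 0;
    }

  gk20a_disable_channel_tsg() would follow the same pattern with the disable
  variants.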
* gpu: nvgpu: add high priority channel interleave (Peter Pipkorn, 2016-01-11)

  Interleave all high priority channels between all other channels. This
  reduces the latency for high priority work when there is a lot of lower
  priority work present, imposing an upper bound on the latency.

  Change the default high priority timeslice from 5.2 ms to 3.0 ms in the
  process, to prevent long-running high priority apps from hogging the GPU
  too much.

  Introduce a new debugfs node to enable/disable high priority channel
  interleaving. It is currently enabled by default.

  Add a new runlist length max register, used for allocating a suitably
  sized runlist. Limit the number of interleaved channels to 32.

  This change reduces the maximum time a lower priority job is running (one
  timeslice) before we check that high priority jobs are running.

  Tested with gles2_context_priority (still passes). Basic sanity testing is
  done with graphics_submit (one app is high priority). Also more functional
  testing using lots of parallel runs with:
    NVRM_GPU_CHANNEL_PRIORITY=3 ./gles2_expensive_draw -drawsperframe 20000
      -triangles 50 -runtime 30 -finish
  plus multiple:
    NVRM_GPU_CHANNEL_PRIORITY=2 ./gles2_expensive_draw -drawsperframe 20000
      -triangles 50 -runtime 30 -finish

  Previous to this change, the relative performance between high priority
  work and normal priority work comes down to the timeslice value. This
  means that when there are many low priority channels, the high priority
  work will still drop quite a lot. With this change, the high priority work
  will get roughly half the entire GPU time, meaning that after the initial
  lower performance, it is less likely to drop further due to more apps
  running on the system. This change makes a large step towards real
  priority levels. It is not perfect and there are no guarantees on
  anything, but it is a step forward without any additional CPU overhead or
  other complications. It will also serve as a baseline to judge other
  algorithms against.

  Support for priorities with TSG is future work. Support for interleaving
  mid + high priority channels, instead of just high, is also future work.

  Bug 1419900

  Change-Id: I0f7d0ce83b6598fe86000577d72e14d312fdad98
  Signed-off-by: Peter Pipkorn <ppipkorn@nvidia.com>
  Reviewed-on: http://git-master/r/805961
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
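  A simplified sketch of the interleaving idea described above, shown only
  to illustrate the runlist shape; the real construction lives in the fifo
  code and every name below is an assumption, not the actual implementation.

    /* Illustrative only: emit every high-priority channel after each
     * normal-priority entry, capping interleaved channels at 32, so a
     * high-priority channel waits at most one normal timeslice. */
    #define MAX_INTERLEAVED_CHANNELS 32

    static u32 build_interleaved_runlist(u32 *runlist,
                                         const u32 *normal, u32 num_normal,
                                         const u32 *high, u32 num_high)
    {
        u32 i, j, count = 0;

        if (num_high > MAX_INTERLEAVED_CHANNELS)
            num_high = MAX_INTERLEAVED_CHANNELS;

        for (i = 0; i < num_normal; i++) {
            runlist[count++] = normal[i];
            for (j = 0; j < num_high; j++)
                runlist[count++] = high[j];
        }
        /* caller sizes the runlist buffer using the new length max register */
        return count;
    }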
* gpu: nvgpu: enable semaphore acquire timeout (Richard Zhao, 2016-01-10)

  It detects a dead semaphore acquire. The worst case is when ACQUIRE_SWITCH
  is disabled: the semaphore acquire will poll and consume full GPU
  timeslices.

  The timeout value is set to half of the channel WDT.

  Bug 1636800

  Change-Id: Ida6ccc534006a191513edf47e7b82d4b5b758684
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/928827
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
* gpu: nvgpu: preempt before adjusting fences (Deepak Nibade, 2015-12-10)

  The current sequence in gk20a_disable_channel() is:
    - disable channel in gk20a_channel_abort()
    - adjust pending fence in gk20a_channel_abort()
    - preempt channel

  But this leads to scenarios where the syncpoint has min > max value.

  Hence, to fix this, make the sequence in gk20a_disable_channel():
    - disable channel in gk20a_channel_abort()
    - preempt channel in gk20a_channel_abort()
    - adjust pending fence in gk20a_channel_abort()

  If gk20a_channel_abort() is called from another API where preemption is
  not needed, then use the channel_preempt flag and do not preempt the
  channel in those cases.

  Bug 1683059

  Change-Id: I4d46d4294cf8597ae5f05f79dfe1b95c4187f2e3
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/921290
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
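  A minimal sketch of the reordered abort path; the preempt-before-fences
  ordering is the point of the commit, while the helper names and the fence
  bookkeeping below are assumptions for illustration.

    /* Sketch: preempt the channel before forcing its pending fences, so
     * the syncpoint cannot end up with min > max. */
    void gk20a_channel_abort(struct channel_gk20a *ch, bool channel_preempt)
    {
        /* 1. take the channel off the hardware */
        gk20a_channel_disable_hw(ch);              /* assumed helper */

        /* 2. preempt it, unless the caller does not need preemption here */
        if (channel_preempt)
            gk20a_fifo_preempt(ch->g, ch);

        /* 3. only now release the pending fences (set min == max) */
        gk20a_channel_release_pending_fences(ch);  /* assumed helper */
    }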
* gpu: nvgpu: Immediate channel release (Terje Bergstrom, 2015-12-10)

  When closing a channel, disable and preempt it immediately instead of
  waiting for it to finish all work.

  Bug 1683059

  Change-Id: Ia5f5fc6a072dc3ddb1e9bf63534814ff0a60b5b4
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/836746
* gpu: nvgpu: create sync_fence only if needed (Deepak Nibade, 2015-12-08)

  Currently, we create a sync_fence (from nvhost_sync_create_fence()) for
  every submit, but not all submits request a sync_fence. Also,
  nvhost_sync_create_fence() takes about 1/3rd of the total submit path.

  Hence, to optimize, allocate a sync_fence only when the user explicitly
  asks for it using (NVGPU_SUBMIT_GPFIFO_FLAGS_FENCE_GET &&
  NVGPU_SUBMIT_GPFIFO_FLAGS_SYNC_FENCE).

  Also, in the CDE path from gk20a_prepare_compressible_read(), we reuse an
  existing fence stored in "state", and that can result in no sync_fence_fd
  being returned when the user asked for it. Hence, force allocation of the
  sync_fence when the job submission comes from the CDE path.

  Bug 200141116

  Change-Id: Ia921701bf0e2432d6b8a5e8b7d91160e7f52db1e
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/812845
  (cherry picked from commit 5fd47015eeed00352cc8473eff969a66c94fee98)
  Reviewed-on: http://git-master/r/837662
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>
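  A small sketch of the flag check implied above; the two submit flags are
  named in the commit, while the helper name and the force_need_sync_fence
  parameter used for the CDE override are assumptions.

    /* Sketch: only pay for nvhost_sync_create_fence() when userspace asked
     * for a sync FD, or when the CDE path forces it. */
    static bool gk20a_submit_needs_sync_fence(u32 flags,
                                              bool force_need_sync_fence)
    {
        if (force_need_sync_fence)
            return true;    /* CDE submissions always get a sync_fence */

        return (flags & NVGPU_SUBMIT_GPFIFO_FLAGS_FENCE_GET) &&
               (flags & NVGPU_SUBMIT_GPFIFO_FLAGS_SYNC_FENCE);
    }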
* gpu: nvgpu: IOCTL to disable watchdog per-channel (Deepak Nibade, 2015-11-30)

  Add IOCTL NVGPU_IOCTL_CHANNEL_WDT to disable/enable the watchdog
  per-channel.

  Also, if the watchdog is disabled, we currently schedule the worker with
  the MAX timeout. Instead of this, do not schedule any worker if the
  watchdog is disabled.

  Bug 1683059
  Bug 1700277

  Change-Id: I7f6bec84adeedb74e014ed6d1471317b854df84c
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/837962
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
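  A hypothetical userspace sketch of calling the new IOCTL; only the IOCTL
  name comes from the commit, and the argument struct, its field, and the
  enable/disable values shown here are assumptions (check the nvgpu uapi
  header for the real layout).

    #include <linux/types.h>
    #include <sys/ioctl.h>

    /* Assumed argument layout for NVGPU_IOCTL_CHANNEL_WDT. */
    struct nvgpu_channel_wdt_args {
        __u32 wdt_status;   /* assumed: 1 = enable, 2 = disable */
    };

    static int channel_set_wdt(int channel_fd, __u32 wdt_status)
    {
        struct nvgpu_channel_wdt_args args = { .wdt_status = wdt_status };

        return ioctl(channel_fd, NVGPU_IOCTL_CHANNEL_WDT, &args);
    }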
* gpu: nvgpu: rework private command buffer free path (Deepak Nibade, 2015-11-23)

  We currently allocate private command buffers (wait_cmd and incr_cmd)
  before submitting the job, but we never free them explicitly. When the
  private command queue of the channel is full, we then try to
  recycle/remove free command buffers. But this recycling happens during the
  submit path, and hence that particular submit takes much longer.

  Rework this as below:
    - add references to the command buffers to the job structure
    - when the job completes, free the command buffers explicitly
    - remove the code to recycle buffers, since it should not be needed now

  Note that command buffers need to be freed in the order of their
  allocation. Ensure this with an error print before freeing a command
  buffer entry.

  Bug 200141116
  Bug 1698667

  Change-Id: Id4b69429d7ad966307e0d122a71ad55076684307
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/827638
  (cherry picked from commit c6cefd69b71c9b70d6df5343b13dfcfb3fa99598)
  Reviewed-on: http://git-master/r/835802
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
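  A sketch of the completion-time cleanup described above; the job struct
  fields and the free helper are assumed names, shown only to illustrate
  where the explicit frees now happen.

    /* Sketch: release the per-job private command buffers when the job
     * completes, instead of recycling them lazily on the submit path.
     * Buffers must be freed in allocation order. */
    static void gk20a_channel_job_cleanup(struct channel_gk20a *c,
                                          struct channel_gk20a_job *job)
    {
        if (job->wait_cmd)
            gk20a_free_priv_cmdbuf(c, job->wait_cmd);   /* assumed helper */
        if (job->incr_cmd)
            gk20a_free_priv_cmdbuf(c, job->incr_cmd);
    }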
* gpu: nvgpu: remove temporary gpfifo allocation in submit path (Deepak Nibade, 2015-11-23)

  In the GPU job submit path gk20a_ioctl_channel_submit_gpfifo(), we
  currently allocate a temporary gpfifo, copy the user space gpfifo content
  into this temporary buffer, and then copy the temp buffer content into the
  channel's gpfifo. The allocation/copy/free of the temporary buffer adds
  overhead.

  Rewrite this sequence such that gk20a_submit_channel_gpfifo() can receive
  either a pre-filled gpfifo or a pointer to the user provided args. Then we
  can directly copy the user provided gpfifo into the channel's gpfifo.

  Also, if command buffer tracing is enabled, we still need to copy the user
  provided gpfifo into a temporary buffer for reading, but that should not
  cause overhead in real world use cases.

  Bug 200141116

  Change-Id: I7166c9271da2694059da9853ab8839e98457b941
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/823386
  (cherry picked from commit 3e0702db006c262dd8737a567b8e06f7ff005e2c)
  Reviewed-on: http://git-master/r/835799
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: IOCTL to set TSG timeslice (Deepak Nibade, 2015-11-03)

  Add new IOCTL NVGPU_IOCTL_TSG_SET_PRIORITY to allow setting the timeslice
  for an entire TSG. Return an error from the channel specific
  IOCTL_CHANNEL_SET_PRIORITY if the channel is part of a TSG.

  Separate out the API gk20a_channel_get_timescale_from_timeslice() to get
  timeslice_timeout and scale from the timeslice period. Use this API to get
  timeslice_timeout and scale for the TSG and store them in the tsg_gk20a
  structure. Then trigger a runlist update so that the new timeslice values
  are re-written to the runlist for the TSG.

  Bug 200146615

  Change-Id: I555467d034f81b372b31372f0835d72b1c159508
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/824206
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Get rid of legacy gpfifo type (Janne Hellsten, 2015-10-27)

  Get rid of the duplicate gpfifo struct to emphasize the fact that
  nvgpu_gpfifo is the only memory layout for gpfifo entries that works; this
  is the same layout that the HW uses.

  Also, add a local pointer to the gpfifo memory in
  gk20a_submit_channel_gpfifo to get rid of repeated typecasts.

  Bug 1592391
  Bug 1550886

  Change-Id: I5432859ef8e7c1aab5907e44098994d7bb807f50
  Signed-off-by: Janne Hellsten <jhellsten@nvidia.com>
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/677341
  (cherry picked from commit 724c8c6228af81dd440e825bddf545dd6b2b8bd7)
  Reviewed-on: http://git-master/r/822548
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
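  For reference, a sketch of the gpfifo entry layout the commit refers to;
  the field names and comments below are an assumption about the nvgpu uapi
  header, not an authoritative definition.

    /* One 8-byte gpfifo entry, in the layout the hardware consumes. */
    struct nvgpu_gpfifo {
        __u32 entry0;   /* assumed: low bits of the pushbuffer GPU VA */
        __u32 entry1;   /* assumed: VA high bits, segment length, flags */
    };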
* gpu: nvgpu: Protect sync by an own lock (Terje Bergstrom, 2015-10-22)

  Protect creation and deletion of sync by its own mutex. This prevents a
  deadlock in channel abort when the abort is called from the submit path.

  Bug 200147887

  Change-Id: I5d6308b773c1d1a6a89d4590e2e74c74d691f79d
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/821127
* gpu: nvgpu: restart timer instead of cancel (Deepak Nibade, 2015-10-20)

  In gk20a_fifo_handle_sched_error(), we currently cancel the timeout on all
  the channels. But this could cause us to miss one of the stuck channels;
  hence, instead of cancelling, restart the timeout on any channel where it
  is already active.

  Bug 200133289

  Change-Id: I40e7e0e5394911fc110ab6fde39592b885dfaf7d
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/816133
  Reviewed-by: Ishan Mittal <imittal@nvidia.com>
  Tested-by: Ishan Mittal <imittal@nvidia.com>
* gpu: nvgpu: cancel all wdt timeouts while handling SCHED errors (Deepak Nibade, 2015-10-07)

  A SCHED error might cause multiple channels' watchdogs to trigger
  simultaneously. Hence, to avoid this conflict, cancel the watchdog timeout
  on all channels before recovering from SCHED errors.

  Also, define the API gk20a_channel_timeout_stop_all_channels() to cancel
  the wdt timeout on all channels.

  Bug 200133289

  Change-Id: I8324c397891f0a711327b77d0677cd6718af6d01
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/810959
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
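  A sketch of what the new helper could look like; only its name comes from
  the commit, and the fifo/channel fields and timeout bookkeeping below are
  assumptions.

    /* Sketch: cancel the per-channel watchdog on every channel before
     * starting SCHED error recovery. */
    void gk20a_channel_timeout_stop_all_channels(struct gk20a *g)
    {
        u32 chid;

        for (chid = 0; chid < g->fifo.num_channels; chid++) {
            struct channel_gk20a *ch = &g->fifo.channel[chid];

            mutex_lock(&ch->timeout.lock);          /* assumed field names */
            if (ch->timeout.initialized)
                cancel_delayed_work(&ch->timeout.wq);
            mutex_unlock(&ch->timeout.lock);
        }
    }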
* gpu: nvgpu: make wdt timeout per-platform (Deepak Nibade, 2015-10-07)

  The channel watchdog timeout is currently set to a constant value of 5 s.
  Make this timeout platform specific and set it to 5 s for gm20b and 7 s
  for gk20a.

  Bug 200133289

  Change-Id: I6e7f0fed93a8d5b197ae46807131311196c6636f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/810956
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
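  A sketch of how the per-platform value could be carried; the field name
  shown here is an assumption.

    /* Assumed platform-data field holding the per-chip watchdog timeout. */
    struct gk20a_platform {
        /* ... existing platform data ... */
        u32 ch_wdt_timeout_ms;   /* e.g. 7000 for gk20a, 5000 for gm20b */
    };

  The watchdog arming path would then read this value from platform data
  instead of using a hard-coded constant.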
* gpu: nvgpu: Add CDE bits in FECS header (sujeet baranwal, 2015-09-29)

  In the case of a CDE channel, the T1 (Tex) unit needs to be promoted to
  128B aligned accesses, otherwise it causes a HW deadlock. The GPU driver
  makes changes in the FECS header, which FECS uses to configure the T1
  promotions to aligned 128B accesses.

  Bug 200096226

  Change-Id: I8a8deaf6fb91f4bbceacd491db7eb6f7bca5001b
  Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
  Reviewed-on: http://git-master/r/804625
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: implement per-channel watchdog (Deepak Nibade, 2015-09-28)

  Implement a per-channel watchdog/timer with the following rules:
    - start the timer when submitting the first job on a channel, or if no
      timer is already running
    - cancel the timer when a job completes
    - re-start the timer if there is any incomplete job left in the
      channel's queue
    - trigger the appropriate recovery method as part of the timeout
      handling mechanism

  Handle the timeout as follows:
    - get the timed out channel and job data
    - disable activity on all engines
    - check if the fence is really pending
    - get information on the failing engine
    - if no engine is failing, just abort the channel
    - if an engine is failing, trigger the recovery

  Also, add a flag "ch_wdt_enabled" to enable/disable the channel watchdog
  mechanism. The watchdog can also be disabled using the global flag
  "timeouts_enabled".

  Set the watchdog time to 5 s using the macro
  NVGPU_CHANNEL_WATCHDOG_DEFAULT_TIMEOUT_MS.

  Bug 200133289

  Change-Id: I401cf14dd34a210bc429f31bd5216a361edf1237
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/797072
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
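  A minimal sketch of the arming rule above, assuming a delayed-work based
  timer; the bookkeeping fields and the per-channel wdt_enabled flag are
  assumptions (the commit itself names the flags ch_wdt_enabled and
  timeouts_enabled).

    /* Sketch: arm the watchdog only when enabled and not already running. */
    static void gk20a_channel_timeout_start(struct channel_gk20a *ch,
                                            struct channel_gk20a_job *job)
    {
        if (!ch->g->timeouts_enabled || !ch->wdt_enabled)
            return;                       /* watchdog disabled */

        if (ch->timeout.initialized)
            return;                       /* a timer is already running */

        ch->timeout.job = job;
        ch->timeout.initialized = true;
        schedule_delayed_work(&ch->timeout.wq,
            msecs_to_jiffies(NVGPU_CHANNEL_WATCHDOG_DEFAULT_TIMEOUT_MS));
    }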
* Revert "gpu: nvgpu: Add CDE bits in FECS header"Terje Bergstrom2015-09-24
| | | | | | | | This reverts commit 882975f7f1b4e050be79b0a047a2daa8b53a9187. Change-Id: I4940fc9f7a837840be1ea8e42d58d603235d88d5 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/804616
* gpu: nvgpu: Add CDE bits in FECS header (sujeet baranwal, 2015-09-24)

  In the case of a CDE channel, the T1 (Tex) unit needs to be promoted to
  128B aligned accesses, otherwise it causes a HW deadlock. The GPU driver
  makes changes in the FECS header, which FECS uses to configure the T1
  promotions to aligned 128B accesses.

  Bug 200096226

  Change-Id: Ic006b2c7035bbeabe1081aeed968a6c6d11f9995
  Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
  Reviewed-on: http://git-master/r/802327
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add per-channel refcounting (Konsta Holtta, 2015-06-09)

  Add reference counting for channels, and wait for the reference count to
  reach 0 in gk20a_channel_free() before actually freeing the channel.

  Also, change free channel tracking a bit by employing a list of free
  channels, which simplifies the procedure of finding available channels
  with reference counting.

  Each use of a channel must have a reference taken before use or held by
  the caller. Taking a reference of a wild channel pointer may fail if the
  channel is either not opened or in the process of being closed. Also, add
  safeguards against accidental use of closed channels; specifically, set
  ch->g = NULL in channel free. This will make it obvious if a freed channel
  is attempted to be used.

  The last user of a channel might be the deferred interrupt handler, so
  wait for deferred interrupts to be processed twice in the channel free
  procedure: once for providing last notifications to the channel and once
  to make sure there are no stale pointers left after referencing the
  channel has been denied.

  Finally, fix some races in the channel and TSG force reset IOCTL path by
  pausing the channel scheduler in gk20a_fifo_recover_ch() and
  gk20a_fifo_recover_tsg() while the affected engines are identified, the
  appropriate MMU faults triggered, and the MMU faults handled. In this
  case, make sure that the MMU fault does not attempt to query the hardware
  about the failing channel or TSG ids. This should make channel recovery
  safer also in the regular (i.e., not in the interrupt handler) context.

  Bug 1530226
  Bug 1597493
  Bug 1625901
  Bug 200076344
  Bug 200071810

  Change-Id: Ib274876908e18219c64ea41e50ca443df81d957b
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: http://git-master/r/448463
  (cherry picked from commit 3f03aeae64ef2af4829e06f5f63062e8ebd21353)
  Reviewed-on: http://git-master/r/755147
  Reviewed-by: Automatic_Commit_Validation_User
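  A sketch of the get/put pair the commit describes; the lock and field
  names are assumptions, shown only to illustrate why taking a reference on
  a wild channel pointer can fail.

    /* Sketch: a reference can only be taken while the channel is open and
     * not yet being closed. */
    struct channel_gk20a *gk20a_channel_get(struct channel_gk20a *ch)
    {
        spin_lock(&ch->ref_obtain_lock);            /* assumed lock name */

        if (!ch->referenceable)
            ch = NULL;                  /* not opened, or being closed */
        else
            atomic_inc(&ch->ref_count);

        spin_unlock(&ch->ref_obtain_lock);
        return ch;
    }

    void gk20a_channel_put(struct channel_gk20a *ch)
    {
        atomic_dec(&ch->ref_count);
        /* the free path waits here for the count to reach zero */
        wake_up_all(&ch->ref_count_dec_wq);
    }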
* gpu: nvgpu: cyclestats mode E snapshots support (Leonid Moiseichuk, 2015-06-06)

  This is the kernel supporting code for cyclestats mode E. Cyclestats mode
  E is implemented following the Windows design in user space and requires
  the following operations to be implemented:
    - attach a client to the shared hardware buffer of a device
    - detach a client from the shared hardware buffer
    - flush, i.e. copy available data from the hardware buffer to private
      client buffers according to the perfmon IDs assigned to clients
    - perfmon ID management for user-space clients
    - a NVGPU_GPU_FLAGS_SUPPORT_CYCLE_STATS_SNAPSHOT capability added

  Bug 1573150

  Change-Id: I9e09f0fbb2be5a95c47e6d80a2e23fa839b46f9a
  Signed-off-by: Leonid Moiseichuk <lmoiseichuk@nvidia.com>
  Reviewed-on: http://git-master/r/740653
  (cherry picked from commit 79fe89fd4cea39d8ab9dbef0558cd806ddfda87f)
  Reviewed-on: http://git-master/r/753274
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu:nvgpu: update channel_setup_ramfc interface (Seshendra Gadagottu, 2015-06-02)

  Pass a flags parameter to channel_setup_ramfc for indicating
  nvgpu_alloc_gpfifo_args characteristics.

  Bug 1645628

  Change-Id: Ia40b37c5c7b208d459aa84f1b022036dd5e1b599
  Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: http://git-master/r/744526
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Implement common allocator and mem_desc (Terje Bergstrom, 2015-04-04)

  Introduce mem_desc, which holds all information needed for a buffer.
  Implement helper functions for allocation and freeing that use this data
  type.

  Change-Id: I82c88595d058d4fb8c5c5fbf19d13269e48e422f
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/712699
* gpu: nvgpu: protect channel ioctls with a mutex (Konsta Holtta, 2015-04-04)

  Add a big mutex for protecting the channel during ioctls, in case the
  userspace uses the same channel from several threads at once. The lock is
  taken during all operations except CHANNEL_WAIT, which could deadlock.

  Bug 1603482

  Change-Id: Ibed962eadc9f00645abd54413dde9aaee00377ab
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/678871
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add open channel ioctl to ctrl node (Konsta Holtta, 2015-04-04)

  Add the ioctl to open a new gpu channel also to the control node, in
  addition to the current open ioctl in the channel node, for improved
  process startup performance. The new channel fd creation is refactored to
  a separate function which is called from both the ctrl and channel ioctls.

  Bug 1604952

  Change-Id: I3357ceec694c0e6d7a85807183884324cb725d3a
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/679516
  Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Fix/HACK for v3.18 (Dan Willemsen, 2015-03-18)

  Signed-off-by: Dan Willemsen <dwillemsen@nvidia.com>
* gpu: nvgpu: Generic mem_desc & allocation (Terje Bergstrom, 2015-03-18)

  Make mem_desc a generic container for buffers. Add functions for
  allocating and mapping buffers to an address space which store their data
  in mem_desc.

  Change-Id: I031643442c6fd41f5e7222fe9b7bfcaf9b784db5
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/660908
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
* gpu: nvgpu: protect channel update callback access (Konsta Holtta, 2015-03-18)

  Protect against callback races from spurious gk20a channel updates by
  testing whether the channel update callback still exists when in the
  scheduled work (instead of only when scheduling the work to the queue),
  and by canceling the work when the channel is freed. Protect access to the
  callback and its data by accessing them together inside spinlock-protected
  regions.

  Bug 200051384

  Change-Id: Ib4e1571c35f662195e1dec1e362df32ddc099eb3
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/592026
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: kernel support for suspending/resuming SMs (sujeet baranwal, 2015-03-18)

  Kernel support for allowing a GPU debugger to suspend and resume SMs.
  Invocation of "suspend" on a given channel will suspend all SMs if the
  channel is resident, else remove the channel from the runlist. Similarly,
  "resume" will either resume all SMs if the channel was resident, or
  re-enable the channel in the runlist.

  Change-Id: I3b4ae21dc1b91c1059c828ec6db8125f8a0ce194
  Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
  Signed-off-by: Mayank Kaushik <mkaushik@nvidia.com>
  Reviewed-on: http://git-master/r/552115
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add update callback to gk20a channel (Konsta Holtta, 2015-03-18)

  Add support for a callback function with a user data pointer to be
  scheduled from the end of gk20a_channel_update. The function and its
  private data are supplied when opening a new channel.

  Change-Id: Ib6b408855ea60d46a6a114a69c01904703019572
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/552014
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
  Tested-by: Arto Merilainen <amerilainen@nvidia.com>
* gpu: nvgpu: create new nvgpu ioctl header (Konsta Holtta, 2015-03-18)

  Move nvgpu ioctls from the many user space interface headers to a new
  single nvgpu.h header under include/uapi. No new code or replaced names
  are introduced; this change only moves the definitions and changes include
  directives accordingly.

  Bug 1434573

  Change-Id: I4d02415148e437a4e3edad221e08785fac377e91
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/542651
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: rename gpu ioctls and structs to nvgpu (Konsta Holtta, 2015-03-18)

  To help remove the nvhost dependency from nvgpu, rename ioctl defines and
  structures used by nvgpu such that nvhost is replaced by nvgpu. Duplicate
  some structures as needed. Update header guards and such accordingly.

  Change-Id: Ifc3a867713072bae70256502735583ab38381877
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/542620
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: implement poll() for semaphores (Konsta Holtta, 2015-03-18)

  Add a poll interface and control ioctls for waiting for GPU job completion
  via semaphores. Poll on a gk20a channel file waits for events from pending
  (stalling) semaphore interrupts of that channel. New ioctls enable and
  disable the events, and clear a single interrupt event so that the next
  poll doesn't wake up for it again.

  Bug 1528781

  Change-Id: I5c6238966b5d0900c8ab263c6a7f8f2611901f33
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/497750
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
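  A sketch of what the channel poll() handler could look like under this
  scheme; the wait queue and event bookkeeping fields are assumptions.

    /* Sketch: report POLLIN/POLLPRI when semaphore events are enabled and
     * at least one semaphore interrupt is pending for this channel. */
    unsigned int gk20a_channel_poll(struct file *filep, poll_table *wait)
    {
        struct channel_gk20a *ch = filep->private_data;
        unsigned int mask = 0;

        poll_wait(filep, &ch->semaphore_wq, wait);   /* assumed wait queue */

        mutex_lock(&ch->poll_events.lock);           /* assumed field names */
        if (ch->poll_events.events_enabled &&
            ch->poll_events.num_pending_events > 0)
            mask = POLLPRI | POLLIN;
        mutex_unlock(&ch->poll_events.lock);

        return mask;
    }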
* gpu: nvgpu: support gk20a virtualization (Aingara Paramakuru, 2015-03-18)

  The nvgpu driver now supports using the Tegra graphics virtualization
  interfaces to support gk20a in a virtualized environment.

  Bug 1509608

  Change-Id: I6ede15ee7bf0b0ad8a13e8eb5f557c3516ead676
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/440122
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add gk20a_fence type (Lauri Peltonen, 2015-03-18)

  When moving compression state tracking and compbit management ops to the
  kernel, we need to attach a fence to dma-buf metadata, along with the
  compbit state.

  To make in-kernel fence management easier, introduce a new gk20a_fence
  abstraction. A gk20a_fence may be backed by a semaphore or a syncpoint
  (id, value) pair. If the kernel is configured with CONFIG_SYNC, it will
  also contain a sync_fence. The gk20a_fence can easily be converted back to
  a syncpoint (id, value) pair or sync FD when we need to return it to user
  space.

  Change gk20a_submit_channel_gpfifo to return a gk20a_fence instead of
  nvhost_fence. This is to facilitate work submission initiated from the
  kernel.

  Bug 1509620

  Change-Id: I6154764a279dba83f5e91ba9e0cb5e227ca08e1b
  Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
  Reviewed-on: http://git-master/r/439846
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
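  A sketch of the abstraction as described above; apart from the backings
  named in the commit (semaphore, syncpoint id/value pair, optional
  sync_fence under CONFIG_SYNC), the field and type names are assumptions.

    /* Sketch of a fence that may be backed by either a semaphore or a
     * syncpoint, plus an optional sync_fence. */
    struct gk20a_fence {
        struct kref ref;                       /* shared lifetime */

        /* set when backed by a semaphore */
        struct gk20a_semaphore *semaphore;     /* assumed type name */

        /* set when backed by a syncpoint */
        u32 syncpt_id;
        u32 syncpt_value;

    #ifdef CONFIG_SYNC
        struct sync_fence *sync_fence;         /* for returning a sync FD */
    #endif
    };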
* gpu: nvgpu: gk20a: Allow in-kernel channel alloc (Arto Merilainen, 2015-03-18)

  This patch modifies the channel interfaces to allow allocating a channel
  for kernel use. This is needed if we want to run a shader from kernel
  space.

  Bug 1409151

  Change-Id: I3544186bb1541120f85e01a19de106ef011c1b11
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/440261
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com>
  GVS: Gerrit_Virtual_Submit
* Revert "gpu: nvgpu: Dump offending push buffer fragment"Arto Merilainen2015-03-18
| | | | | | | | | | | | | | | | | | | Channel and gpfifo allocations are entirely separated from each other, however, the code here assumes that active channel means that the channel also has a gpfifo. This reverts commit a24602f094380539788696d1b1567a4f4d914b17 which added gpfifo dump. Changing debug dumping to be safe requires refactoring the channel release code to use proper locking. Bug 1530226 Change-Id: I2fb02542a17dd56a0a9ce732b327e34b85ade8b9 Signed-off-by: Arto Merilainen <amerilainen@nvidia.com> Reviewed-on: http://git-master/r/434038 Reviewed-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-by: Shridhar Rasal <srasal@nvidia.com> Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
* gpu: nvgpu: add TSG support for engine context (Deepak Nibade, 2015-03-18)

  All channels in a TSG need to share the same engine context, i.e. the
  pointer in RAMFC of all channels in a TSG must point to the same
  NV_RAMIN_GR_WFI_TARGET.

  To get this, add a pointer to gr_ctx inside the TSG struct so that the TSG
  can maintain its own unique gr_ctx. Also, change the type of gr_ctx in a
  channel to a pointer variable so that if the channel is part of a TSG it
  can point to the TSG's gr_ctx, otherwise it will point to its own gr_ctx.

  In gk20a_alloc_obj_ctx(), allocate gr_ctx as below:
    1) If the channel is not part of any TSG, allocate its own gr_ctx buffer
       if it is not already allocated.
    2) If the channel is part of a TSG, check if the TSG has already
       allocated a gr_ctx (as part of the TSG).
       - If yes, the channel's gr_ctx will point to that of the TSG.
       - If not, then the channel is the first to be bound to this TSG; in
         this case allocate a new gr_ctx on the TSG first and then make the
         channel's gr_ctx point to it.

  Also, gr_ctx will be released as below:
    1) If the channel is not part of a TSG, it will be released when the
       channel is closed.
    2) Otherwise, it will be released when the TSG itself is closed.

  Bug 1470692

  Change-Id: Id347217d5b462e0e972cd3d79d17795b37034a50
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/417065
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
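  A condensed sketch of the allocation decision described above for
  gk20a_alloc_obj_ctx(); the helper and field names are assumptions used for
  illustration.

    /* Sketch: standalone channels own their gr_ctx, TSG channels share the
     * TSG's gr_ctx, allocated by whichever channel binds first. */
    if (!gk20a_is_channel_marked_as_tsg(c)) {
        if (!c->ch_ctx.gr_ctx)
            err = gr_gk20a_alloc_channel_gr_ctx(g, c);   /* assumed helper */
    } else {
        struct tsg_gk20a *tsg = &g->fifo.tsg[c->tsgid];

        if (!tsg->tsg_gr_ctx)
            err = gr_gk20a_alloc_tsg_gr_ctx(g, tsg);     /* first bind */
        if (!err)
            c->ch_ctx.gr_ctx = tsg->tsg_gr_ctx;  /* shared engine context */
    }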
* gpu: nvgpu: add kernel APIs for TSG support (Deepak Nibade, 2015-03-18)

  Add support to create/destroy TSGs using the node "/dev/nvhost-tsg-gpu".

  Provide the below IOCTLs to bind/unbind channels to/from TSGs:
    NVGPU_TSG_IOCTL_BIND_CHANNEL
    NVGPU_TSG_IOCTL_UNBIND_CHANNEL

  Bug 1470692

  Change-Id: Iaf9f16a522379eb943906624548f8d28fc6d4486
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/416610
* gpu: nvgpu: Dump offending push buffer fragment (Terje Bergstrom, 2015-03-18)

  When outputting the debug dump, print the contents of the current push
  buffer segment. Also change the debug dump to use pr_cont when applicable,
  and dump state before recovering in case the channel was not loaded to an
  engine.

  Bug 1498688

  Change-Id: I5ca12f64bae8f12333d82350278c700645d5007e
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/422198
* gpu: nvgpu: Support semaphore sync when aborting jobs (Lauri Peltonen, 2015-03-18)

  When aborting jobs on channel error situations, we manually set the
  channel syncpoint's min == max in gk20a_disable_channel_no_update. Nvhost
  will notice this manual syncpoint increment, and will call back to
  gk20a_channel_update, which will clean up the job.

  With semaphore synchronization, we don't have anybody calling back to
  gk20a_channel_update, so we need to call it ourselves. Release job
  semaphores (the equivalent of set_min_eq_max) in
  gk20a_disable_channel_no_update, and if any semaphores were released, call
  gk20a_channel_update afterwards.

  Because we are actually calling gk20a_channel_update in some situations,
  gk20a_disable_channel_no_update is no longer an appropriate name for the
  function. Rename it to gk20a_channel_abort.

  Bug 1450122

  Change-Id: I1267b099a5778041cbc8e91b7184844812145b93
  Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
  Reviewed-on: http://git-master/r/422161
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add semaphore based gk20a_channel_sync (Lauri Peltonen, 2015-03-18)

  Add a semaphore implementation of the gk20a_channel_sync interface.

  Each channel has one semaphore pool, which is mapped as read-write to the
  channel vm. We allocate one or two semaphores from the pool for each
  submit.

  The first semaphore is only needed if we need to wait for an opaque sync
  fd. In that case, we allocate the semaphore and ask the GPU to wait for
  its value to become 1 (semaphore acquire method). We also queue kernel
  work that waits on the fence fd and subsequently releases the semaphore
  (sets its value to 1) so that the command buffer can proceed.

  The second semaphore is used on every submit and is used for work
  completion tracking. The GPU sets its value to 1 when the command buffer
  has been processed.

  The channel jobs need to hold references to both semaphores so that their
  backing semaphore pool slots are not reused while the job is in flight.
  Therefore gk20a_channel_fence will keep a reference to the semaphore that
  it represents (channel fences are stored in the job structure). This means
  that we must diligently close and dup the gk20a_channel_fence objects to
  avoid leaking semaphores.

  Bug 1450122
  Bug 1445450

  Change-Id: Ib61091a1b7632fa36efe0289011040ef7c4ae8f8
  Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
  Reviewed-on: http://git-master/r/374844
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: gk20a: export wait_channel_idle() (Deepak Nibade, 2015-03-18)

  - Export the gk20a_wait_channel_idle() function from channel_gk20a.h
  - Also, return error -EBUSY from this function when the channel is found
    to be not idle

  Bug 1487804

  Change-Id: Ia7425e9b1332260ee9a53dca55ab07541f2755a9
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/412059
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: gk20a: add submit_lock (Deepak Nibade, 2015-03-18)

  Add a submit mutex lock to avoid race conditions between submitting a job,
  removing a job, and submitting WFI.

  With this lock, make the below operations atomic:

  during submit_gpfifo():
    1. getting a new syncpt
    2. inserting the syncpt increment
    3. submitting the gpfifo
    4. setting the job completion interrupt

  during submit_wfi():
    1. getting a new syncpt
    2. inserting the syncpt increment when idle

  during channel_update():
    1. checking submitted job completion
    2. freeing the job if it is completed

  Bug 1305024

  Change-Id: I0e3c0b8906d83fd59642344626ffdf24fad2aaab
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/397670
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Make trigger mmu fault GPU specific (Terje Bergstrom, 2015-03-18)

  Add an abstraction for triggering a fake MMU fault, and a gk20a
  implementation. Also add recovery to the FE hardware warning exception to
  make testing easier.

  Bug 1495967

  Change-Id: I6703cff37900a4c4592023423f9c0b31a8928db2
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add NVIDIA GPU Driver (Arto Merilainen, 2015-03-18)

  This patch moves the NVIDIA GPU driver to a new location.

  Bug 1482562

  Change-Id: I24293810b9d0f1504fd9be00135e21dad656ccb6
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/383722
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>