path: root/drivers/gpu/nvgpu/gk20a/fifo_gk20a.h
...
* gpu: nvgpu: Add trace and debugfs for sched params (Thomas Fleury, 2016-05-05)

  JIRA EVLR-244
  JIRA EVLR-318
  Change-Id: Ie95f42212dadcf2d0c1737eeb28812afb03b712f
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1120603
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Ken Adams <kadams@nvidia.com>
* gpu: nvgpu: improve channel interleave support (Aingara Paramakuru, 2016-03-15)

  Previously, only "high" priority bare channels were interleaved between
  all other bare channels and TSGs. This patch decouples priority from
  interleaving and introduces three levels for interleaving a bare
  channel or TSG: high, medium, and low. The levels define the number of
  times a channel or TSG will appear on a runlist (see nvgpu.h for
  details, and the sketch after this entry).

  By default, all bare channels and TSGs are set to interleave level low.
  Userspace can then request that the interleave level be increased via
  the CHANNEL_SET_RUNLIST_INTERLEAVE ioctl (a TSG-specific ioctl will be
  added later). As timeslice settings will soon be coming from userspace,
  the default timeslice for "high" priority channels has been restored.

  JIRA VFND-1302
  Bug 1729664
  Change-Id: I178bc1cecda23f5002fec6d791e6dcaedfa05c0c
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/1014962
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
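The level-to-appearance mapping could look roughly like the sketch below. This is a minimal illustration, not the driver's code: the enum names mirror the commit's high/medium/low levels, but the appearance counts (8/4/1) are invented; the real values live in nvgpu.h.

    /* Minimal sketch of an interleave-level mapping; counts are assumed. */
    #include <stdio.h>

    enum runlist_interleave_level {
        INTERLEAVE_LEVEL_LOW = 0,  /* default for bare channels and TSGs */
        INTERLEAVE_LEVEL_MEDIUM,
        INTERLEAVE_LEVEL_HIGH,
    };

    static unsigned int appearances_for_level(enum runlist_interleave_level lvl)
    {
        switch (lvl) {
        case INTERLEAVE_LEVEL_HIGH:   return 8; /* assumed count */
        case INTERLEAVE_LEVEL_MEDIUM: return 4; /* assumed count */
        default:                      return 1; /* low: appears once */
        }
    }

    int main(void)
    {
        printf("runlist entries for a high-interleave channel: %u\n",
               appearances_for_level(INTERLEAVE_LEVEL_HIGH));
        return 0;
    }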
* gpu: nvgpu: separate API to issue preempt (Deepak Nibade, 2016-02-05)

  Export a separate API, gk20a_fifo_issue_preempt(), to issue a preempt
  request to a channel or TSG.

  Bug 200156699
  Change-Id: Ib3b097ef66a6411d75c1fe213cdbe8b1d08d3418
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/935771
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: support preprocessing of SM exceptions (Deepak Nibade, 2016-01-13)

  Support preprocessing of SM exceptions when the API pointer
  pre_process_sm_exception() is defined. Also, expose some common APIs.

  Bug 200156699
  Change-Id: I1303642c1c4403c520b62efb6fd83e95eaeb519b
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/925883
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
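The optional-hook pattern this describes can be sketched in a few lines of plain C. The gpu_ops layout, handler names, and parameters below are invented for illustration; only the check-the-pointer-then-fall-through shape reflects the commit.

    #include <stdbool.h>
    #include <stdio.h>

    struct gpu_ops {
        /* optional hook: returns true if it fully handled the exception */
        bool (*pre_process_sm_exception)(unsigned int sm_id,
                                         unsigned int status);
    };

    static void handle_sm_exception(struct gpu_ops *ops,
                                    unsigned int sm_id, unsigned int status)
    {
        if (ops->pre_process_sm_exception &&
            ops->pre_process_sm_exception(sm_id, status))
            return; /* chip-specific preprocessing consumed the exception */

        printf("default SM exception handling: sm=%u status=0x%x\n",
               sm_id, status);
    }

    static bool log_only_hook(unsigned int sm_id, unsigned int status)
    {
        printf("preprocess: sm=%u status=0x%x\n", sm_id, status);
        return false; /* fall through to the default handling */
    }

    int main(void)
    {
        struct gpu_ops ops = { .pre_process_sm_exception = log_only_hook };
        handle_sm_exception(&ops, 0, 0x10);
        return 0;
    }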
* gpu: nvgpu: add high priority channel interleave (Peter Pipkorn, 2016-01-11)

  Interleave all high priority channels between all other channels. This
  reduces the latency for high priority work when a lot of lower priority
  work is present, imposing an upper bound on the latency.

  Change the default high priority timeslice from 5.2 ms to 3.0 ms in the
  process, to prevent long-running high priority apps from hogging the
  GPU too much.

  Introduce a new debugfs node to enable/disable high priority channel
  interleaving. It is currently enabled by default.

  Add the new runlist-length-max register, used for allocating a suitably
  sized runlist. Limit the number of interleaved channels to 32.

  This change reduces the maximum time a lower priority job runs (one
  timeslice) before we check that high priority jobs are running.

  Tested with gles2_context_priority (still passes). Basic sanity testing
  was done with graphics_submit (one app at high priority), plus more
  functional testing using many parallel runs of:

    NVRM_GPU_CHANNEL_PRIORITY=3 ./gles2_expensive_draw --drawsperframe 20000 --triangles 50 --runtime 30 --finish

  plus multiple:

    NVRM_GPU_CHANNEL_PRIORITY=2 ./gles2_expensive_draw --drawsperframe 20000 --triangles 50 --runtime 30 --finish

  Prior to this change, the relative performance between high priority
  and normal priority work came down to the timeslice value. This means
  that with many low priority channels present, high priority performance
  would still drop quite a lot. With this change, high priority work gets
  roughly half the entire GPU time, so after the initial drop it is less
  likely to degrade further as more apps run on the system.

  This change is a large step towards real priority levels. It is not
  perfect and there are no guarantees on anything, but it is a step
  forward without additional CPU overhead or other complications. It will
  also serve as a baseline to judge other algorithms against. Support for
  priorities with TSG is future work, as is interleaving medium + high
  priority channels instead of just high.

  Bug 1419900
  Change-Id: I0f7d0ce83b6598fe86000577d72e14d312fdad98
  Signed-off-by: Peter Pipkorn <ppipkorn@nvidia.com>
  Reviewed-on: http://git-master/r/805961
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
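The interleaving idea can be modeled in user space as below: every low priority slot in the runlist is preceded by the full set of high priority channel ids, which is what bounds the wait for high priority work to one timeslice. A minimal sketch; the driver's actual runlist entry format and construction code differ.

    #include <stdio.h>

    #define MAX_RUNLIST 128

    /* Build an interleaved runlist: all high-prio ids before each low-prio
     * id. Returns the number of entries written to out[]. */
    static int build_runlist(const int *hi, int nhi,
                             const int *lo, int nlo, int *out)
    {
        int n = 0;
        for (int i = 0; i < nlo; i++) {
            for (int j = 0; j < nhi; j++)
                out[n++] = hi[j];
            out[n++] = lo[i];
        }
        return n;
    }

    int main(void)
    {
        int hi[] = { 10, 11 }, lo[] = { 20, 21, 22 };
        int rl[MAX_RUNLIST];
        int n = build_runlist(hi, 2, lo, 3, rl);

        for (int i = 0; i < n; i++)
            printf("%d ", rl[i]);   /* 10 11 20 10 11 21 10 11 22 */
        printf("\n");
        return 0;
    }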
* gpu: nvgpu: set aggressive_sync_destroy at runtime (Deepak Nibade, 2015-11-23)

  We currently set the "aggressive_destroy" flag, which controls sync
  object destruction, statically and for each sync object. Move this flag
  to the per-platform structure so that it can be set per platform for
  all sync objects. Also, default the flag to "false", and set it to
  "true" once more than 64 channels are in use.

  Bug 200141116
  Change-Id: I1bc271df4f468a4087a06a27c7289ee0ec3ef29c
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/822041
  (cherry picked from commit 98741e7e88066648f4f14490c76b61dbff745103)
  Reviewed-on: http://git-master/r/835800
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
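The runtime toggle reduces to a small piece of state logic, sketched below. The 64-channel threshold comes from the commit message; the structure and function names are invented.

    #include <stdbool.h>

    #define AGGRESSIVE_SYNC_DESTROY_THRESH 64u  /* from the commit message */

    struct gpu_platform {
        bool aggressive_sync_destroy;  /* one flag for all sync objects */
        unsigned int channels_in_use;
    };

    static void platform_channel_opened(struct gpu_platform *p)
    {
        if (++p->channels_in_use > AGGRESSIVE_SYNC_DESTROY_THRESH)
            p->aggressive_sync_destroy = true;
    }

    static void platform_channel_closed(struct gpu_platform *p)
    {
        p->channels_in_use--;
        /* whether to clear the flag again here is a policy choice */
    }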
* gpu: nvgpu: implement per-channel watchdog (Deepak Nibade, 2015-09-28)

  Implement a per-channel watchdog/timer with the following rules:
  - start the timer when submitting the first job on a channel, or if no
    timer is already running
  - cancel the timer when a job completes
  - restart the timer if any incomplete job is left in the channel's
    queue
  - trigger the appropriate recovery method as part of the timeout
    handling mechanism

  Handle a timeout as follows (see the sketch after this entry):
  - get the timed-out channel and its job data
  - disable activity on all engines
  - check if the fence is really pending
  - get information on the failing engine
  - if no engine is failing, just abort the channel
  - if an engine is failing, trigger the recovery

  Also, add the flag "ch_wdt_enabled" to enable/disable the channel
  watchdog mechanism. The watchdog can also be disabled using the global
  flag "timeouts_enabled". Set the watchdog timeout to 5 s using the
  macro NVGPU_CHANNEL_WATCHDOG_DEFAULT_TIMEOUT_MS.

  Bug 200133289
  Change-Id: I401cf14dd34a210bc429f31bd5216a361edf1237
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/797072
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
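The start/cancel/restart rules above amount to a small state machine; below is a minimal sketch with the timer plumbing stubbed out and only the bookkeeping shown. Names are illustrative, except the 5-second default, which the commit states.

    #include <stdbool.h>

    #define NVGPU_CHANNEL_WATCHDOG_DEFAULT_TIMEOUT_MS 5000u  /* 5 s */

    struct channel {
        unsigned int pending_jobs;
        bool wdt_running;
    };

    static void wdt_start(struct channel *ch)  { ch->wdt_running = true; }
    static void wdt_cancel(struct channel *ch) { ch->wdt_running = false; }

    static void channel_job_submitted(struct channel *ch)
    {
        ch->pending_jobs++;
        if (!ch->wdt_running)
            wdt_start(ch);  /* first job, or no timer currently running */
    }

    static void channel_job_completed(struct channel *ch)
    {
        ch->pending_jobs--;
        wdt_cancel(ch);
        if (ch->pending_jobs > 0)
            wdt_start(ch);  /* incomplete jobs remain: re-arm the timer */
    }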
* gpu: nvgpu: APIs to disable/enable all engines' activity (Deepak Nibade, 2015-09-28)

  Add the APIs below to disable/re-enable activity on all engines:
  gk20a_fifo_disable_all_engine_activity()
  gk20a_fifo_enable_all_engine_activity()

  Bug 200133289
  Change-Id: Ie01a260d587807a3c1712ee32fe870fbcb08f9cd
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/798747
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: fix runlist update timeout handling (Vijayakumar, 2015-09-16)

  Bug 1625901

  1) Disable ELPG before doing a GR reset when a runlist update times
     out.
  2) Add a mutex for GR reset to avoid multiple threads resetting GR.
  3) Protect GR reset with the FECS mutex so that no one else submits
     methods.

  Change-Id: I02993fd1eabe6875ab1c58a40a06e6c79fcdeeae
  Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
  Reviewed-on: http://git-master/r/793643
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
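The ordering these three fixes establish can be sketched with pthread mutexes standing in for the kernel's; every function body here is a stub, and only the lock nesting reflects the commit.

    #include <pthread.h>

    static pthread_mutex_t gr_reset_mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t fecs_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void elpg_disable(void) { /* stop power-gating */ }
    static void elpg_enable(void)  { /* resume power-gating */ }
    static void gr_reset_hw(void)  { /* the actual GR engine reset */ }

    static void handle_runlist_update_timeout(void)
    {
        elpg_disable();                      /* 1) ELPG off before GR reset */

        pthread_mutex_lock(&gr_reset_mutex); /* 2) one resetting thread */
        pthread_mutex_lock(&fecs_mutex);     /* 3) no FECS method submits */
        gr_reset_hw();
        pthread_mutex_unlock(&fecs_mutex);
        pthread_mutex_unlock(&gr_reset_mutex);

        elpg_enable();
    }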
* gpu: nvgpu: separate API to get failing engine data (Deepak Nibade, 2015-09-11)

  In gk20a_fifo_handle_sched_error(), we currently have a sequence to
  identify the failing engine (stuck on a context switch) and the
  corresponding failing channel, along with its type. Separate this
  sequence out into a new API, gk20a_fifo_get_failing_engine_data(), so
  that it can be reused elsewhere.

  Bug 200133289
  Change-Id: I3cef395170cf8990c014c7505c798fd6f2e37921
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/797070
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: improve sched err handling (Vijayakumar, 2015-07-17)

  Bug 200114561

  1) When handling a sched error, if the CTXSW status reads "switch",
     check the FECS mailbox register to learn whether the next or the
     current channel caused the error.
  2) Update the recovery function to use the channel id passed to it.
  3) The recovery function now passes mmu_engine_id to the MMU fault
     handler instead of fifo_engine_id.

  Change-Id: I3576cc4a90408b2f76b2c42cce19c27344531b1c
  Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
  Reviewed-on: http://git-master/r/763538
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>
* gpu: nvgpu: add per-channel refcounting (Konsta Holtta, 2015-06-09)

  Add reference counting for channels, and wait for the reference count
  to reach 0 in gk20a_channel_free() before actually freeing the channel.
  Also, change free-channel tracking a bit by employing a list of free
  channels, which simplifies the procedure of finding available channels
  with reference counting.

  Each use of a channel must have a reference taken before use or held by
  the caller. Taking a reference of a wild channel pointer may fail if
  the channel is either not opened or in the process of being closed.
  Also, add safeguards against accidental use of closed channels;
  specifically, set ch->g = NULL in channel free. This makes it obvious
  when a freed channel is attempted to be used.

  The last user of a channel might be the deferred interrupt handler, so
  wait for deferred interrupts to be processed twice in the channel free
  procedure: once to provide last notifications to the channel and once
  to make sure there are no stale pointers left after referencing the
  channel has been denied.

  Finally, fix some races in the channel and TSG force-reset IOCTL path
  by pausing the channel scheduler in gk20a_fifo_recover_ch() and
  gk20a_fifo_recover_tsg() while the affected engines are identified, the
  appropriate MMU faults triggered, and the MMU faults handled. In this
  case, make sure that the MMU fault does not attempt to query the
  hardware about the failing channel or TSG ids. This should make channel
  recovery safer also in the regular (i.e., not in the interrupt handler)
  context.

  Bug 1530226
  Bug 1597493
  Bug 1625901
  Bug 200076344
  Bug 200071810
  Change-Id: Ib274876908e18219c64ea41e50ca443df81d957b
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: http://git-master/r/448463
  (cherry picked from commit 3f03aeae64ef2af4829e06f5f63062e8ebd21353)
  Reviewed-on: http://git-master/r/755147
  Reviewed-by: Automatic_Commit_Validation_User
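The take-before-use discipline can be sketched with C11 atomics standing in for the kernel primitives. A reference count of zero means the channel is closed or closing, so a "get" on a wild pointer can fail, which is the behavior the commit describes; the struct layout is otherwise invented.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct channel {
        atomic_int refcount;  /* 0: closed or closing, gets must fail */
        void *g;              /* NULLed on free to expose stale users */
    };

    static bool channel_get(struct channel *ch)
    {
        int old = atomic_load(&ch->refcount);

        while (old > 0) {
            if (atomic_compare_exchange_weak(&ch->refcount, &old, old + 1))
                return true;   /* reference taken; channel safe to use */
        }
        return false;          /* not opened, or being closed */
    }

    static void channel_put(struct channel *ch)
    {
        if (atomic_fetch_sub(&ch->refcount, 1) == 1)
            ch->g = NULL;      /* last reference dropped: free path runs */
    }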
* gpu: nvgpu: Implement common allocator and mem_desc (Terje Bergstrom, 2015-04-04)

  Introduce mem_desc, which holds all information needed for a buffer.
  Implement helper functions for allocation and freeing that use this
  data type.

  Change-Id: I82c88595d058d4fb8c5c5fbf19d13269e48e422f
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/712699
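An illustrative shape for such a descriptor and its helpers, using plain calloc/free in place of the driver's real DMA allocation and GPU mapping; the fields and names are representative only.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    struct mem_desc {
        void *cpu_va;    /* CPU-side mapping of the buffer */
        uint64_t gpu_va; /* GPU virtual address, once mapped */
        size_t size;     /* allocation size in bytes */
    };

    static int mem_desc_alloc(struct mem_desc *mem, size_t size)
    {
        mem->cpu_va = calloc(1, size);
        if (!mem->cpu_va)
            return -1;
        mem->size = size;
        mem->gpu_va = 0;  /* a real driver would map and fill this in */
        return 0;
    }

    static void mem_desc_free(struct mem_desc *mem)
    {
        free(mem->cpu_va);
        mem->cpu_va = NULL;
        mem->gpu_va = 0;
        mem->size = 0;
    }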
* gpu: nvgpu: Per-chip PBDMA signature (Terje Bergstrom, 2015-04-04)

  The PBDMA HW signature depends on the chip.

  Change-Id: If57d721d9bb77a090f967930a1aa2037bf4a16fe
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/672922
* gpu: nvgpu: More robust recovery (Terje Bergstrom, 2015-04-04)

  Make recovery a more straightforward process. When we detect a fault,
  trigger an MMU fault, wait for it to trigger, and then complete
  recovery. Also, reset engines before aborting the channel to ensure no
  stray sync point increments can happen.

  Change-Id: Iac685db6534cb64fe62d9fb452391f43100f2999
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/709060
  (cherry picked from commit 95c62ffd9ac30a0d2eb88d033dcc6e6ff25efd6f)
  Reviewed-on: http://git-master/r/707443
* gpu: nvgpu: Retrieve intr & reset id from HW (Terje Bergstrom, 2015-03-18)

  Query the interrupt number and reset id from HW. Use the number from HW
  when enabling and detecting interrupts.

  Bug 200036089
  Bug 1567274
  Change-Id: If9cb4db79a19dcb193ba7ad9db7081f4fe1ab433
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/600988
* gpu: nvgpu: Use polling to detect runlist switch (Terje Bergstrom, 2015-03-18)

  gm20b does not send a runlist event for an updated runlist. Polling is
  the preferred method for gk20a as well.

  Bug 1555239
  Change-Id: I60de084db69f848f63451f1f3078f183ca51ba50
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/500241
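Polling for the runlist switch reduces to a bounded status-read loop, sketched below; the status accessor is a stub for the real FIFO register read, and the 1 ms poll interval is an assumption.

    #include <stdbool.h>
    #include <time.h>

    static bool runlist_update_pending(void)
    {
        return false;  /* stand-in for reading the FIFO runlist status */
    }

    static int wait_runlist_update(unsigned int timeout_ms)
    {
        struct timespec delay = { 0, 1000000L };  /* poll every 1 ms */

        while (timeout_ms--) {
            if (!runlist_update_pending())
                return 0;          /* HW switched to the new runlist */
            nanosleep(&delay, NULL);
        }
        return -1;                 /* timed out waiting for the switch */
    }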
* gpu: nvgpu: use TSG recover API (Deepak Nibade, 2015-03-18)

  Use the TSG-specific API gk20a_fifo_recover_tsg() in the following
  cases:
  - IOCTL_CHANNEL_FORCE_RESET: to force-reset a channel that is in a TSG,
    reset all of the TSG's channels
  - PBDMA intr handling: if the channel is part of a TSG, recover the
    entire TSG
  - TSG preempt failure: when a TSG preempt times out, use the TSG
    recover API

  Use the preempt_tsg() API to preempt a channel that is part of a TSG.
  Add the two generic APIs below, which take care of preempting/
  recovering either a channel or a TSG as required:
  gk20a_fifo_preempt()
  gk20a_fifo_force_reset_ch()

  Bug 1470692
  Change-Id: I8d46e252af79136be85a9a2accf8b51bd924ca8c
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/497875
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
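The generic entry point amounts to a dispatch on whether the channel belongs to a TSG; a sketch with invented types and an assumed "invalid TSG id" sentinel:

    #include <stdbool.h>

    #define INVALID_TSGID (~0u)  /* assumed sentinel for "not in a TSG" */

    struct channel {
        unsigned int hw_chid;
        unsigned int tsgid;  /* INVALID_TSGID for a bare channel */
    };

    static int fifo_preempt_channel(unsigned int chid) { return 0; /* stub */ }
    static int fifo_preempt_tsg(unsigned int tsgid)    { return 0; /* stub */ }

    static int fifo_preempt(struct channel *ch)
    {
        if (ch->tsgid != INVALID_TSGID)
            return fifo_preempt_tsg(ch->tsgid);    /* whole TSG */
        return fifo_preempt_channel(ch->hw_chid);  /* bare channel */
    }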
* gpu: nvgpu: add API to recover TSG (Deepak Nibade, 2015-03-18)

  - Add and export the API gk20a_fifo_recover_tsg() to recover a TSG.
  - If the TSG is running on any engine, trigger an MMU fault on those
    engines; otherwise, abort each channel in the TSG (see the sketch
    after this entry).
  - Modify the channel-specific API engines_on_ch() into a generic
    engines_on_id(), which takes an ID and a flag specifying whether the
    ID is for a channel or a TSG, and returns the engines running on that
    ID.
  - Modify the channel-specific API get_faulty_channel() into a generic
    get_faulty_id_type(), which takes pointers to the ID and the type of
    ID (either a regular channel or a TSG).
  - Remove the runlist update from recover_ch(), since there is no need
    to touch the runlist during recovery.
  - Set the error notifier first, and only then abort the channels, in
    the TSG recovery path.
  - Also, add the accessors needed to read the engine status type as TSG.

  Bug 1470692
  Change-Id: I7137f611f80916b3d256d4b0dc6e5cf1e93eef6f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/497873
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
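The recovery decision in the second bullet can be sketched as below; every helper is a stub and the types are invented, but the branch structure (fault the engines if any are running the TSG, otherwise abort its channels) follows the commit.

    #include <stdbool.h>

    static unsigned long engines_on_id(unsigned int id, bool is_tsg)
    {
        (void)id; (void)is_tsg;
        return 0;  /* stand-in: bitmask of engines running this id */
    }

    static void trigger_mmu_fault_on(unsigned long engine_mask)
    {
        (void)engine_mask;  /* fault the engines to unstick them */
    }

    static void abort_tsg_channels(unsigned int tsgid)
    {
        (void)tsgid;  /* set error notifiers first, then abort */
    }

    static void fifo_recover_tsg(unsigned int tsgid)
    {
        unsigned long engines = engines_on_id(tsgid, true);

        if (engines)
            trigger_mmu_fault_on(engines);  /* TSG resident on engines */
        else
            abort_tsg_channels(tsgid);      /* nothing resident: abort */
    }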
* gpu: nvgpu: add API to preempt TSG (Deepak Nibade, 2015-03-18)

  Add the API gk20a_fifo_preempt_tsg(), which takes the ID of a TSG and
  preempts it.

  Bug 1514064
  Bug 1470692
  Change-Id: I1d52c1dd7a9aecc1314b0f223fe4eedecc033629
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/495583
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: do not touch runlist during recovery (Deepak Nibade, 2015-03-18)

  Currently we clear the runlist and re-create it in scheduled work
  during the fifo recovery process, but we can postpone this runlist
  regeneration until the channel is closed.

  Hence, remove the runlist locks and regeneration from the
  handle_mmu_fault() methods. Instead, disable gr fifo access at the
  start of recovery and re-enable it at the end of the recovery process.
  Also, delete the scheduled work to re-create the runlist. Re-enable
  ELPG and fifo access in finish_mmu_fault_handling() itself.

  Bug 1470692
  Change-Id: I705a6a5236734c7207a01d9a9fa9eca22bdbe7eb
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/449225
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Wait for idle via FIFO registers (Terje Bergstrom, 2015-03-18)

  Wait for engine idle via FIFO's engine status instead of submitting a
  WFI to the channel. Submitting a WFI and waiting is not robust, and the
  wait might invoke a debug dump, which cannot be done while powering
  down.

  Bug 1499214
  Change-Id: I4d52e8558e1a862ad4292036594d81ebfbd5f36b
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/432151
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>
  Tested-by: Sachin Nikam <snikam@nvidia.com>
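The register-polling approach reduces to a loop like the one below, where the busy check stands in for reading the FIFO engine status; the retry count and overall structure are assumptions.

    #include <stdbool.h>

    static bool engine_busy(unsigned int engine_id)
    {
        (void)engine_id;
        return false;  /* stand-in for the FIFO engine-status read */
    }

    static int wait_engine_idle(unsigned int engine_id, unsigned int retries)
    {
        do {
            if (!engine_busy(engine_id))
                return 0;  /* idle: safe to continue powering down */
        } while (retries--);

        return -1;         /* still busy; caller decides how to recover */
    }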
* gpu: nvgpu: add TSG support to runlists (Deepak Nibade, 2015-03-18)

  - When a TSG channel is made runnable, add it to the TSG's runnable
    list.
  - When a TSG channel is removed from the runlist, remove it from the
    TSG's runnable list.

  When we rewrite the entire runlist (see the sketch after this entry):
  - first, add all the channels which are not part of any TSG
  - then, find all active TSGs and add an entry in the runlist for each
    TSG (with the TSG id and the length of the TSG)
  - then, write entries for each channel in that TSG

  Bug 1470692
  Change-Id: Ic55a4d5959abc72cd20b8224eb4c31d3ff411861
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/416612
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
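The entry ordering described above could be modeled like this; the rl_entry encoding is invented (real runlist entries are packed hardware words), but the header-then-members layout for a TSG follows the commit.

    struct rl_entry {
        int is_tsg_header;  /* 1: TSG header entry, 0: channel entry */
        unsigned int id;    /* TSG id or channel id */
        unsigned int len;   /* for a TSG header: number of channels */
    };

    /* Append one TSG to the runlist: a header entry carrying the TSG id
     * and length, followed by one entry per runnable channel. Bare
     * channels are assumed to have been appended before any TSGs. */
    static int runlist_add_tsg(struct rl_entry *rl, int n,
                               unsigned int tsgid,
                               const unsigned int *chids, unsigned int nch)
    {
        rl[n++] = (struct rl_entry){ 1, tsgid, nch };
        for (unsigned int i = 0; i < nch; i++)
            rl[n++] = (struct rl_entry){ 0, chids[i], 0 };
        return n;
    }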
* gpu: nvgpu: add kernel APIs for TSG support (Deepak Nibade, 2015-03-18)

  Add support for creating/destroying TSGs using the node
  "/dev/nvhost-tsg-gpu". Provide the IOCTLs below to bind/unbind channels
  to/from TSGs:
  NVGPU_TSG_IOCTL_BIND_CHANNEL
  NVGPU_TSG_IOCTL_UNBIND_CHANNEL

  Bug 1470692
  Change-Id: Iaf9f16a522379eb943906624548f8d28fc6d4486
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/416610
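From user space, usage might look like the sketch below. The device node and ioctl names come from the commit; the ioctl number and argument type are placeholders, since the real definitions live in the nvgpu uapi header.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Placeholder definition; the real one is in the nvgpu uapi header. */
    #define NVGPU_TSG_IOCTL_BIND_CHANNEL _IOW('T', 1, int)

    int bind_channel_to_tsg(int channel_fd)
    {
        int tsg_fd = open("/dev/nvhost-tsg-gpu", O_RDWR); /* creates a TSG */

        if (tsg_fd < 0) {
            perror("open tsg node");
            return -1;
        }
        if (ioctl(tsg_fd, NVGPU_TSG_IOCTL_BIND_CHANNEL, &channel_fd) < 0) {
            perror("bind channel to tsg");
            close(tsg_fd);
            return -1;
        }
        return tsg_fd;  /* closing this fd later destroys the TSG */
    }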
* gpu: nvgpu: Fault engines on PBDMA error (Terje Bergstrom, 2015-03-18)

  On a PBDMA error, even though the engine might not be wedged, we need
  to kick the channel out of the engine. Add that logic. Also, when the
  channel is not in an engine, we need to remove it from the runlist.

  Bug 1498688
  Change-Id: I5939feb41d0a90635ba313b265c7e3b5d3f48622
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/417682
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Kevin Huang (Eng-SW) <kevinh@nvidia.com>
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
* gpu: nvgpu: Make trigger mmu fault GPU specific (Terje Bergstrom, 2015-03-18)

  Add an abstraction for triggering a fake MMU fault, along with a gk20a
  implementation. Also add recovery to the FE hardware warning exception
  to make testing easier.

  Bug 1495967
  Change-Id: I6703cff37900a4c4592023423f9c0b31a8928db2
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add NVIDIA GPU Driver (Arto Merilainen, 2015-03-18)

  This patch moves the NVIDIA GPU driver to a new location.

  Bug 1482562
  Change-Id: I24293810b9d0f1504fd9be00135e21dad656ccb6
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/383722
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>