path: root/drivers/gpu/nvgpu/gk20a/channel_gk20a.h
...
* gpu: nvgpu: fix TSG abort sequence (Deepak Nibade, 2016-06-01)

  In gk20a_fifo_abort_tsg(), we loop through the channels of the TSG and
  call gk20a_channel_abort() for each one. This is incorrect, since it
  disables and preempts each channel separately, whereas we should disable
  all channels at once and use the TSG-specific API to preempt the TSG.

  Fix this with the sequence below:
  - gk20a_disable_tsg() to disable all channels
  - preempt the TSG if required
  - for each channel in the TSG:
    - set the has_timedout flag
    - call gk20a_channel_abort_clean_up() to clean up channel state

  Also, separate out a common gk20a_channel_abort_clean_up() API that can
  be called from both the channel and TSG abort routines.

  In gk20a_channel_abort(), call gk20a_fifo_abort_tsg() if the channel is
  part of a TSG. Add a new argument "preempt" to gk20a_fifo_abort_tsg()
  and preempt the TSG if the flag is set.

  Bug 200205041

  Change-Id: I4eff5394d26fbb53996f2d30b35140b75450f338
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1157190
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
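  A minimal C sketch of the fixed flow described above; the list iteration,
  field names, and the preempt helper are illustrative assumptions, not the
  exact nvgpu definitions:

      /* Sketch of the corrected TSG abort sequence (hypothetical names). */
      static void gk20a_fifo_abort_tsg(struct gk20a *g,
                                       struct tsg_gk20a *tsg, bool preempt)
      {
              struct channel_gk20a *ch;

              /* 1. Disable every channel in the TSG at once. */
              gk20a_disable_tsg(tsg);

              /* 2. Preempt the TSG as a unit, not channel by channel. */
              if (preempt)
                      gk20a_fifo_preempt_tsg(g, tsg->tsgid);

              /* 3. Only then clean up per-channel state. */
              list_for_each_entry(ch, &tsg->ch_list, ch_entry) {
                      ch->has_timedout = true;
                      gk20a_channel_abort_clean_up(ch);
              }
      }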
* gpu: nvgpu: refactor gk20a_mem_{wr,rd} for vidmem (Konsta Holtta, 2016-05-13)

  To support vidmem, pass g and mem_desc to the buffer memory accessor
  functions. This allows the functions to select the memory access method
  based on the buffer aperture instead of using the cpu pointer directly
  (like until now). The selection and aperture support will be in another
  patch; this patch only refactors these accessors, but keeps the
  underlying functionality as-is.

  gk20a_mem_{rd,wr}32() work as previously; add also gk20a_mem_{rd,wr}()
  for byte-indexed accesses, gk20a_mem_{rd,wr}_n() for memcpy()-like
  functionality, and gk20a_memset() for filling buffers with a constant.
  The 8 and 16 bit accessor functions are removed.

  vmap()/vunmap() pairs are abstracted to gk20a_mem_{begin,end}() to
  support other types of mappings or conditions where mapping the buffer
  is unnecessary or different.

  Several function arguments that would access these buffers are also
  changed to take a mem_desc instead of a plain cpu pointer. Some relevant
  occasions are changed to use the accessor functions instead of cpu
  pointers without them (e.g., memcpying to and from), but the majority of
  direct accesses will be adjusted later, when the buffers are moved to
  support vidmem.

  JIRA DNVGPU-23

  Change-Id: I3dd22e14290c4ab742d42e2dd327ebeb5cd3f25a
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/1121143
  Reviewed-by: Ken Adams <kadams@nvidia.com>
  Tested-by: Ken Adams <kadams@nvidia.com>
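  A prototype sketch of the accessor family implied by the message; the
  exact signatures in the tree may differ slightly:

      /* Map/unmap bracket, abstracting vmap()/vunmap() (sketch). */
      int  gk20a_mem_begin(struct gk20a *g, struct mem_desc *mem);
      void gk20a_mem_end(struct gk20a *g, struct mem_desc *mem);

      /* Word-indexed accessors, as before. */
      u32  gk20a_mem_rd32(struct gk20a *g, struct mem_desc *mem, u32 w);
      void gk20a_mem_wr32(struct gk20a *g, struct mem_desc *mem, u32 w, u32 data);

      /* New byte-indexed accessors. */
      u32  gk20a_mem_rd(struct gk20a *g, struct mem_desc *mem, u32 offset);
      void gk20a_mem_wr(struct gk20a *g, struct mem_desc *mem, u32 offset, u32 data);

      /* memcpy()-like bulk accessors and constant fill. */
      void gk20a_mem_rd_n(struct gk20a *g, struct mem_desc *mem, u32 offset,
                          void *dest, u32 size);
      void gk20a_mem_wr_n(struct gk20a *g, struct mem_desc *mem, u32 offset,
                          void *src, u32 size);
      void gk20a_memset(struct gk20a *g, struct mem_desc *mem, u32 offset,
                        u32 value, u32 size);

  Passing g and the mem_desc (rather than a raw CPU pointer) is what later
  lets each call pick an access method per aperture (sysmem vs vidmem).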
* gpu: nvgpu: Add trace and debugfs for sched params (Thomas Fleury, 2016-05-05)

  JIRA EVLR-244
  JIRA EVLR-318

  Change-Id: Ie95f42212dadcf2d0c1737eeb28812afb03b712f
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1120603
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Ken Adams <kadams@nvidia.com>
* gpu: nvgpu: Allocate channel table with vmalloc (Terje Bergstrom, 2016-04-28)

  The channel table can be bigger than one page, so allocate it with
  vmalloc. Also add a free for the TSG table, which did not exist before,
  and remove the per-channel remove_channel callback, which was never used.

  JIRA DNVGPU-50

  Change-Id: I3ee84b65d94881df52bf0618bf4c5f2e85758223
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1129244
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Ken Adams <kadams@nvidia.com>
  GVS: Gerrit_Virtual_Submit
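  A small sketch of the allocation pattern; the fifo struct and field names
  are assumptions. vzalloc() works for multi-page allocations where a
  kzalloc() of the same size could fail under memory fragmentation:

      #include <linux/vmalloc.h>

      static int gk20a_alloc_channel_table(struct fifo_gk20a *f)
      {
              f->channel = vzalloc(f->num_channels * sizeof(*f->channel));
              if (!f->channel)
                      return -ENOMEM;
              return 0;
      }

      static void gk20a_free_channel_table(struct fifo_gk20a *f)
      {
              vfree(f->channel);  /* vfree(), not kfree(), for vmalloc'd memory */
              f->channel = NULL;
      }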
* gpu: nvgpu: remove submit lock (Deepak Nibade, 2016-04-19)

  Remove the submit lock, since we have moved to more fine-grained locks.

  Remove the API check_gp_put(), since we cannot call it in the submit path
  due to latencies, and we cannot call it in gk20a_channel_clean_up_jobs()
  anymore since it would fail there without the lock.

  Bug 200187553

  Change-Id: I05b9fa95c9009000e13232d8fa567336eeee11c6
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1120411
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add lock for fences (Deepak Nibade, 2016-04-19)

  All pre/post fence accesses in last_submit are currently protected by the
  submit lock. In order to remove the submit lock, move all fence accesses
  under their own lock, i.e. fence_lock.

  Bug 200187553

  Change-Id: I0132d1933dc92db8c5ed8c9311e49a030aa2d38c
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1120409
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: support for hwpm context switching (Peter Daifuku, 2016-04-07)

  Add support for hwpm context switching.

  Bug 1648200

  Change-Id: I482899bf165cd2ef24bb8617be16df01218e462f
  Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
  Reviewed-on: http://git-master/r/1120450
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: post BPT_INT/PAUSE and BLOCKING_SYNC events (Deepak Nibade, 2016-04-07)

  Post EVENT_ID_BPT_INT when bpt.int is pending.
  Post EVENT_ID_BPT_PAUSE when bpt.pause is pending.
  Post EVENT_ID_BLOCKING_SYNC whenever there is a non-stalling semaphore
  interrupt indicating work completion from the GR/CE2 engine.

  Bug 200089620

  Change-Id: I91b7bf48f8585f0d318298fc0c4a66d42055f0a7
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1112274
  (cherry picked from commit d2b744b1f9acac56435cd7e7ab9a7a845579ef24)
  Reviewed-on: http://git-master/r/1120321
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: APIs to post event id events (Deepak Nibade, 2016-04-07)

  Add the channel and TSG APIs below to post events on the event_id
  interface:
  - gk20a_channel_event_id_post_event()
  - gk20a_tsg_event_id_post_event()

  Bug 200089620

  Change-Id: I0cfadc9ffdb880b2410f97758fad47905c620db1
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1112267
  (cherry picked from commit 9f50d7da4500af4dbf4dabe7916eda6fc220f4fb)
  Reviewed-on: http://git-master/r/1120320
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
* gpu: nvgpu: add TSG support to channel event id (Deepak Nibade, 2016-04-07)

  Add the NVGPU_IOCTL_TSG_EVENT_ID_CTRL API for channel event id support on
  TSGs. This API will accept an event_id (like BPT.INT or BPT.PAUSE) and a
  command to enable the event, and return a file descriptor on which we can
  raise the event (if cmd=enable).

  Events generated for TSGs reuse the file operations "gk20a_event_id_ops".

  Bug 200089620

  Change-Id: I2f563c6d3a0988eb670caac2d3c7c6795724792c
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1030776
  (cherry picked from commit 72b61fa266279038f013e582be80c21808e1038d)
  Reviewed-on: http://git-master/r/1120319
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
* gpu: nvgpu: add channel event id support (Deepak Nibade, 2016-04-07)

  With NVGPU_IOCTL_CHANNEL_EVENTS_CTRL, nvgpu can raise events to user
  space, but user space cannot distinguish between the various types of
  events. To overcome this, we need a finer-grained API to deliver the
  various events to user space.

  Remove the old API NVGPU_IOCTL_CHANNEL_EVENTS_CTRL and all the support
  for it (we can remove this since user space has not started using the
  API at all).

  Add a new API, NVGPU_IOCTL_CHANNEL_EVENT_ID_CTRL, which will accept an
  event_id (like BPT.INT or BPT.PAUSE) and a command to enable the event,
  and return a file descriptor on which we can raise the event (if
  cmd=enable). The event is disabled when the file descriptor is closed.

  Add file operations "gk20a_event_id_ops" to support polling on the event
  fd. Also add an API gk20a_channel_get_event_data_from_id() to get the
  event_data of an event from its id.

  Bug 200089620

  Change-Id: I5288f19f38ff49448c46338c33b2a927c9e02254
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1030775
  (cherry picked from commit 5721ce2735950440bedc2b86f851db08ed593275)
  Reviewed-on: http://git-master/r/1120318
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
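  A sketch of how such an event fd could post and poll; the struct layout
  and helper names are assumptions based on the message, not the exact
  code:

      /* Mark the event pending and wake any pollers on the event fd. */
      static void gk20a_event_id_post(struct gk20a_event_id_data *data)
      {
              mutex_lock(&data->lock);
              data->event_posted = true;
              mutex_unlock(&data->lock);
              wake_up_interruptible(&data->event_id_wq);
      }

      /* poll() backend in gk20a_event_id_ops (sketch). */
      static unsigned int gk20a_event_id_poll(struct file *filep,
                                              poll_table *wait)
      {
              struct gk20a_event_id_data *data = filep->private_data;
              unsigned int mask = 0;

              poll_wait(filep, &data->event_id_wq, wait);

              mutex_lock(&data->lock);
              if (data->event_posted)
                      mask = POLLPRI | POLLIN;
              mutex_unlock(&data->lock);

              return mask;
      }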
* gpu: nvgpu: add support to set channel timeslice (Aingara Paramakuru, 2016-03-22)

  As part of improving GPU scheduling, userspace can now set a channel's
  timeslice, within reasonable limits imposed by the kernel driver.

  JIRA VFND-1312
  Bug 1729664

  Change-Id: I4c3430c43437889b8685f12988d4b967bb7877bb
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/1020917
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
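  A sketch of validating and applying a userspace-requested timeslice; the
  limit macros and the runlist-update call are illustrative assumptions:

      #define CHANNEL_MIN_TIMESLICE_US  1000u   /* assumed lower bound */
      #define CHANNEL_MAX_TIMESLICE_US 50000u   /* assumed upper bound */

      static int gk20a_channel_set_timeslice(struct channel_gk20a *ch,
                                             u32 timeslice_us)
      {
              if (timeslice_us < CHANNEL_MIN_TIMESLICE_US ||
                  timeslice_us > CHANNEL_MAX_TIMESLICE_US)
                      return -EINVAL;

              ch->timeslice_us = timeslice_us;

              /* Rebuild the runlist so the HW sees the new value. */
              return gk20a_fifo_update_runlist(ch->g, ch->runlist_id,
                                               ch->hw_chid, true, true);
      }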
* gpu: nvgpu: improve channel interleave support (Aingara Paramakuru, 2016-03-15)

  Previously, only "high" priority bare channels were interleaved between
  all other bare channels and TSGs. This patch decouples priority from
  interleaving and introduces 3 levels for interleaving a bare channel or
  TSG: high, medium, and low. The levels define the number of times a
  channel or TSG will appear on a runlist (see nvgpu.h for details).

  By default, all bare channels and TSGs are set to interleave level low.
  Userspace can then request the interleave level to be increased via the
  CHANNEL_SET_RUNLIST_INTERLEAVE ioctl (a TSG-specific ioctl will be added
  later).

  As timeslice settings will soon be coming from userspace, the default
  timeslice for "high" priority channels has been restored.

  JIRA VFND-1302
  Bug 1729664

  Change-Id: I178bc1cecda23f5002fec6d791e6dcaedfa05c0c
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/1014962
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
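  Conceptually, the interleave level just controls how many times an entry
  is replicated when the runlist is built. The replication counts below are
  purely illustrative, not the values nvgpu uses (see nvgpu.h for those):

      enum interleave_level { LEVEL_LOW, LEVEL_MEDIUM, LEVEL_HIGH };

      /* Sketch: how often a channel/TSG appears per runlist pass. */
      static u32 entries_for_level(u32 level)
      {
              switch (level) {
              case LEVEL_HIGH:   return 8;  /* most frequent, lowest latency */
              case LEVEL_MEDIUM: return 2;
              case LEVEL_LOW:
              default:           return 1;  /* appears once */
              }
      }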
* gpu: nvgpu: move clean up of jobs to separate worker (Deepak Nibade, 2016-02-05)

  We currently clean up jobs in gk20a_channel_update(), which is called
  from the nvhost worker thread. Instead of doing this, schedule another
  delayed worker, clean_up_work, to clean up the jobs (with a delay of 1
  jiffy).

  Keep update_gp_get() in channel_update() and not in the delayed worker,
  since this helps with better bookkeeping of gp_get.

  This scheduling also helps delay job clean-up so that more jobs are
  batched for clean up, and hence less time is consumed by the worker.

  Bug 1718092

  Change-Id: If3b94b6aab93c92da4cf0d1c74aaba756f4cd838
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/931701
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
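  A sketch of the deferred clean-up described above; the struct layout and
  helper names are assumptions:

      #include <linux/workqueue.h>

      static void gk20a_channel_clean_up_jobs_worker(struct work_struct *work)
      {
              struct channel_gk20a *c = container_of(to_delayed_work(work),
                              struct channel_gk20a, clean_up_work);

              gk20a_channel_clean_up_jobs(c);  /* free completed jobs */
      }

      /* In channel init (assumed):
       * INIT_DELAYED_WORK(&c->clean_up_work,
       *                   gk20a_channel_clean_up_jobs_worker);
       */

      /* From gk20a_channel_update(): update gp_get here, but push the
       * clean-up out by one jiffy so several jobs get batched. */
      static void gk20a_channel_schedule_clean_up(struct channel_gk20a *c)
      {
              schedule_delayed_work(&c->clean_up_work, 1);
      }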
* gpu: nvgpu: vgpu: add channel_set_priority support (Richard Zhao, 2016-01-25)

  - add gops.fifo.channel_set_priority and move the current code to a
    native callback
  - implement the callback for vgpu

  Bug 1701079

  Change-Id: If1cd13ea4478d11d578da2f682598e0c4522bcaf
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/932829
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: API to post channel events (Deepak Nibade, 2016-01-13)

  Add a new API, gk20a_channel_post_event(), which adds a channel event and
  also calls wake_up() on the channel's semaphore wait queue.

  Bug 200156699

  Change-Id: If56f1bf8edcce79c9248809f8476ed853b7d2d9d
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/927132
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: APIs to enable/disable TSG (Deepak Nibade, 2016-01-13)

  Export the APIs below for TSGs (see the sketch after this entry):
  - gk20a_enable_tsg() - enable only the TSG
  - gk20a_disable_tsg() - disable only the TSG
  - gk20a_enable_channel_tsg() - if the channel is part of a TSG, enable
    the TSG, otherwise enable the channel
  - gk20a_disable_channel_tsg() - if the channel is part of a TSG, disable
    the TSG, otherwise disable the channel

  Bug 200156699

  Change-Id: Icdaca35235c3f323687f839fe32c6c5fe964b230
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/927131
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
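  The channel-or-TSG dispatch could look like this sketch; the TSG
  membership check and the bare-channel helper are assumptions:

      int gk20a_enable_channel_tsg(struct gk20a *g, struct channel_gk20a *ch)
      {
              if (gk20a_is_channel_marked_as_tsg(ch)) {
                      struct tsg_gk20a *tsg = &g->fifo.tsg[ch->tsgid];

                      gk20a_enable_tsg(tsg);    /* enables all member channels */
              } else {
                      gk20a_enable_channel(ch); /* bare channel: just this one */
              }
              return 0;
      }

  gk20a_disable_channel_tsg() mirrors this with the disable variants.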
* gpu: nvgpu: add high priority channel interleave (Peter Pipkorn, 2016-01-11)

  Interleave all high priority channels between all other channels. This
  reduces the latency for high priority work when a lot of lower priority
  work is present, imposing an upper bound on the latency.

  Change the default high priority timeslice from 5.2ms to 3.0ms in the
  process, to prevent long-running high priority apps from hogging the GPU
  too much.

  Introduce a new debugfs node to enable/disable high priority channel
  interleaving. It is currently enabled by default.

  Add a new runlist length max register, used for allocating a suitably
  sized runlist. Limit the number of interleaved channels to 32.

  This change reduces the maximum time a lower priority job runs (one
  timeslice) before we check that high priority jobs are running.

  Tested with gles2_context_priority (still passes). Basic sanity testing
  was done with graphics_submit (one app is high priority), plus more
  functional testing using lots of parallel runs with:

    NVRM_GPU_CHANNEL_PRIORITY=3 ./gles2_expensive_draw -drawsperframe 20000 -triangles 50 -runtime 30 -finish

  plus multiple:

    NVRM_GPU_CHANNEL_PRIORITY=2 ./gles2_expensive_draw -drawsperframe 20000 -triangles 50 -runtime 30 -finish

  Previous to this change, the relative performance between high priority
  work and normal priority work came down to the timeslice value. This
  means that when there are many low priority channels, the high priority
  work will still drop quite a lot. But with this change, the high priority
  work will get roughly half the entire GPU time, meaning that after the
  initial lower performance, it is less likely to drop further in
  performance due to more apps running on the system.

  This change makes a large step towards real priority levels. It is not
  perfect and there are no guarantees on anything, but it is a step
  forwards without any additional CPU overhead or other complications. It
  will also serve as a baseline to judge other algorithms against.

  Support for priorities with TSG is future work. Support for interleaving
  mid + high priority channels, instead of just high, is also future work.

  Bug 1419900

  Change-Id: I0f7d0ce83b6598fe86000577d72e14d312fdad98
  Signed-off-by: Peter Pipkorn <ppipkorn@nvidia.com>
  Reviewed-on: http://git-master/r/805961
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: enable semaphore acquire timeout (Richard Zhao, 2016-01-10)

  This detects dead semaphore acquires. The worst case is when
  ACQUIRE_SWITCH is disabled: the semaphore acquire will poll and consume
  full GPU timeslices.

  The timeout value is set to half of the channel WDT.

  Bug 1636800

  Change-Id: Ida6ccc534006a191513edf47e7b82d4b5b758684
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/928827
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
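  The relationship to the channel watchdog is simple enough to state in
  code (field and helper names assumed):

      /* Half the watchdog period, so a dead semaphore acquire is caught
       * and handled before the channel watchdog itself fires. */
      static u32 gk20a_channel_acquire_timeout_ms(struct channel_gk20a *ch)
      {
              return ch->wdt_timeout_ms / 2;
      }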
* gpu: nvgpu: preempt before adjusting fences (Deepak Nibade, 2015-12-10)

  The current sequence in gk20a_disable_channel() is:
  - disable channel in gk20a_channel_abort()
  - adjust pending fence in gk20a_channel_abort()
  - preempt channel

  But this leads to scenarios where the syncpoint has min > max value.

  Hence, to fix this, make the sequence in gk20a_disable_channel():
  - disable channel in gk20a_channel_abort()
  - preempt channel in gk20a_channel_abort()
  - adjust pending fence in gk20a_channel_abort()

  If gk20a_channel_abort() is called from another API where preemption is
  not needed, then use the channel_preempt flag and do not preempt the
  channel in those cases.

  Bug 1683059

  Change-Id: I4d46d4294cf8597ae5f05f79dfe1b95c4187f2e3
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/921290
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Immediate channel release (Terje Bergstrom, 2015-12-10)

  When closing a channel, disable and preempt it immediately instead of
  waiting for it to finish all work.

  Bug 1683059

  Change-Id: Ia5f5fc6a072dc3ddb1e9bf63534814ff0a60b5b4
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/836746
* gpu: nvgpu: create sync_fence only if needed (Deepak Nibade, 2015-12-08)

  Currently, we create a sync_fence (from nvhost_sync_create_fence()) for
  every submit, but not all submits request a sync_fence. Also,
  nvhost_sync_create_fence() takes about 1/3rd of the total submit path.

  Hence, to optimize, we can allocate a sync_fence only when user space
  explicitly asks for it using (NVGPU_SUBMIT_GPFIFO_FLAGS_FENCE_GET &&
  NVGPU_SUBMIT_GPFIFO_FLAGS_SYNC_FENCE).

  Also, in the CDE path from gk20a_prepare_compressible_read(), we reuse an
  existing fence stored in "state", and that can result in not returning a
  sync_fence_fd when the user asked for it. Hence, force allocation of the
  sync_fence when the job submission comes from the CDE path.

  Bug 200141116

  Change-Id: Ia921701bf0e2432d6b8a5e8b7d91160e7f52db1e
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/812845
  (cherry picked from commit 5fd47015eeed00352cc8473eff969a66c94fee98)
  Reviewed-on: http://git-master/r/837662
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>
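  The decision reduces to a small predicate; the flag names come from the
  message, while the helper itself is hypothetical:

      static bool gk20a_submit_needs_sync_fence(u32 flags, bool from_cde)
      {
              /* CDE submits always get a sync_fence so a valid fd can be
               * returned from gk20a_prepare_compressible_read(). */
              if (from_cde)
                      return true;

              /* Otherwise only when userspace asked for a sync-fence fd. */
              return (flags & NVGPU_SUBMIT_GPFIFO_FLAGS_FENCE_GET) &&
                     (flags & NVGPU_SUBMIT_GPFIFO_FLAGS_SYNC_FENCE);
      }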
* gpu: nvgpu: IOCTL to disable watchdog per-channel (Deepak Nibade, 2015-11-30)

  Add IOCTL NVGPU_IOCTL_CHANNEL_WDT to disable/enable the watchdog
  per-channel.

  Also, if the watchdog is disabled, we currently schedule the worker with
  the MAX timeout. Instead of this, do not schedule any worker if the
  watchdog is disabled.

  Bug 1683059
  Bug 1700277

  Change-Id: I7f6bec84adeedb74e014ed6d1471317b854df84c
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/837962
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: rework private command buffer free path (Deepak Nibade, 2015-11-23)

  We currently allocate private command buffers (wait_cmd and incr_cmd)
  before submitting the job, but we never free them explicitly. When the
  private command queue of the channel is full, we then try to
  recycle/remove free command buffers. But this recycling happens during
  the submit path, and hence that particular submit takes much longer.

  Rework this as below:
  - add references to the command buffers in the job structure
  - when the job completes, free the command buffers explicitly
  - remove the code to recycle buffers, since it should not be needed now

  Note that command buffers need to be freed in the order of their
  allocation. Ensure this with an error print before freeing a command
  buffer entry out of order.

  Bug 200141116
  Bug 1698667

  Change-Id: Id4b69429d7ad966307e0d122a71ad55076684307
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/827638
  (cherry picked from commit c6cefd69b71c9b70d6df5343b13dfcfb3fa99598)
  Reviewed-on: http://git-master/r/835802
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: remove temporary gpfifo allocation in submit path (Deepak Nibade, 2015-11-23)

  In the GPU job submit path, gk20a_ioctl_channel_submit_gpfifo(), we
  currently allocate a temporary gpfifo, copy the user space gpfifo content
  into this temporary buffer, and then copy the temp buffer content into
  the channel's gpfifo. The allocation/copy/free of the temporary buffer
  adds overhead.

  Rewrite this sequence so that gk20a_submit_channel_gpfifo() can receive
  either a pre-filled gpfifo or a pointer to the user-provided args, and
  then directly copy the user-provided gpfifo into the channel's gpfifo.

  Also, if command buffer tracing is enabled, we still need to copy the
  user-provided gpfifo into a temporary buffer for reading, but that should
  not cause overhead in the real-world use case.

  Bug 200141116

  Change-Id: I7166c9271da2694059da9853ab8839e98457b941
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/823386
  (cherry picked from commit 3e0702db006c262dd8737a567b8e06f7ff005e2c)
  Reviewed-on: http://git-master/r/835799
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
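  The direct copy amounts to a copy_from_user() into the ring, split in two
  for wrap-around. A sketch, assuming a power-of-two ring size, that
  num_entries was already validated against the ring size, and illustrative
  field names:

      static int gk20a_copy_user_gpfifo(struct channel_gk20a *c,
                      struct nvgpu_gpfifo __user *user_gpfifo, u32 num_entries)
      {
              struct nvgpu_gpfifo *ring = c->gpfifo.cpu_va;
              u32 size  = c->gpfifo.entry_num;            /* power of two */
              u32 start = c->gpfifo.put;
              u32 n = min(num_entries, size - start);     /* before wrap */

              if (copy_from_user(ring + start, user_gpfifo,
                                 n * sizeof(*ring)))
                      return -EFAULT;

              /* Wrapped remainder lands at the start of the ring. */
              if (copy_from_user(ring, user_gpfifo + n,
                                 (num_entries - n) * sizeof(*ring)))
                      return -EFAULT;

              c->gpfifo.put = (start + num_entries) & (size - 1);
              return 0;
      }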
* gpu: nvgpu: IOCTL to set TSG timeslice (Deepak Nibade, 2015-11-03)

  Add a new IOCTL, NVGPU_IOCTL_TSG_SET_PRIORITY, to allow setting the
  timeslice for an entire TSG. Return an error from the channel-specific
  IOCTL_CHANNEL_SET_PRIORITY if the channel is part of a TSG.

  Separate out an API gk20a_channel_get_timescale_from_timeslice() to get
  the timeslice_timeout and scale from the timeslice period. Use this API
  to get the timeslice_timeout and scale for the TSG and store them in the
  tsg_gk20a structure. Then trigger a runlist update so that the new
  timeslice values are re-written to the runlist for the TSG.

  Bug 200146615

  Change-Id: I555467d034f81b372b31372f0835d72b1c159508
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/824206
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
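  The timeout/scale split is a mantissa/exponent encoding: the runlist
  entry stores the period roughly as timeout << scale. A sketch, with the
  8-bit field width taken as an assumption:

      static void get_timescale_from_timeslice(u32 timeslice_us,
                                               u32 *timeout, u32 *scale)
      {
              u32 value = timeslice_us;
              u32 shift = 0;

              /* Shift right until the value fits the timeout field. */
              while (value > 255) {
                      value >>= 1;
                      shift++;
              }

              *timeout = value;
              *scale = shift;
      }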
* gpu: nvgpu: Get rid of legacy gpfifo type (Janne Hellsten, 2015-10-27)

  Get rid of the duplicate gpfifo struct to emphasize the fact that
  nvgpu_gpfifo is the only memory layout for gpfifo entries that works.
  This is the same layout that HW uses.

  Also, add a local pointer to the gpfifo memory in
  gk20a_submit_channel_gpfifo to get rid of repeated typecasts.

  Bug 1592391
  Bug 1550886

  Change-Id: I5432859ef8e7c1aab5907e44098994d7bb807f50
  Signed-off-by: Janne Hellsten <jhellsten@nvidia.com>
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/677341
  (cherry picked from commit 724c8c6228af81dd440e825bddf545dd6b2b8bd7)
  Reviewed-on: http://git-master/r/822548
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Protect sync by an own lock (Terje Bergstrom, 2015-10-22)

  Protect creation and deletion of sync by its own mutex. This prevents a
  deadlock in channel abort when abort is called from the submit path.

  Bug 200147887

  Change-Id: I5d6308b773c1d1a6a89d4590e2e74c74d691f79d
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/821127
* gpu: nvgpu: restart timer instead of cancel (Deepak Nibade, 2015-10-20)

  In gk20a_fifo_handle_sched_error(), we currently cancel the timeout on
  all the channels. But this could cause us to miss a stuck channel; hence,
  instead of cancelling, restart the timeout on the channel on which it is
  already active.

  Bug 200133289

  Change-Id: I40e7e0e5394911fc110ab6fde39592b885dfaf7d
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/816133
  Reviewed-by: Ishan Mittal <imittal@nvidia.com>
  Tested-by: Ishan Mittal <imittal@nvidia.com>
* gpu: nvgpu: cancel all wdt timeouts while handling SCHED errors (Deepak Nibade, 2015-10-07)

  A SCHED error might cause multiple channels' watchdogs to trigger
  simultaneously. Hence, to avoid this conflict, cancel the watchdog
  timeout on all channels before recovering from SCHED errors.

  Also, define an API gk20a_channel_timeout_stop_all_channels() to cancel
  the wdt timeout on all channels.

  Bug 200133289

  Change-Id: I8324c397891f0a711327b77d0677cd6718af6d01
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/810959
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: make wdt timeout per-platform (Deepak Nibade, 2015-10-07)

  The channel watchdog timeout is currently set to a constant value of 5s.
  Make this timeout platform-specific, and set it to 5s for gm20b and 7s
  for gk20a.

  Bug 200133289

  Change-Id: I6e7f0fed93a8d5b197ae46807131311196c6636f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/810956
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add CDE bits in FECS header (sujeet baranwal, 2015-09-29)

  In the case of a CDE channel, the T1 (Tex) unit needs to be promoted to
  128B-aligned accesses, as unaligned accesses otherwise cause a HW
  deadlock. The GPU driver makes changes in the FECS header, which FECS
  uses to configure the T1 promotions to aligned 128B accesses.

  Bug 200096226

  Change-Id: I8a8deaf6fb91f4bbceacd491db7eb6f7bca5001b
  Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
  Reviewed-on: http://git-master/r/804625
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: implement per-channel watchdog (Deepak Nibade, 2015-09-28)

  Implement a per-channel watchdog/timer with the rules below (see the
  sketch after this entry):
  - start the timer while submitting the first job on a channel, or if no
    timer is already running
  - cancel the timer when a job completes
  - re-start the timer if there is any incomplete job left in the channel's
    queue
  - trigger the appropriate recovery method as part of the timeout handling
    mechanism

  Handle the timeout as below:
  - get the timed out channel and the job data
  - disable activity on all engines
  - check if the fence is really pending
  - get information on the failing engine
  - if no engine is failing, just abort the channel
  - if an engine is failing, trigger the recovery

  Also, add a flag "ch_wdt_enabled" to enable/disable the channel watchdog
  mechanism. The watchdog can also be disabled using the global flag
  "timeouts_enabled".

  Set the watchdog time to 5s using the macro
  NVGPU_CHANNEL_WATCHDOG_DEFAULT_TIMEOUT_MS.

  Bug 200133289

  Change-Id: I401cf14dd34a210bc429f31bd5216a361edf1237
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/797072
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
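  A sketch of the start/cancel/re-start rules; the field names and the
  timer plumbing are assumptions, not the exact nvgpu code:

      static void gk20a_channel_timeout_start(struct channel_gk20a *ch)
      {
              if (!ch->wdt_enabled || !ch->g->timeouts_enabled)
                      return;  /* watchdog disabled: schedule nothing */

              if (ch->timeout.running)
                      return;  /* a timer is already active */

              ch->timeout.running = true;
              schedule_delayed_work(&ch->timeout.wdt_work, msecs_to_jiffies(
                      NVGPU_CHANNEL_WATCHDOG_DEFAULT_TIMEOUT_MS));
      }

      static void gk20a_channel_timeout_stop(struct channel_gk20a *ch)
      {
              if (!ch->timeout.running)
                      return;

              ch->timeout.running = false;
              cancel_delayed_work_sync(&ch->timeout.wdt_work);
      }

      /* On job completion: cancel, then re-arm if work is still queued. */
      static void gk20a_channel_job_done(struct channel_gk20a *ch)
      {
              gk20a_channel_timeout_stop(ch);
              if (!list_empty(&ch->jobs))
                      gk20a_channel_timeout_start(ch);
      }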
* Revert "gpu: nvgpu: Add CDE bits in FECS header"Terje Bergstrom2015-09-24
| | | | | | | | This reverts commit 882975f7f1b4e050be79b0a047a2daa8b53a9187. Change-Id: I4940fc9f7a837840be1ea8e42d58d603235d88d5 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/804616
* gpu: nvgpu: Add CDE bits in FECS header (sujeet baranwal, 2015-09-24)

  In the case of a CDE channel, the T1 (Tex) unit needs to be promoted to
  128B-aligned accesses, as unaligned accesses otherwise cause a HW
  deadlock. The GPU driver makes changes in the FECS header, which FECS
  uses to configure the T1 promotions to aligned 128B accesses.

  Bug 200096226

  Change-Id: Ic006b2c7035bbeabe1081aeed968a6c6d11f9995
  Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
  Reviewed-on: http://git-master/r/802327
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add per-channel refcounting (Konsta Holtta, 2015-06-09)

  Add reference counting for channels, and wait for the reference count to
  reach 0 in gk20a_channel_free() before actually freeing the channel.
  Also, change free-channel tracking a bit by employing a list of free
  channels, which simplifies the procedure of finding available channels
  with reference counting.

  Each use of a channel must have a reference taken before use, or held by
  the caller. Taking a reference of a wild channel pointer may fail if the
  channel is either not opened or in the process of being closed. Also, add
  safeguards to protect against accidental use of closed channels;
  specifically, set ch->g = NULL in channel free. This will make it obvious
  if a freed channel is attempted to be used.

  The last user of a channel might be the deferred interrupt handler, so
  wait for deferred interrupts to be processed twice in the channel free
  procedure: once to provide last notifications to the channel, and once to
  make sure there are no stale pointers left after referencing the channel
  has been denied.

  Finally, fix some races in the channel and TSG force reset IOCTL path by
  pausing the channel scheduler in gk20a_fifo_recover_ch() and
  gk20a_fifo_recover_tsg() while the affected engines are identified, the
  appropriate MMU faults triggered, and the MMU faults handled. In this
  case, make sure that the MMU fault does not attempt to query the hardware
  about the failing channel or TSG ids. This should make channel recovery
  safer also in the regular (i.e., not in the interrupt handler) context.

  Bug 1530226
  Bug 1597493
  Bug 1625901
  Bug 200076344
  Bug 200071810

  Change-Id: Ib274876908e18219c64ea41e50ca443df81d957b
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: http://git-master/r/448463
  (cherry picked from commit 3f03aeae64ef2af4829e06f5f63062e8ebd21353)
  Reviewed-on: http://git-master/r/755147
  Reviewed-by: Automatic_Commit_Validation_User
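  The reference-taking pattern described above, as a sketch; the field and
  lock names are assumptions:

      /* Returns ch with a reference held, or NULL if the channel is not
       * open or is in the process of being closed. */
      struct channel_gk20a *gk20a_channel_get(struct channel_gk20a *ch)
      {
              struct channel_gk20a *ret = NULL;

              spin_lock(&ch->ref_obtain_lock);
              if (likely(ch->referenceable)) {
                      atomic_inc(&ch->ref_count);
                      ret = ch;
              }
              spin_unlock(&ch->ref_obtain_lock);

              return ret;
      }

      void gk20a_channel_put(struct channel_gk20a *ch)
      {
              atomic_dec(&ch->ref_count);
              wake_up_all(&ch->ref_count_dec_wq);  /* let the freer re-check */
      }

      /* gk20a_channel_free() (assumed) first clears referenceable under
       * the lock, then waits:
       *   wait_event(ch->ref_count_dec_wq,
       *              atomic_read(&ch->ref_count) == 0);
       */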
* gpu: nvgpu: cyclestats mode E snapshots support (Leonid Moiseichuk, 2015-06-06)

  This is kernel supporting code for cyclestats mode E. Cyclestats mode E
  is implemented following the Windows design in user space and requires
  the following operations to be implemented:
  - attach a client for the shared hardware buffer of a device
  - detach a client from the shared hardware buffer
  - flush, meaning a copy of available data from the hardware buffer to
    private client buffers according to the perfmon IDs assigned for
    clients
  - perfmon ID management for user-space clients
  - a NVGPU_GPU_FLAGS_SUPPORT_CYCLE_STATS_SNAPSHOT capability added

  Bug 1573150

  Change-Id: I9e09f0fbb2be5a95c47e6d80a2e23fa839b46f9a
  Signed-off-by: Leonid Moiseichuk <lmoiseichuk@nvidia.com>
  Reviewed-on: http://git-master/r/740653
  (cherry picked from commit 79fe89fd4cea39d8ab9dbef0558cd806ddfda87f)
  Reviewed-on: http://git-master/r/753274
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu:nvgpu: update channel_setup_ramfc interface (Seshendra Gadagottu, 2015-06-02)

  Pass a flags parameter to channel_setup_ramfc to indicate
  nvgpu_alloc_gpfifo_args characteristics.

  Bug 1645628

  Change-Id: Ia40b37c5c7b208d459aa84f1b022036dd5e1b599
  Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: http://git-master/r/744526
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Implement common allocator and mem_desc (Terje Bergstrom, 2015-04-04)

  Introduce mem_desc, which holds all information needed for a buffer.
  Implement helper functions for allocation and freeing that use this data
  type.

  Change-Id: I82c88595d058d4fb8c5c5fbf19d13269e48e422f
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/712699
* gpu: nvgpu: protect channel ioctls with a mutex (Konsta Holtta, 2015-04-04)

  Add a big mutex to protect the channel during ioctls, in case user space
  uses the same channel from several threads at once. The lock is taken
  during all operations except CHANNEL_WAIT, which could deadlock.

  Bug 1603482

  Change-Id: Ibed962eadc9f00645abd54413dde9aaee00377ab
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/678871
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add open channel ioctl to ctrl node (Konsta Holtta, 2015-04-04)

  Add the ioctl to open a new gpu channel also to the control node, for
  improved process startup performance, in addition to the current open
  ioctl in the channel node. The new channel fd creation is refactored to a
  separate function which is called from both the ctrl and channel ioctls.

  Bug 1604952

  Change-Id: I3357ceec694c0e6d7a85807183884324cb725d3a
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/679516
  Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Fix/HACK for v3.18 (Dan Willemsen, 2015-03-18)

  Signed-off-by: Dan Willemsen <dwillemsen@nvidia.com>
* gpu: nvgpu: Generic mem_desc & allocation (Terje Bergstrom, 2015-03-18)

  Make mem_desc a generic container for buffers. Add functions for
  allocating and mapping buffers to an address space which store their data
  in mem_desc.

  Change-Id: I031643442c6fd41f5e7222fe9b7bfcaf9b784db5
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/660908
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
* gpu: nvgpu: protect channel update callback access (Konsta Holtta, 2015-03-18)

  Protect against callback races from spurious gk20a channel updates by
  testing whether the channel update callback still exists when in the
  scheduled work (instead of only when scheduling the work to the queue),
  and by canceling the work when the channel is freed. Protect access to
  the callback and its data by accessing them together inside
  spinlock-protected regions.

  Bug 200051384

  Change-Id: Ib4e1571c35f662195e1dec1e362df32ddc099eb3
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/592026
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: kernel support for suspending/resuming SMs (sujeet baranwal, 2015-03-18)

  Kernel support for allowing a GPU debugger to suspend and resume SMs.
  Invocation of "suspend" on a given channel will suspend all SMs if the
  channel is resident, else remove the channel from the runlist. Similarly,
  "resume" will either resume all SMs if the channel was resident, or
  re-enable the channel in the runlist.

  Change-Id: I3b4ae21dc1b91c1059c828ec6db8125f8a0ce194
  Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
  Signed-off-by: Mayank Kaushik <mkaushik@nvidia.com>
  Reviewed-on: http://git-master/r/552115
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add update callback to gk20a channel (Konsta Holtta, 2015-03-18)

  Add support for a callback function with a user data pointer to be
  scheduled from the end of gk20a_channel_update. The function and its
  private data are supplied when opening a new channel.

  Change-Id: Ib6b408855ea60d46a6a114a69c01904703019572
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/552014
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
  Tested-by: Arto Merilainen <amerilainen@nvidia.com>
* gpu: nvgpu: create new nvgpu ioctl header (Konsta Holtta, 2015-03-18)

  Move nvgpu ioctls from the many user space interface headers to a new
  single nvgpu.h header under include/uapi. No new code or replaced names
  are introduced; this change only moves the definitions and changes
  include directives accordingly.

  Bug 1434573

  Change-Id: I4d02415148e437a4e3edad221e08785fac377e91
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/542651
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: rename gpu ioctls and structs to nvgpu (Konsta Holtta, 2015-03-18)

  To help remove the nvhost dependency from nvgpu, rename ioctl defines and
  structures used by nvgpu such that nvhost is replaced by nvgpu. Duplicate
  some structures as needed. Update header guards and such accordingly.

  Change-Id: Ifc3a867713072bae70256502735583ab38381877
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/542620
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: implement poll() for semaphores (Konsta Holtta, 2015-03-18)

  Add a poll interface and control ioctls for waiting for GPU job
  completion via semaphores. A poll on a gk20a channel file waits for
  events from pending semaphore interrupts (stalling) of that channel. New
  ioctls enable and disable the events, and clear a single interrupt event
  so that the next poll doesn't wake up for it again.

  Bug 1528781

  Change-Id: I5c6238966b5d0900c8ab263c6a7f8f2611901f33
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/497750
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: support gk20a virtualization (Aingara Paramakuru, 2015-03-18)

  The nvgpu driver now supports using the Tegra graphics virtualization
  interfaces to support gk20a in a virtualized environment.

  Bug 1509608

  Change-Id: I6ede15ee7bf0b0ad8a13e8eb5f557c3516ead676
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/440122
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>