path: root/drivers/gpu/nvgpu/gk20a/fifo_gk20a.h
* gpu: nvgpu: rename hw_chid to chid (Richard Zhao, 2017-06-30)

  hw_chid is a relative id for vgpu; for native it is the same as the hw id.
  Rename it to chid to avoid confusion.

  Jira VFND-3796
  Change-Id: I1c7924da1757330ace715a7c52ac61ec9dc7065c
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master/r/1509530
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add fifo ops for handling pbdma_intr_1 (Seema Khowala, 2017-06-27)

  This is needed to handle the new pbdma intr_1 in t19x.

  JIRA GPUT19X-47
  Change-Id: If75de0b57f3f18420aff07ee99feaad67ac63752
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master/r/1329373
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: hold power ref for deterministic channels (Konsta Holtta, 2017-06-14)

  To support deterministic channels even on platforms where railgating is
  supported, have each deterministic-marked channel hold a power reference
  for its lifetime, and skip taking power refs for jobs in the submit path
  for those channels. Previously, railgating blocked deterministic submits
  in general: the gk20a_busy()/gk20a_idle() calls in the submit path can
  take time and, more significantly, the GPU may need turning on, which
  takes a long and nondeterministic amount of time.

  As an exception, gk20a_do_idle() can still block deterministic submits
  until gk20a_do_unidle() is called; add a rwsem to guard this. VPR resize
  needs do_idle, which conflicts with the deterministic channels'
  requirement to keep the GPU on. This is now documented in the ioctl
  header.

  Make NVGPU_GPU_FLAGS_SUPPORT_DETERMINISTIC_SUBMIT_NO_JOBTRACKING always
  set in the gpu characteristics now that it is supported. The only thing
  still blocking NVGPU_GPU_FLAGS_SUPPORT_DETERMINISTIC_SUBMIT_FULL is the
  sync framework.

  Make the channel debug dump show which channels are deterministic.

  Bug 200291300
  Jira NVGPU-70
  Change-Id: I47b6f3a8517cd6e4255f6ca2855e3dd912e4f5f3
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/1483038
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
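  As an illustration of the rwsem pattern this commit describes, here is a
  minimal kernel-C sketch; the variable and function names are invented and
  the real nvgpu code differs in detail. Many deterministic submits can hold
  the read side at once, while do_idle takes the write side and waits for
  them all to drain.

    #include <linux/rwsem.h>

    static DECLARE_RWSEM(deterministic_busy);   /* illustrative name */

    /* Submit path of a deterministic channel: read side, many holders. */
    static int deterministic_submit_example(void)
    {
            down_read(&deterministic_busy);
            /* ... push work; the GPU stays powered and cannot be idled ... */
            up_read(&deterministic_busy);
            return 0;
    }

    /* do_idle path: write side, blocks until in-flight submits finish. */
    static void do_idle_example(void)
    {
            down_write(&deterministic_busy);
            /* ... rail-gate / reset the GPU ... */
            up_write(&deterministic_busy);      /* the do_unidle counterpart */
    }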
* gpu: nvgpu: move debugfs code to linux module (Deepak Nibade, 2017-06-02)

  Since all debugfs code is Linux specific, remove it from common code and
  move it to the Linux module. The debugfs code is now divided into the
  module-specific files below:

    common/linux/debug.c
    common/linux/debug_cde.c
    common/linux/debug_ce.c
    common/linux/debug_fifo.c
    common/linux/debug_gr.c
    common/linux/debug_mm.c
    common/linux/debug_allocator.c
    common/linux/debug_kmem.c
    common/linux/debug_pmu.c
    common/linux/debug_sched.c

  Add corresponding header files for the above modules too, and compile all
  of the above files only if CONFIG_DEBUG_FS is set.

  Some more details of the changes made:
  - Move and rename gk20a/debug_gk20a.c to common/linux/debug.c
  - Move and rename gk20a/debug_gk20a.h to include/nvgpu/debug.h
  - Remove gm20b/debug_gm20b.c and gm20b/debug_gm20b.h and call
    gk20a_init_debug_ops() directly from gm20b_init_hal()
  - Update all debug APIs to receive struct gk20a as parameter instead of
    receiving a struct device pointer
  - Update API gk20a_dmabuf_get_state() to receive a struct gk20a pointer
    instead of struct device
  - Include <nvgpu/debug.h> explicitly in all files where debug operations
    are used
  - Remove the "gk20a/platform_gk20a.h" include from HAL files which no
    longer need it
  - Add new API gk20a_debug_deinit() to deinitialize debugfs and call it
    from gk20a_remove()
  - Move API gk20a_debug_dump_all_channel_status_ramfc() to
    gk20a/fifo_gk20a.c

  Jira NVGPU-62
  Change-Id: I076975d3d7f669bdbe9212fa33d98529377feeb6
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1488902
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
* gpu: nvgpu: add fifo ops get_mmu_fault_info (Seema Khowala, 2017-05-30)

  This is needed to take care of gp10b h/w header changes. gp10b changes
  compared to legacy gpu chips:
  - fault_info_fault_type field width is changed
  - fault_info_write field is removed
  - fault_info_access_type field is added
  - fault_info_engine_subid is removed
  - fault_info_client_type is added
  - fault_info_client field width has changed

  JIRA GPUT19X-7
  JIRA GPUT19X-12
  Change-Id: Iebf28cc6c851830524049b67a71cd72fb4a28948
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/1487319
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
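  For orientation, a hypothetical container for the fields listed above
  could look like the sketch below; this is illustrative only, and the real
  struct mmu_fault_info in the nvgpu headers may differ in naming and
  layout.

    #include <linux/types.h>

    struct mmu_fault_info_example {
            u64 inst_ptr;          /* faulting instance block pointer        */
            u32 fault_type;        /* field width changed on gp10b           */
            u32 access_type;       /* new on gp10b; replaces the 'write' bit */
            u32 client_type;       /* new on gp10b; replaces engine_subid    */
            u32 client_id;         /* field width changed on gp10b           */
    };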
* gpu: nvgpu: use mmu_fault_info struct for legacy gpu chips (Seema Khowala, 2017-05-30)

  Removed the fifo_mmu_fault_info_gk20a struct in favor of the new
  mmu_fault_info struct.

  JIRA GPUT19X-7
  JIRA GPUT19X-12
  Change-Id: I1987ff1b07e7dbdbee58d7e5f585faacf4846e54
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/1487240
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add chip specific sync point support (seshendra Gadagottu, 2017-05-26)

  Added support for a chip-specific sync point implementation. Relevant
  fifo hal functions are added and updated for legacy chips.

  JIRA GPUT19X-2
  Change-Id: I9a9c36d71e15c384b5e5af460cd52012f94e0b04
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: http://git-master/r/1258232
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Track also pushbuf get for watchdog (Konsta Holtta, 2017-05-24)

  Make the watchdog also notice fine-grained progress within a single
  pushbuffer. By tracking just the gpfifo get, the watchdog could wake up
  when the channel was not really stuck but merely processing a relatively
  large or slow pushbuffer.

  Jira NVGPU-72
  Change-Id: I15374eea5d9abc9d3725a79d0b960503237e478c
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/1485919
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
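  A small sketch of the kind of progress check this enables, with assumed
  field names: the watchdog compares both the gpfifo GET and the pushbuffer
  GET against the values sampled when the timer was armed, so a channel that
  is slowly consuming a large pushbuffer still counts as making progress.

    #include <linux/types.h>

    struct wdt_progress_example {
            u32 gp_get;    /* gpfifo entry currently being fetched      */
            u64 pb_get;    /* byte offset within the current pushbuffer */
    };

    static bool channel_made_progress(const struct wdt_progress_example *then,
                                      const struct wdt_progress_example *now)
    {
            return now->gp_get != then->gp_get || now->pb_get != then->pb_get;
    }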
* gpu: nvgpu: add ioctls to get current timeslice (Thomas Fleury, 2017-05-24)

  Add the following ioctls:
  - NVGPU_CHANNEL_IOCTL_GET_TIMESLICE for the channel timeslice in us
  - NVGPU_TSG_IOCTL_GET_TIMESLICE for the TSG timeslice in us

  If the timeslice has not been set explicitly, the ioctl returns the
  default timeslice that will be used when programming the runlist entry.

  Bug 1883271
  Change-Id: Ib18fdd836323b1a2d4efceb1e27d07713bd6fca5
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1469040
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
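  A hypothetical user-space query could look like the sketch below. The
  ioctl name comes from the commit message, but the request number and
  argument layout shown here are invented for illustration; the real
  definitions live in the nvgpu uapi header.

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/ioctl.h>

    /* Invented argument struct and request number, for illustration only. */
    struct example_timeslice_args { uint32_t timeslice_us; };
    #define EXAMPLE_CHANNEL_IOCTL_GET_TIMESLICE \
            _IOR('H', 0x1, struct example_timeslice_args)

    static void print_channel_timeslice(int channel_fd)
    {
            struct example_timeslice_args args = { 0 };

            if (ioctl(channel_fd, EXAMPLE_CHANNEL_IOCTL_GET_TIMESLICE, &args) == 0)
                    printf("channel timeslice: %u us\n", args.timeslice_us);
    }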
* gpu: nvgpu: Add wrapper nvgpu/kref.h (Deepak Nibade, 2017-04-17)

  Add wrapper header file nvgpu/kref.h. It #includes <linux/kref.h> in
  Linux.

  JIRA NVGPU-13
  Change-Id: Ib8b002268b1960646986551ecb9f286e1e21e7f6
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1463770
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
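  The general shape of such a wrapper, sketched from the description above
  (not the actual file contents): common code includes <nvgpu/kref.h>,
  which on Linux simply forwards to the kernel's kref implementation.

    /* Illustrative sketch of what include/nvgpu/kref.h does on Linux. */
    #ifndef __NVGPU_KREF_EXAMPLE_H__
    #define __NVGPU_KREF_EXAMPLE_H__

    #include <linux/kref.h>

    #endif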
* gpu: nvgpu: add fifo ops for handling pbdma intr_0 (Seema Khowala, 2017-04-12)

  This is needed to handle bit 20 (clear_faulted_error) and bit 24
  (eng_reset) of the t19x pbdma_intr_0 interrupt.

  JIRA GPUT19X-47
  Change-Id: I07c603eff96344c0104579e339e5cf7f675128ef
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/1306556
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: fifo ops for handling sched error and ctxsw timeout (Seema Khowala, 2017-04-12)

  For t19x, ctxsw timeout is not handled as part of the fifo sched error
  interrupt; a new fifo interrupt, ctxsw_timeout, is added.

  Bug 1856152
  JIRA GPUT19X-74
  Change-Id: I5a2ed15d967e5b14fbbb51b074080f1562bca84c
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/1317599
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: rename mem_desc to nvgpu_mem (Alex Waterman, 2017-04-06)

  Renaming was done with the following command:

    $ find -type f | \
        xargs sed -i 's/struct mem_desc/struct nvgpu_mem/g'

  Also rename mem_desc.[ch] to nvgpu_mem.[ch].

  JIRA NVGPU-12
  Change-Id: I69395758c22a56aa01e3dffbcded70a729bf559a
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1325547
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add teardown_ch_tsg fifo ops (Seema Khowala, 2017-04-04)

  The teardown_ch_tsg fifo op is added because the t19x s/w recovery
  procedure is different from that of legacy chips.

  JIRA GPUT19X-7
  Change-Id: I5b88f2c1a19d309e5c97c588ddf9689163a75fea
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/1327932
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: use new List APIs to free channels (Deepak Nibade, 2017-03-31)

  Use the new APIs from <nvgpu/list.h> to access the free channel list.
  Define channel_gk20a_from_free_chs() to convert a list node to a
  struct channel_gk20a.

  Jira NVGPU-13
  Change-Id: Idaf58f04be1c7fc553bea7c8de45951bf82bb340
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1303025
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
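  Helpers like channel_gk20a_from_free_chs() typically follow the
  container-of pattern: given a pointer to the list node embedded in the
  channel, subtract the node's offset to recover the channel. The sketch
  below shows that pattern with an assumed field name (free_chs); it is not
  the exact nvgpu implementation.

    #include <linux/stddef.h>

    static inline struct channel_gk20a *
    channel_gk20a_from_free_chs_example(struct nvgpu_list_node *node)
    {
            /* 'free_chs' is the assumed name of the embedded list node. */
            return (struct channel_gk20a *)((char *)node -
                    offsetof(struct channel_gk20a, free_chs));
    }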
* gpu: nvgpu: Move programming of host registers to fifo (Terje Bergstrom, 2017-03-28)

  Move code that touches host registers and the instance block to the fifo
  HAL. This involves adding HAL ops for the fifo HAL functions that get
  called from outside fifo. This narrows the responsibility of channel,
  leaving it to manage only channels in software and push buffers.

  channel had a ramfc member defined, but it was not used, so remove it.
  pbdma_acquire_val consisted of both channel logic and hardware
  programming; the channel logic was moved to the caller and only the
  hardware programming was moved to the fifo HAL.

  Change-Id: Id005787f6cc91276b767e8e86325caf966913de9
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1322423
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add and reorg reset_enable_hw fifo ops (Seema Khowala, 2017-03-28)

  fifo reset_enable_hw is reorganized to clear and enable pbdma/fifo
  interrupts only after all the required configuration, such as setting
  timeouts and enabling timeout detection, has been taken care of.

  JIRA GPUT19X-74
  JIRA GPUT19X-47
  Change-Id: Id780cc11d858db18f8d748c037954ede73298506
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/1325351
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: *ERROR_MMU_ERR_FLT* not set for fake mmu faults (Seema Khowala, 2017-03-23)

  For fake faults, error notifiers are expected to be set before
  triggering the fake mmu fault.

  JIRA GPUT19X-7
  Change-Id: I458af8d95c5960f20693b6923e1990fe3aa59857
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/1323413
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add function to enable/disable runlists sched (Seema Khowala, 2017-03-23)

  - gk20a_fifo_set_runlist_state() can be used to enable/disable the
    runlist scheduler. This change will also be needed for t19x fifo
    recovery.
  - Also delete the gk20a_fifo_disable_all_engine_activity function as it
    is not used anywhere.

  JIRA GPUT19X-7
  Change-Id: I6bb9a7574a473327f0e47060f32d52cd90551c6d
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/1315180
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add is_preempt_pending fifo ops (Seema Khowala, 2017-03-21)

  The is_preempt_pending fifo op is added because the t19x preempt-done
  sequence is different from that of legacy chips.

  Change-Id: I6b46be1f5b911ae11bbe806968cb8fabb21848e0
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/1309678
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add fifo ops for intr_0_error_mask (Seema Khowala, 2017-03-20)

  This change is required to support t19x mmu fault handling.

  Change-Id: I3953dcf02c71ace606ba81896e56ea98683eb2ca
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/1313482
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: channel_from_inst_ptr renamed and made non-static (Seema Khowala, 2017-03-14)

  Required to support t19x mmu fault handling.

  Change-Id: Ibe621d924717696a359d7e2065beb6501a9f9b5e
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/1315928
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: refactor interrupt handling (David Nieto, 2017-03-14)

  JIRA: EVLR-1004

  (*) Refactor the non-stalling interrupt path to execute the clear in the
      top half, so that in the dGPU case processing of stalling interrupts
      does not block the non-stalling one.
  (*) Use a worker thread to do semaphore wakeups and allow batching of
      the non-stalling operations.
  (*) Fix a bug where some GPUs would not properly track the completion of
      interrupts, preventing safe driver unloads.

  Change-Id: Icc90a3acba544c97ec6a9285ab235d337ab9eefa
  Signed-off-by: David Nieto <dmartineznie@nvidia.com>
  Reviewed-on: http://git-master/r/1312796
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: Navneet Kumar <navneetk@nvidia.com>
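  The first point follows the usual top-half/bottom-half split; a hedged
  sketch of that pattern is below. The type, field, and helper names are
  invented, and the actual nvgpu handler differs.

    #include <linux/interrupt.h>
    #include <linux/workqueue.h>
    #include <linux/types.h>

    struct example_gpu {
            struct workqueue_struct *nonstall_wq;
            struct work_struct nonstall_work;
    };

    /* invented accessor: read and clear the non-stalling interrupt bits */
    u32 read_and_clear_nonstall_intr(struct example_gpu *g);

    /* Top half: clear only, then defer the semaphore wakeups to a worker
     * so they can be batched and never block stalling-intr processing. */
    static irqreturn_t nonstall_isr_example(int irq, void *data)
    {
            struct example_gpu *g = data;
            u32 pending = read_and_clear_nonstall_intr(g);

            if (!pending)
                    return IRQ_NONE;

            queue_work(g->nonstall_wq, &g->nonstall_work);  /* bottom half */
            return IRQ_HANDLED;
    }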
* gpu: nvgpu: debug dump enablement for t19x (Seema Khowala, 2017-03-09)

  Fifo ops added for dumping channel & ramfc status and pbdma & engine
  status.

  Change-Id: Icc739f4f05f0864721954489517fefdfa2fa608a
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/1302369
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: in-kernel kickoff profiling (David Nieto, 2017-03-07)

  Add a debugfs interface to profile the kickoff ioctl. It provides a
  probability distribution and breaks the time down into: the full ioctl,
  the kickoff function, the time spent in job tracking, and the time spent
  doing pushbuffer copies.

  JIRA: EVLR-1003
  Change-Id: I9888b114c3fbced61b1cf134c79f7a8afce15f56
  Signed-off-by: David Nieto <dmartineznie@nvidia.com>
  Reviewed-on: http://git-master/r/1308997
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add fifo ops for *client_type_gpc_v (Seema Khowala, 2017-03-07)

  *client_type_gpc_v is different for t19x.

  Change-Id: Ic8f8eff2d98138a877ef95c6f7f40226f0d61a61
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/1313436
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add pbdma and eng bitmask for runlists (Seema Khowala, 2017-02-27)

  - Init pbdma and engine bit masks per runlist.
  - Organize debug info to print the supported pbdma instances for a
    particular runlist.

  JIRA GV11B-3
  Change-Id: Ie34dd98ccbe2c779ca1c795855c2a7df4abd2715
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: http://git-master/r/1309706
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use common nvgpu mutex/spinlock APIs (Deepak Nibade, 2017-02-22)

  Instead of using Linux APIs for mutexes and spinlocks directly, use the
  new APIs defined in <nvgpu/lock.h>.

  Replace Linux-specific mutex/spinlock declaration, init, lock and unlock
  APIs with the new APIs; e.g. struct mutex is replaced by struct
  nvgpu_mutex and mutex_lock() is replaced by nvgpu_mutex_acquire(). Also
  include <nvgpu/lock.h> instead of <linux/mutex.h> and <linux/spinlock.h>.

  Add explicit nvgpu/lock.h includes to the files below to fix compilation
  failures:
    gk20a/platform_gk20a.h
    include/nvgpu/allocator.h

  Jira NVGPU-13
  Change-Id: I81a05d21ecdbd90c2076a9f0aefd0e40b215bd33
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1293187
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
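  A before/after flavour of the conversion, sketched with an invented
  function; struct nvgpu_mutex, nvgpu_mutex_acquire() and <nvgpu/lock.h>
  are quoted above, while the matching nvgpu_mutex_release() call is
  assumed to be the counterpart.

    #include <nvgpu/lock.h>

    static struct nvgpu_mutex example_lock;

    static void example_critical_section(void)
    {
            /* was: mutex_lock(&example_lock) on a struct mutex */
            nvgpu_mutex_acquire(&example_lock);
            /* ... touch shared fifo state ... */
            nvgpu_mutex_release(&example_lock);
            /* was: mutex_unlock(&example_lock) */
    }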
* gpu: nvgpu: use HAL to set TSG timeslice (Thomas Fleury, 2017-01-16)

  Setting the timeslice in the virtualized case was not effective, because
  both ioctls NVGPU_TSG_IOCTL_SET_TIMESLICE and
  NVGPU_SCHED_IOCTL_TSG_SET_TIMESLICE were calling the native function to
  set the TSG timeslice.

  - Fixed the wrapper function to call the HAL
  - Defined a HAL function for the "native" set TSG timeslice
  - Also, properly update timeout_us in the TSG context in the virtualized
    case

  This change also moves the min/max bounds checking for the TSG timeslice
  into the native function implementation. There is no sysfs node for
  these parameters for vgpu, as the RM server is ultimately responsible
  for this check.

  Bug 200263575
  Change-Id: Ibceab9427561ad58ec28abfff0c96ca8f592bdb9
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1283180
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Fix signed comparison bugs (Terje Bergstrom, 2016-11-17)

  Fix small problems related to signed versus unsigned comparisons
  throughout the driver. Bump up the warning level to prevent such
  problems from occurring in the future.

  Change-Id: I8ff5efb419f664e8a2aedadd6515ae4d18502ae0
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1252068
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: userd allocation from sysmem (seshendra Gadagottu, 2016-10-11)

  When bar1 memory is not supported, userd will be allocated from sysmem.
  The gp_get and gp_put functions are updated accordingly.

  JIRA GV11B-1
  Change-Id: Ia895712a110f6cca26474228141488f5f8ace756
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: http://git-master/r/1225384
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: create chip specific runlist entry (seshendra Gadagottu, 2016-09-21)

  To handle chip-specific runlist entry size and structure, add and
  implement the relevant function pointers.

  Bug 1735760
  Change-Id: I01f3ea78fb21d9fe30c82ba51ef24d7d95ebf90a
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: http://git-master/r/1214473
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: When powering down, abort if not idle (Terje Bergstrom, 2016-09-15)

  When trying to power down the GPU, the engine might still be busy. In
  this case delay power down by returning -EBUSY from
  gk20a_pm_runtime_suspend().

  Bug 200224907
  Change-Id: Ibad74c090add24a185bc1a7a02df367af9b95ced
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1213042
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use force_reset_ch in ch wdt handler (Richard Zhao, 2016-08-18)

  - let force_reset_ch pass down the error code
  - the force_reset_ch callback can cover vgpu too

  Bug 1776876
  JIRA VFND-2151
  Change-Id: I48f7890294c6455247198e0cab5f21f83f61f0e1
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1202255
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: fix deferred engine reset sequence (Deepak Nibade, 2016-08-12)

  We currently store the fault_id into fifo.deferred_fault_engines and use
  that in gk20a_fifo_reset_engine(), which is incorrect. Also, in the
  deferred engine reset path during channel close, we do not check whether
  the channel is loaded on the engine or not.

  Fix this as below:
  - store engine_id bits into fifo.deferred_fault_engines
  - define new API gk20a_fifo_deferred_reset() to perform deferred engine
    reset
  - get all engines on which the channel is loaded with
    gk20a_fifo_engines_on_id()
  - for each set bit/engine_id in fifo.deferred_fault_engines, check if
    the channel is loaded on that engine, and if yes, reset the engine

  Bug 1791696
  Change-Id: I1b8b1a9e3aa538fe6903a352aa732b47c95ec7d5
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1195087
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
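  The last two bullets amount to a walk over the set bits of the deferred
  engines mask, roughly as sketched below. fifo.deferred_fault_engines and
  gk20a_fifo_reset_engine() are quoted from the message; the load-check
  helper is an invented placeholder.

    #include <linux/bitops.h>
    #include <linux/types.h>

    /* invented helper: is this channel currently loaded on the engine? */
    bool example_channel_loaded_on_engine(struct gk20a *g,
                                          unsigned long engine_id,
                                          struct channel_gk20a *ch);

    static void deferred_reset_example(struct gk20a *g,
                                       struct channel_gk20a *ch)
    {
            unsigned long engines = g->fifo.deferred_fault_engines;
            unsigned long engine_id;

            for_each_set_bit(engine_id, &engines, BITS_PER_LONG) {
                    if (example_channel_loaded_on_engine(g, engine_id, ch))
                            gk20a_fifo_reset_engine(g, engine_id);
            }
    }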
* gpu: nvgpu: Add nvgpu infra to allow kernel to create privileged CE channels (Lakshmanan M, 2016-07-20)

  Added an interface to allow the kernel to create privileged CE channels
  for page migration and clearing support between sysmem and vidmem.

  JIRA DNVGPU-53
  Change-Id: I3e18d18403809c9e64fa45d40b6c4e3844992506
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: http://git-master/r/1173085
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: vgpu: Add CE engine to engine list (Terje Bergstrom, 2016-06-24)

  Add the CE engine to the vgpu engine list. The CE engine is defined
  differently for different GPUs, so we also add a HAL for initializing
  the engine info.

  Bug 1780185
  Change-Id: I5ae265551feac08d0c4d45402dd3277514e62b2d
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1169720
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Tested-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Lakshmanan M <lm@nvidia.com>
* gpu: nvgpu: Add uapi support for non-graphics engines (Lakshmanan M, 2016-06-13)

  Extend the existing NVGPU_GPU_IOCTL_OPEN_CHANNEL interface to allow
  opening channels for runlists other than the primary (i.e., the
  graphics) one. This is required to push work to dGPU engines that have
  their own runlists, such as the asynchronous copy engines and the
  multimedia engines.

  Minor change: added active_engines_list allocation and assignment for
  the fifo_vgpu back end.

  JIRA DNVGPU-25
  Change-Id: I3ed377e2c9a2b4dd72e8256463510a62c64e7a8f
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: http://git-master/r/1161541
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add multiple engine and runlist support (Lakshmanan M, 2016-06-07)

  This CL covers the following modifications:
  1) Added multiple engine_info support
  2) Added multiple runlist_info support
  3) Initial changes for ASYNC CE support
  4) Added ASYNC CE interrupt handling support for the gm206 GPU family
  5) Added a generic mechanism to identify the CE engine pri_base address
     for gm206 (CE0, CE1 and CE2)
  6) Removed hard-coded engine_id logic and made it generic
  7) Code cleanup for readability

  JIRA DNVGPU-26
  Change-Id: I2c3846c40bcc8d10c2dfb225caa4105fc9123b65
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: http://git-master/r/1155963
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: fix TSG abort sequence (Deepak Nibade, 2016-06-01)

  In gk20a_fifo_abort_tsg(), we loop through the channels of the TSG and
  call gk20a_channel_abort() for each channel. This is incorrect since we
  disable and preempt each channel separately, whereas we should disable
  all channels at once and use the TSG-specific API to preempt the TSG.

  Fix this with the sequence below:
  - gk20a_disable_tsg() to disable all channels
  - preempt the TSG if required
  - for each channel in the TSG:
    - set the has_timedout flag
    - call gk20a_channel_abort_clean_up() to clean up channel state

  Also, separate out a common gk20a_channel_abort_clean_up() API which can
  be called from both the channel and TSG abort routines.

  In gk20a_channel_abort(), call gk20a_fifo_abort_tsg() if the channel is
  part of a TSG. Add a new argument "preempt" to gk20a_fifo_abort_tsg()
  and preempt the TSG if the flag is set.

  Bug 200205041
  Change-Id: I4eff5394d26fbb53996f2d30b35140b75450f338
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1157190
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add device_info_data support (Lakshmanan M, 2016-05-27)

  Added device_info_data parsing support for the Maxwell GPU series.

  JIRA DNVGPU-26
  Change-Id: I06dbec6056d4c26501e607c2c3d67ef468d206f4
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: http://git-master/r/1151602
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Read all fields of device_info (Terje Bergstrom, 2016-05-18)

  We were not using the engine_type field in device_info, and the code did
  not handle chained entries properly. The code assumed that the first
  entry is for graphics and the second for CE, which is not always true.

  Improve the code to go through all entries of device_info, and preserve
  values across entries until we reach the last entry. Only the last entry
  triggers a write to fifo engine info. There can also be multiple engines
  with the same type, so accumulate interrupts and reset ids from all of
  them.

  As the code got fixed, it now reads the engine enum correctly from
  hardware. We used to compare that against CE0, but we should compare
  against CE2.

  gk20a_fifo_reset_engine() uses wrong constants: it is passed an internal
  numbering of engines, but it compares them against the hardware engine
  enum.

  Change-Id: Ia59273921c602d2a090f7a5b1404afb0fca2532c
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1147746
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
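  A simplified sketch of the accumulate-until-last-entry walk described
  above; every accessor and field name here is a placeholder, and the real
  register parsing is far more detailed.

    #include <linux/types.h>

    struct example_dev_info_entry {
            bool is_last_in_chain;
            u32  engine_type;
            u32  intr_id_bit;
            u32  reset_id_bit;
    };

    /* invented accessors standing in for the real device_info reads */
    u32 example_device_info_count(struct gk20a *g);
    struct example_dev_info_entry example_read_entry(struct gk20a *g, u32 i);
    void example_commit_engine_info(struct gk20a *g, u32 engine_type,
                                    u32 intr_mask, u32 reset_mask);

    static void parse_device_info_example(struct gk20a *g)
    {
            u32 intr_mask = 0, reset_mask = 0;
            u32 i;

            for (i = 0; i < example_device_info_count(g); i++) {
                    struct example_dev_info_entry e = example_read_entry(g, i);

                    /* accumulate fields across chained entries */
                    intr_mask |= e.intr_id_bit;
                    reset_mask |= e.reset_id_bit;

                    if (!e.is_last_in_chain)
                            continue;

                    /* only the last entry triggers a write to engine info */
                    example_commit_engine_info(g, e.engine_type,
                                               intr_mask, reset_mask);
                    intr_mask = 0;
                    reset_mask = 0;
            }
    }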
* gpu: nvgpu: Add trace and debugfs for sched params (Thomas Fleury, 2016-05-05)

  JIRA EVLR-244
  JIRA EVLR-318
  Change-Id: Ie95f42212dadcf2d0c1737eeb28812afb03b712f
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1120603
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Ken Adams <kadams@nvidia.com>
* gpu: nvgpu: improve channel interleave support (Aingara Paramakuru, 2016-03-15)

  Previously, only "high" priority bare channels were interleaved between
  all other bare channels and TSGs. This patch decouples priority from
  interleaving and introduces 3 levels for interleaving a bare channel or
  TSG: high, medium, and low. The levels define the number of times a
  channel or TSG will appear on a runlist (see nvgpu.h for details).

  By default, all bare channels and TSGs are set to interleave level low.
  Userspace can then request the interleave level to be increased via the
  CHANNEL_SET_RUNLIST_INTERLEAVE ioctl (a TSG-specific ioctl will be added
  later).

  As timeslice settings will soon be coming from userspace, the default
  timeslice for "high" priority channels has been restored.

  JIRA VFND-1302
  Bug 1729664
  Change-Id: I178bc1cecda23f5002fec6d791e6dcaedfa05c0c
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/1014962
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
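  For illustration, the three levels could be modelled as below; the
  identifiers are placeholders, and the actual values and repeat counts are
  defined in nvgpu.h as noted above.

    enum example_runlist_interleave_level {
            EXAMPLE_INTERLEAVE_LEVEL_LOW = 0,  /* entry appears least often */
            EXAMPLE_INTERLEAVE_LEVEL_MEDIUM,   /* appears more often        */
            EXAMPLE_INTERLEAVE_LEVEL_HIGH,     /* appears most often        */
    };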
* gpu: nvgpu: separate API to issue preempt (Deepak Nibade, 2016-02-05)

  Export a separate API gk20a_fifo_issue_preempt() to issue a preempt
  request to a channel or TSG.

  Bug 200156699
  Change-Id: Ib3b097ef66a6411d75c1fe213cdbe8b1d08d3418
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/935771
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: support preprocessing of SM exceptions (Deepak Nibade, 2016-01-13)

  Support preprocessing of SM exceptions if the API pointer
  pre_process_sm_exception() is defined. Also, expose some common APIs.

  Bug 200156699
  Change-Id: I1303642c1c4403c520b62efb6fd83e95eaeb519b
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/925883
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add high priority channel interleave (Peter Pipkorn, 2016-01-11)

  Interleave all high priority channels between all other channels. This
  reduces the latency for high priority work when there is a lot of lower
  priority work present, imposing an upper bound on the latency.

  Change the default high priority timeslice from 5.2ms to 3.0ms in the
  process, to prevent long-running high priority apps from hogging the GPU
  too much.

  Introduce a new debugfs node to enable/disable high priority channel
  interleaving. It is currently enabled by default.

  Add a new runlist length max register, used for allocating a suitably
  sized runlist. Limit the number of interleaved channels to 32.

  This change reduces the maximum time a lower priority job runs (one
  timeslice) before we check that high priority jobs are running.

  Tested with gles2_context_priority (still passes). Basic sanity testing
  was done with graphics_submit (one app is high priority), plus more
  functional testing using lots of parallel runs with:
    NVRM_GPU_CHANNEL_PRIORITY=3 ./gles2_expensive_draw --drawsperframe 20000
      --triangles 50 --runtime 30 --finish
  plus multiple:
    NVRM_GPU_CHANNEL_PRIORITY=2 ./gles2_expensive_draw --drawsperframe 20000
      --triangles 50 --runtime 30 --finish

  Previous to this change, the relative performance between high priority
  work and normal priority work came down to the timeslice value. This
  means that when there are many low priority channels, the high priority
  work will still drop quite a lot. But with this change, the high
  priority work will get roughly half of the entire GPU time, meaning that
  after the initial lower performance, it is less likely to drop further
  due to more apps running on the system.

  This change makes a large step towards real priority levels. It is not
  perfect and there are no guarantees on anything, but it is a step
  forward without any additional CPU overhead or other complications. It
  will also serve as a baseline to judge other algorithms against.

  Support for priorities with TSG is future work. Support for interleaving
  mid + high priority channels, instead of just high, is also future work.

  Bug 1419900
  Change-Id: I0f7d0ce83b6598fe86000577d72e14d312fdad98
  Signed-off-by: Peter Pipkorn <ppipkorn@nvidia.com>
  Reviewed-on: http://git-master/r/805961
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: set aggressive_sync_destroy at runtime (Deepak Nibade, 2015-11-23)

  We currently set the "aggressive_destroy" flag to destroy sync objects
  statically and for each sync object.

  Move this flag to the per-platform structure so that it can be set
  per-platform for all the sync objects. Also, set the default value of
  this flag to "false" and set it to "true" once we have more than 64
  channels in use.

  Bug 200141116
  Change-Id: I1bc271df4f468a4087a06a27c7289ee0ec3ef29c
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/822041
  (cherry picked from commit 98741e7e88066648f4f14490c76b61dbff745103)
  Reviewed-on: http://git-master/r/835800
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
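  The runtime rule reduces to a small helper along these lines; the struct
  and field names are assumptions, and only the 64-channel threshold and
  the flag's intent come from the message above.

    #include <linux/types.h>

    struct example_platform {
            bool aggressive_sync_destroy;   /* assumed per-platform flag */
    };

    static void example_update_aggressive_sync_destroy(struct example_platform *p,
                                                       u32 channels_in_use)
    {
            /* default is false; flip to true past the 64-channel threshold */
            p->aggressive_sync_destroy = channels_in_use > 64;
    }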
* gpu: nvgpu: implement per-channel watchdog (Deepak Nibade, 2015-09-28)

  Implement a per-channel watchdog/timer with the following rules:
  - start the timer while submitting the first job on a channel, or if no
    timer is already running
  - cancel the timer when a job completes
  - re-start the timer if there is any incomplete job left in the
    channel's queue
  - trigger the appropriate recovery method as part of the timeout
    handling mechanism

  Handle the timeout as below:
  - get the timed-out channel and job data
  - disable activity on all engines
  - check if the fence is really pending
  - get information on the failing engine
  - if no engine is failing, just abort the channel
  - if an engine is failing, trigger the recovery

  Also, add a flag "ch_wdt_enabled" to enable/disable the channel watchdog
  mechanism. The watchdog can also be disabled using the global flag
  "timeouts_enabled". Set the watchdog time to 5s using the macro
  NVGPU_CHANNEL_WATCHDOG_DEFAULT_TIMEOUT_MS.

  Bug 200133289
  Change-Id: I401cf14dd34a210bc429f31bd5216a361edf1237
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/797072
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
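  The start/cancel/re-start rules above can be read as the small state
  machine sketched here; all names are invented placeholders for the real
  timer plumbing, and the 5000 ms value is the default quoted above.

    #include <linux/types.h>

    struct example_ch_wdt {
            bool running;
    };

    /* invented helpers standing in for the real timer arm/cancel calls */
    void example_wdt_arm(struct example_ch_wdt *wdt, u32 timeout_ms);
    void example_wdt_cancel(struct example_ch_wdt *wdt);

    static void example_wdt_on_job_submit(struct example_ch_wdt *wdt)
    {
            if (!wdt->running) {              /* start only if not running */
                    wdt->running = true;
                    example_wdt_arm(wdt, 5000 /* ms, the 5s default */);
            }
    }

    static void example_wdt_on_job_complete(struct example_ch_wdt *wdt,
                                            bool jobs_still_pending)
    {
            example_wdt_cancel(wdt);          /* cancel when a job completes */
            wdt->running = false;
            if (jobs_still_pending)           /* re-start for remaining jobs */
                    example_wdt_on_job_submit(wdt);
    }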
* gpu: nvgpu: APIs to disable/enable all engines' activity (Deepak Nibade, 2015-09-28)

  Add the APIs below to disable/re-enable activity on all engines:
    gk20a_fifo_disable_all_engine_activity()
    gk20a_fifo_enable_all_engine_activity()

  Bug 200133289
  Change-Id: Ie01a260d587807a3c1712ee32fe870fbcb08f9cd
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/798747
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>