path: root/drivers/gpu/nvgpu/gk20a/channel_gk20a.h
Commit message  (Author, Age)
* gpu: nvgpu: Fold T19x code back to main code paths  (Terje Bergstrom, 2018-01-23)
    Lots of code paths were split to T19x specific code paths and structs due to split repository. Now that repositories are merged, fold all of them back to main code paths and structs and remove the T19x specific Kconfig flag.
    Change-Id: Id0d17a5f0610fc0b49f51ab6664e716dc8b222b6
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1640606
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Make graphics context property of TSG  (Terje Bergstrom, 2018-01-17)
    Move graphics context ownership to TSG instead of channel. Combine channel_ctx_gk20a and gr_ctx_desc to one structure, because the split between them was arbitrary. Move context header to be property of channel.
    Bug 1842197
    Change-Id: I410e3262f80b318d8528bcbec270b63a2d8d2ff9
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1639532
    Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: correct function arguments to fix QNX compilation  (Sourab Gupta, 2018-01-09)
    The patch changes the function argument from 'int' to 'unsigned int' to fix the QNX compilation failures.
    Change-Id: Iaee7850d8310bea693996ac618b95252ca5d1b35
    Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1626397
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Remove bare channel scheduling  (Terje Bergstrom, 2018-01-02)
    Remove scheduling IOCTL implementations for bare channels. Also remove code that constructs bare channels in the runlist.
    Bug 1842197
    Change-Id: I6e833b38e24a2f2c45c7993edf939d365eaf41f0
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1627326
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Remove support for channel events  (Terje Bergstrom, 2017-12-28)
    Remove support for events for bare channels. All users have already moved to TSGs and TSG events.
    Bug 1842197
    Change-Id: Ib3ff68134ad9515ee761d0f0e19a3150a0b744ab
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1618906
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Remove support for bare channels  (Terje Bergstrom, 2017-12-28)
    Remove remaining support for bare channels. All users of bare channels have already moved to TSGs.
    Bug 1842197
    Change-Id: I1ff12677253b160dac9bebe6925ad0839ea07cfc
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1618905
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Disallow use of bare channels  (Terje Bergstrom, 2017-12-12)
    All channels now need to be wrapped in TSGs. Disallow use of bare channels by preventing creation of a GPFIFO for them.
    Bug 1842197
    Change-Id: Id0ebee4c590804b96c09f8951e35ba2680b596e7
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1612697
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: cleanup uapi header includes  (Deepak Nibade, 2017-11-28)
    With the recent rework in nvgpu, most of the <uapi/linux/nvgpu.h> includes are no longer needed, so remove them.
    Remove use of NVGPU_DBG_GPU_REG_OP_* in gk20a/gr_gk20a.c and use the common definition instead.
    Remove use of NVGPU_ALLOC_GPFIFO_FLAGS_REPLAYABLE_FAULTS_ENABLE in gp10b/fifo_gp10b.c by defining a new common flag NVGPU_GPFIFO_FLAGS_REPLAYABLE_FAULTS_ENABLE and then parsing it in the API nvgpu_gpfifo_user_flags_to_common_flags().
    Jira NVGPU-363
    Change-Id: I8e653275ea3f443f24be7284d54f2115636aba3f
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1606108
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

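For illustration, a minimal sketch of what a user-to-common flag translation helper of this kind could look like; only the two flag names come from the commit text, the signature and everything else is an assumption:

    /*
     * Hypothetical sketch of a user->common GPFIFO flag translation.
     * The uapi flag on the left and the common flag on the right are the
     * names used in the commit message; values and signature are made up.
     */
    static u32 nvgpu_gpfifo_user_flags_to_common_flags(u32 user_flags)
    {
            u32 flags = 0;

            if (user_flags & NVGPU_ALLOC_GPFIFO_FLAGS_REPLAYABLE_FAULTS_ENABLE)
                    flags |= NVGPU_GPFIFO_FLAGS_REPLAYABLE_FAULTS_ENABLE;

            return flags;
    }
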
* gpu: nvgpu: move cycle state buffer handler to linux  (Deepak Nibade, 2017-11-28)
    We use the dma_buf pointer cyclestate_buffer_handler in common code. But since this is linux specific, we need to move it out of common code and into linux specific code.
    Move the dma_buf pointer cyclestate_buffer_handler from common channel code to struct nvgpu_channel_linux, and fix all pointer accesses to this handle.
    Move gk20a_channel_free_cycle_stats_buffer() to ioctl_channel.c since it is mostly linux specific. And since gk20a_channel_free_cycle_stats_buffer() needs to be called while closing the channel, call it from nvgpu_channel_close_linux().
    Jira NVGPU-397
    Jira NVGPU-415
    Change-Id: Ifb429e49b8f7a1c9e2bc757f3efdd50b28ceca1f
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1603909
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

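A rough sketch of the resulting split; only nvgpu_channel_linux and cyclestate_buffer_handler are named by the commit, the back-pointer member is an assumption:

    /*
     * Sketch (not the actual definition): the Linux specific per-channel
     * data owns the dma_buf handle, so common channel_gk20a code no
     * longer needs <linux/dma-buf.h> types.
     */
    struct nvgpu_channel_linux {
            struct channel_gk20a *ch;               /* assumed back-pointer */
            struct dma_buf *cyclestate_buffer_handler; /* Linux only */
    };
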
* gpu: nvgpu: move snapshot_client memory handling to linux  (Deepak Nibade, 2017-11-28)
    We currently store the dmabuf fd and dma_buf pointer for gk20a_cs_snapshot_client. But since dma_buf and all related APIs are linux specific, we need to remove them from common code and move them to linux specific code.
    Add a new linux specific structure gk20a_cs_snapshot_client_linux which includes struct gk20a_cs_snapshot_client and the linux specific dma_buf pointer.
    In gk20a_attach_cycle_stats_snapshot(), we first handle all dma_buf related operations and then call gr_gk20a_css_attach().
    Move gk20a_channel_free_cycle_stats_snapshot() to ioctl_channel.c. In gk20a_channel_free_cycle_stats_snapshot(), we call gr_gk20a_css_detach() and then free up the dma_buf in linux specific code.
    We also need to call gk20a_channel_free_cycle_stats_snapshot() while closing the channel, so call it from the linux specific nvgpu_channel_close_linux().
    Jira NVGPU-397
    Jira NVGPU-415
    Change-Id: Ida27240541f6adf31f28d7d7ee4f51651c6d3de2
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1603908
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

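The same wrapper pattern, sketched; the embedded common struct and the presence of a dmabuf fd and dma_buf pointer come from the commit text, the member names are assumptions:

    /*
     * Sketch of the Linux-only wrapper: the common client struct is
     * embedded, the dma_buf bits live next to it, and linux code can get
     * back to the wrapper with a container_of()-style cast.
     */
    struct gk20a_cs_snapshot_client_linux {
            struct gk20a_cs_snapshot_client cs_client;  /* common part */
            u32 dmabuf_fd;                              /* Linux only */
            struct dma_buf *dma_handler;                /* Linux only */
    };
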
* gpu: nvgpu: define error_notifiers in common code  (Deepak Nibade, 2017-11-27)
    All the linux specific error_notifier codes are defined in the linux specific header file <uapi/linux/nvgpu.h> and used all over the common driver. But since they are defined in a linux specific file, we need to move all uses of those error_notifiers into linux specific code only.
    Hence define new error_notifiers in include/nvgpu/error_notifier.h and use them in the common code.
    Add a new API nvgpu_error_notifier_to_channel_notifier() to convert a common error_notifier of the form NVGPU_ERR_NOTIFIER_* to a linux specific error notifier of the form NVGPU_CHANNEL_*. Any future additions to error notifiers require updating both forms of error notifiers.
    Move all error notifier related metadata from channel_gk20a (common code) to the linux specific structure nvgpu_channel_linux, and update all accesses to this data to use the new structure instead of channel_gk20a.
    Move and rename below APIs to a linux specific file and declare them in error_notifier.h:
    - nvgpu_set_error_notifier_locked()
    - nvgpu_set_error_notifier()
    - nvgpu_is_error_notifier_set()
    Add below new API and use it in fifo_vgpu.c:
    - nvgpu_set_error_notifier_if_empty()
    Include <nvgpu/error_notifier.h> wherever the new error_notifier codes are used.
    NVGPU-426
    Change-Id: Iaa5bfc150e6e9ec17d797d445c2d6407afe9f4bd
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1593361
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

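A sketch of what the conversion helper could look like; the two example case names follow the NVGPU_ERR_NOTIFIER_*/NVGPU_CHANNEL_* patterns described in the commit but are not copied from the patch:

    /*
     * Hypothetical sketch: map common error notifier codes to the Linux
     * uapi codes. Only a couple of illustrative cases are shown.
     */
    static u32 nvgpu_error_notifier_to_channel_notifier(u32 error_notifier)
    {
            switch (error_notifier) {
            case NVGPU_ERR_NOTIFIER_FIFO_ERROR_IDLE_TIMEOUT:
                    return NVGPU_CHANNEL_FIFO_ERROR_IDLE_TIMEOUT;
            case NVGPU_ERR_NOTIFIER_GR_ERROR_SW_NOTIFY:
                    return NVGPU_CHANNEL_GR_ERROR_SW_NOTIFY;
            default:
                    /* unknown code: pass it through unchanged */
                    return error_notifier;
            }
    }
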
* gpu: nvgpu: use submit callback only in linux code  (Konsta Holtta, 2017-11-22)
    Move the implementation for channel job update callbacks that is based on Linux specific work_struct usage to Linux-specific code. This requires a bit of extra work for allocating OS-specific priv data for channels which is also done in this patch. The priv data will be used more when more OS-specific features are moved.
    Jira NVGPU-259
    Change-Id: I24bc0148a827f375b56a1c96044685affc2d1e8c
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1589321
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: remove NVGPU_ALLOC_OBJ_FLAGS_* from common code  (Deepak Nibade, 2017-11-10)
    In gr_gp10b_alloc_gr_ctx(), we use the linux specific flags NVGPU_ALLOC_OBJ_FLAGS_*. Since common code should be independent of linux specific code, define new flags NVGPU_OBJ_CTX_FLAGS_SUPPORT_* in common code and use them wherever needed.
    Linux code will parse the user flags and send appropriate flags to g->ops.gr.alloc_obj_ctx().
    Also remove use of NVGPU_ALLOC_OBJ_FLAGS_LOCKBOOST_ZERO since this seems to be dead code anyway.
    Jira NVGPU-382
    Change-Id: Id82efe0d46ddc3e2c063610025ea57f283bc3510
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1594452
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: remove NVGPU_ALLOC_GPFIFO_EX_FLAGS_* from common code  (Deepak Nibade, 2017-11-10)
    In gk20a_channel_alloc_gpfifo(), we use linux specific flags NVGPU_ALLOC_GPFIFO_EX_FLAGS_*. Since common code should be independent of linux specific code, define new flags NVGPU_GPFIFO_FLAGS_SUPPORT_* in common code and use them in gk20a_channel_alloc_gpfifo().
    Linux code will parse the user flags and send appropriate flags to gk20a_channel_alloc_gpfifo().
    Jira NVGPU-381
    Change-Id: Ibec51903b3407175fbba727208483b0dc36a5772
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1594422
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: support tuning per-ch deterministic opts  (Konsta Holtta, 2017-11-06)
    Add a new ioctl NVGPU_GPU_IOCTL_SET_DETERMINISTIC_OPTS to adjust deterministic options on a per-channel basis. Currently, the only supported option is to relax the no-railgating requirement on open deterministic channels. This also disallows submits on such channels, until the railgate option is reset.
    Bug 200327089
    Change-Id: If4f0f51fd1d40ad7407d13638150d7402479aff0
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1554563
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: protect stacktrace.h include with config  (Deepak Nibade, 2017-11-06)
    struct stack_trace is protected with the config GK20A_CHANNEL_REFCOUNT_TRACKING, hence protect the linux/stacktrace.h header include in gk20a/channel_gk20a.h with the same config.
    Jira NVGPU-259
    Change-Id: I365a4faa7eb071dd559e9b27fe03377dede7484d
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1592603
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

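In practice the guarded include looks roughly like this; whether the guard symbol carries a CONFIG_ prefix depends on how the option is wired up, the sketch uses the name as written in the commit:

    /* gk20a/channel_gk20a.h, roughly */
    #ifdef GK20A_CHANNEL_REFCOUNT_TRACKING
    #include <linux/stacktrace.h>   /* only needed by the ref tracking code */
    #endif
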
* gpu: nvgpu: move submit path to linux  (Deepak Nibade, 2017-11-02)
    The nvgpu submit path has a lot of dependency on the Linux framework, e.g. use of copy_from_user, use of structures defined in uapi/nvgpu headers, dma_buf_* calls for trace support, etc. Hence, to keep common code independent of Linux code, move the submit path to the Linux directory.
    Move below APIs to common/linux/channel.c:
    - trace_write_pushbuffer()
    - trace_write_pushbuffer_range()
    - gk20a_submit_prepare_syncs()
    - gk20a_submit_append_priv_cmdbuf()
    - gk20a_submit_append_gpfifo()
    - gk20a_submit_channel_gpfifo()
    Move below APIs to common/linux/ce2.c:
    - gk20a_ce_execute_ops()
    Define gk20a_ce_execute_ops() in common/linux/ce2.c, and declare it in gk20a/ce2_gk20a.h since it is needed in common/mm code too. Each OS needs to implement this API separately.
    gk20a_channel_alloc_gpfifo() uses sizeof(nvgpu_gpfifo) to get the size of one gpfifo entry, but the structure nvgpu_gpfifo is linux specific. Define a new nvgpu_get_gpfifo_entry_size() in linux specific code and use it in gk20a_channel_alloc_gpfifo() to get the gpfifo entry size. Each OS needs to implement this API separately.
    Export some APIs from gk20a/ce2_gk20a.h and gk20a/channel_gk20a.h that are needed in linux code.
    Jira NVGPU-259
    Jira NVGPU-313
    Change-Id: I360c6cb8ce4494b1e50c66af334a2a379f0d2dc4
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1586277
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

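A plausible Linux-side implementation of the new per-OS helper, given that the commit says it should return sizeof(nvgpu_gpfifo); the u32 return type is an assumption:

    /* common/linux/channel.c, sketch: the GPFIFO entry layout on Linux is
     * the uapi struct nvgpu_gpfifo, so the size helper is trivial. */
    u32 nvgpu_get_gpfifo_entry_size(void)
    {
            return sizeof(struct nvgpu_gpfifo);
    }
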
* gpu: nvgpu: replace wait_queue_head_t with nvgpu_cond  (Debarshi Dutta, 2017-10-16)
    Replace existing usages of wait_queue_head_t with struct nvgpu_cond and use the corresponding APIs in order to reduce Linux dependencies in NVGPU.
    JIRA NVGPU-205
    Change-Id: I85850369c3c47d3e1704e4171b1d172361842423
    Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1575778
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

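Illustrative before/after for one of the channel wait queues; only struct nvgpu_cond is named by the commit, the nvgpu_cond_* function names below are assumed counterparts of the Linux calls:

    /* channel_gk20a member, before (Linux specific): */
    wait_queue_head_t notifier_wq;
    /* ... waking waiters: */
    wake_up_interruptible_all(&ch->notifier_wq);

    /* after (OS-abstracted, names assumed): */
    struct nvgpu_cond notifier_wq;
    /* ... waking waiters: */
    nvgpu_cond_broadcast_interruptible(&ch->notifier_wq);
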
* gpu: nvgpu: protect stack_trace with config  (Deepak Nibade, 2017-10-13)
    We use struct stack_trace in struct channel_gk20a_ref_action. But since channel_gk20a_ref_action is needed only if GK20A_CHANNEL_REFCOUNT_TRACKING is set, protect it with that config.
    Jira NVGPU-259
    Change-Id: I6b2d6f470bf924bb1ddfd31ba9968b56c63c2372
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1576929
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: clean up channel open/release declares  (Deepak Nibade, 2017-10-10)
    Below APIs are already declared in ioctl_channel.h, hence remove the duplicate declarations from channel_gk20a.h:
    - gk20a_channel_open()
    - gk20a_channel_ioctl()
    - gk20a_channel_release()
    Also move the declaration of gk20a_channel_open_ioctl() from channel_gk20a.h to ioctl_channel.h.
    Jira NVGPU-259
    Change-Id: I46702ca481e41a19f92f4fe0169f95e31360abe0
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1573106
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Change license for common files to MIT  (Terje Bergstrom, 2017-09-26)
    Change license of OS independent source code files to MIT.
    JIRA NVGPU-218
    Change-Id: I1474065f4b552112786974a16cdf076c5179540e
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1565880
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: fix channel unbind sequence from TSG  (Deepak Nibade, 2017-09-15)
    Right now, while removing a channel from a TSG, we remove the channel from the TSG list and disable all the channels in the TSG. With this sequence, if any one channel in the TSG is closed, the rest of the channels are set as timed out and cannot be used anymore.
    We need to fix this sequence as below to allow removing a channel from an active TSG so that the rest of the channels can still be used:
    - disable all channels of the TSG
    - preempt the TSG
    - check if CTX_RELOAD is set, if support is available; if CTX_RELOAD is set on the channel, it should be moved to some other channel
    - check if FAULTED is set, if support is available
    - if NEXT is set on the channel then it means the channel is still active; print out an error in this case for the time being, until properly handled
    - remove the channel from the runlist
    - remove the channel from the TSG list
    - re-enable the rest of the channels in the TSG
    - clean up the channel (same as regular channels)
    Add below fifo operations to support checking channel status:
    - g->ops.fifo.tsg_verify_status_ctx_reload
    - g->ops.fifo.tsg_verify_status_faulted
    Define the ops.fifo.tsg_verify_status_ctx_reload operation for gm20b/gp10b/gp106 as gm20b_fifo_tsg_verify_status_ctx_reload(). This API will check if the channel to be released has CTX_RELOAD set; if yes, CTX_RELOAD needs to be moved to some other channel in the TSG.
    Remove static from channel_gk20a_update_runlist() and export it.
    Bug 200327095
    Change-Id: I0dd4be7c7e0b9b759389ec12c5a148a4b919d3e2
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1560637
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

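A condensed sketch of that unbind sequence; only the two g->ops.fifo.tsg_verify_status_* hooks and channel_gk20a_update_runlist() are named by the commit, their argument lists and everything else below are assumptions:

    static int tsg_unbind_channel_sketch(struct gk20a *g,
                    struct channel_gk20a *ch)
    {
            /* 1. disable every channel in the TSG and preempt the TSG */

            /* 2. if this channel holds CTX_RELOAD, migrate it elsewhere */
            if (g->ops.fifo.tsg_verify_status_ctx_reload)
                    g->ops.fifo.tsg_verify_status_ctx_reload(ch);

            /* 3. check the FAULTED bit where the hardware reports it */
            if (g->ops.fifo.tsg_verify_status_faulted)
                    g->ops.fifo.tsg_verify_status_faulted(ch);

            /* 4. NEXT still set means the channel is active: log an error */

            /* 5. drop the channel from the runlist and from the TSG list */
            channel_gk20a_update_runlist(ch, false);

            /* 6. re-enable the remaining channels and clean this one up */
            return 0;
    }
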
* gpu: nvgpu: Add wrapper over atomic_t and atomic64_t  (Debarshi Dutta, 2017-08-17)
    - added wrapper structs nvgpu_atomic_t and nvgpu_atomic64_t over atomic_t and atomic64_t
    - added nvgpu_atomic_* and nvgpu_atomic64_* APIs to access the above wrappers
    JIRA NVGPU-121
    Change-Id: I61667bb0a84c2fc475365abb79bffb42b8b4786a
    Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1533044
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit

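The wrapper idea, sketched; the struct and API name patterns come from the commit, the member name and the exact accessors shown are assumptions:

    /* assumes <linux/atomic.h> on the Linux build */
    typedef struct nvgpu_atomic {
            atomic_t atomic_var;    /* member name assumed */
    } nvgpu_atomic_t;

    static inline void nvgpu_atomic_set(nvgpu_atomic_t *v, int i)
    {
            atomic_set(&v->atomic_var, i);
    }

    static inline int nvgpu_atomic_read(nvgpu_atomic_t *v)
    {
            return atomic_read(&v->atomic_var);
    }

Other OSes can provide their own definition behind the same nvgpu_atomic_* names, which is the point of the wrapper.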
* gpu: nvgpu: Fix gr ctx unmap logic  (Alex Waterman, 2017-07-27)
    The GR context buffers were not being properly unmapped. The awkward VPR vs non-VPR context setup requires some extra checks when determining which nvgpu_mem is associated with what GPU VA (which are tracked separately in a different sized array).
    Change-Id: I4c7be1c5b7835aea4309a142df5b0bdfaae91e4c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1524689
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add support for t19x tsg/channel  (seshendra Gadagottu, 2017-07-05)
    Required modifications to add t19x channel specific info and handle t19x tsg requests.
    Bug 1842197
    Change-Id: I0f8bcce20edea8f2f9a01e5bf5a9e4181af54875
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master/r/1511144
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: rename hw_chid to chid  (Richard Zhao, 2017-06-30)
    hw_chid is a relative id for vgpu. For native it's the same as the hw id. Rename it to chid to avoid confusion.
    Jira VFND-3796
    Change-Id: I1c7924da1757330ace715a7c52ac61ec9dc7065c
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master/r/1509530
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: use time API in channel ref action debug  (Konsta Holtta, 2017-06-19)
    Save the time using nvgpu_current_time_ms() instead of the Linux-specific jiffies counter.
    Jira NVGPU-83
    Change-Id: I19b4296d8b64ddf52506144e77d151f668ff7838
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1503002
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

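Roughly, as a before/after; only nvgpu_current_time_ms() is named by the commit, the field names are illustrative:

    /* before (Linux specific) */
    ref_action->jiffies = jiffies;

    /* after (OS-abstracted time in milliseconds) */
    ref_action->timestamp_ms = nvgpu_current_time_ms();
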
* gpu: nvgpu: hold power ref for deterministic channels  (Konsta Holtta, 2017-06-14)
    To support deterministic channels even on platforms where railgating is supported, have each deterministic-marked channel hold a power reference during its lifetime, and skip taking power refs for jobs in the submit path for those.
    Previously, railgating blocked deterministic submits in general because of gk20a_busy()/gk20a_idle() calls in the submit path possibly taking time and, more significantly, because the gpu may need turning on, which takes a nondeterministic and long amount of time.
    As an exception, gk20a_do_idle() can still block deterministic submits until gk20a_do_unidle() is called. Add a rwsem to guard this. VPR resize needs do_idle, which conflicts with deterministic channels' requirement to keep the GPU on. This is documented in the ioctl header now.
    Make NVGPU_GPU_FLAGS_SUPPORT_DETERMINISTIC_SUBMIT_NO_JOBTRACKING always set in the gpu characteristics now that it's supported. The only thing left now blocking NVGPU_GPU_FLAGS_SUPPORT_DETERMINISTIC_SUBMIT_FULL is the sync framework.
    Make the channel debug dump show which channels are deterministic.
    Bug 200291300
    Jira NVGPU-70
    Change-Id: I47b6f3a8517cd6e4255f6ca2855e3dd912e4f5f3
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1483038
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

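A sketch of the core idea, assuming a boolean deterministic flag on the channel; only gk20a_busy()/gk20a_idle() are named by the commit:

    static int channel_mark_deterministic_sketch(struct channel_gk20a *ch)
    {
            /* take a power reference held for the whole channel lifetime */
            int err = gk20a_busy(ch->g);

            if (err)
                    return err;

            ch->deterministic = true;  /* submit path checks this flag and
                                        * skips its per-job gk20a_busy()/
                                        * gk20a_idle() pair */
            return 0;
    }

The matching gk20a_idle() would then be issued once when the channel is closed or the deterministic marking is removed.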
* gpu: nvgpu: Move error notifier free to Linux module  (Terje Bergstrom, 2017-06-08)
    Freeing the error notifier involves calling the dma_buf API, which is Linux specific. Move the free to happen in the Linux specific channel close path.
    JIRA NVGPU-65
    Change-Id: Ifd8b31bb8c8af13975c34add00f51dd869cfd76a
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1498583
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>

* gpu: nvgpu: Use nvgpu_cond in notifier wq  (Terje Bergstrom, 2017-06-05)
    Change the notifier wait queue to use nvgpu_cond instead of a Linux wait queue.
    JIRA NVGPU-14
    Change-Id: I197a0ef6c0a2331ca0dbb3480bdb89d45ba73020
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1469853
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Automatic_Commit_Validation_User

* gpu: nvgpu: Track also pushbuf get for watchdog  (Konsta Holtta, 2017-05-24)
    Make the watchdog notice also fine-grained changes within a single pushbuffer - by tracking just the gpfifo get, the watchdog could wake when the channel hasn't really been stuck but is processing a relatively large or slow pushbuf.
    Jira NVGPU-72
    Change-Id: I15374eea5d9abc9d3725a79d0b960503237e478c
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1485919
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

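A sketch of the resulting progress check; the state struct and all field names are illustrative, not taken from the source:

    struct channel_wdt_state_sketch {   /* illustrative, not the real struct */
            u32 last_gp_get;            /* last seen GPFIFO get pointer */
            u64 last_pb_get;            /* last seen pushbuffer get pointer */
    };

    static bool channel_wdt_progress_sketch(struct channel_wdt_state_sketch *s,
                    u32 gp_get, u64 pb_get)
    {
            /* movement of either pointer counts as progress, so a long
             * pushbuffer no longer trips the watchdog spuriously */
            if (gp_get != s->last_gp_get || pb_get != s->last_pb_get) {
                    s->last_gp_get = gp_get;
                    s->last_pb_get = pb_get;
                    return true;    /* re-arm the watchdog */
            }
            return false;           /* no progress: let the watchdog fire */
    }
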
* gpu: nvgpu: add ioctls to get current timeslice  (Thomas Fleury, 2017-05-24)
    Add the following ioctls:
    - NVGPU_CHANNEL_IOCTL_GET_TIMESLICE for channel timeslice in us
    - NVGPU_TSG_IOCTL_GET_TIMESLICE for TSG timeslice in us
    If timeslice has not been set explicitly, the ioctl returns the default timeslice that will be used when programming the runlist entry.
    Bug 1883271
    Change-Id: Ib18fdd836323b1a2d4efceb1e27d07713bd6fca5
    Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
    Reviewed-on: http://git-master/r/1469040
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Split VM implementation out  (Alex Waterman, 2017-05-19)
    This patch begins splitting out the VM implementation from mm_gk20a.c and moves it to common/linux/vm.c and common/mm/vm.c. This split is necessary because the VM code has two portions: first, an interface for the OS specific code to use (i.e. userspace mappings), and second, a set of APIs for the driver to use (init, cleanup, etc.) which are not OS specific.
    This is only the beginning of the split - there are still a lot of things that need to be carefully moved around.
    JIRA NVGPU-12
    JIRA NVGPU-30
    Change-Id: I3b57cba245d7daf9e4326a143b9c6217e0f28c96
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1477743
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Use nvgpu_cond in semaphore wq  (Terje Bergstrom, 2017-05-16)
    Change the semaphore wait queue to use nvgpu_cond instead of a Linux wait queue.
    JIRA NVGPU-14
    Change-Id: I3be5097ded168300b4480e986218d9f4fd6104b1
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1469852
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Use nvgpu_cond for channel refcount  (Terje Bergstrom, 2017-05-12)
    Use nvgpu_cond for waiting for all channel accesses to finalize before closing a channel, and for signalling for the same event.
    JIRA NVGPU-14
    Change-Id: Ifac14ad9afe5c44d4443b4a4a94a4d0ad2ea7053
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1469764
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    Reviewed-by: Lakshmanan M <lm@nvidia.com>
    GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: use nvgpu list for dynamic joblist  (Deepak Nibade, 2017-04-12)
    Use nvgpu list APIs instead of linux list APIs for the dynamic joblist.
    Jira NVGPU-13
    Change-Id: I53779037589b1b6260d877d3bc9bd611ea9831ba
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1460576
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: use nvgpu list for channel worker item  (Deepak Nibade, 2017-04-12)
    Use nvgpu list APIs instead of linux list APIs to store channel worker items.
    Jira NVGPU-13
    Change-Id: I01d214810ca2495bd0a644dd1a2816ab8e526981
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1460575
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: use nvgpu list for event id list  (Deepak Nibade, 2017-04-12)
    Use nvgpu list APIs instead of linux list APIs to store event IDs in channels and TSGs.
    Jira NVGPU-13
    Change-Id: I51e4b6ab3b38c845a870901b4d498927ca404a78
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1460574
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: use nvgpu list for channel and debug session lists  (Deepak Nibade, 2017-04-10)
    Use nvgpu list APIs instead of linux list APIs to store the channel list in a debug session and to store the debug session list in a channel.
    Jira NVGPU-13
    Change-Id: Iaf89524955a155adcb8a24505df6613bd9c4ccfb
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1454690
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>

* gpu: nvgpu: rename mem_desc to nvgpu_mem  (Alex Waterman, 2017-04-06)
    Renaming was done with the following command:
        $ find -type f | \
              xargs sed -i 's/struct mem_desc/struct nvgpu_mem/g'
    Also rename mem_desc.[ch] to nvgpu_mem.[ch].
    JIRA NVGPU-12
    Change-Id: I69395758c22a56aa01e3dffbcded70a729bf559a
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1325547
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Move TSG IOCTL code to Linux module  (Terje Bergstrom, 2017-04-04)
    Move TSG IOCTL specific code to the Linux module. This clears most Linux dependencies from tsg_gk20a.c. Also move the remaining file_operations declarations from channel_gk20a.h to ioctl_channel.h.
    JIRA NVGPU-32
    Change-Id: Idcc2a525ebe12b30db46c3893a2735509c41ff39
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1330805
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Move channel IOCTL code to Linux module  (Terje Bergstrom, 2017-04-04)
    Move channel IOCTL specific code to the Linux module. This clears some Linux dependencies from channel_gk20a.c.
    JIRA NVGPU-32
    Change-Id: I41817d612b959709365bcabff9c8a15f2bfe4c60
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1330804
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: use nvgpu list to store ch in TSG  (Deepak Nibade, 2017-04-03)
    Use nvgpu list APIs instead of linux list APIs to store channel entries in a TSG.
    Jira NVGPU-13
    Change-Id: I2f64fffc5c43487e1c9e6ccef59c60f079c09da4
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1454014
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: use new List APIs to free channels  (Deepak Nibade, 2017-03-31)
    Use the new APIs from <nvgpu/list.h> to access the free channel list. Define channel_gk20a_from_free_chs() to convert a list node to a struct channel_gk20a.
    Jira NVGPU-13
    Change-Id: Idaf58f04be1c7fc553bea7c8de45951bf82bb340
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1303025
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

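The conversion helper named above is the usual container_of-style cast from the embedded list node back to its channel; a sketch, with the member name free_chs assumed:

    static inline struct channel_gk20a *
    channel_gk20a_from_free_chs(struct nvgpu_list_node *node)
    {
            /* back out from the embedded list node to the containing
             * channel_gk20a (member name assumed for illustration) */
            return (struct channel_gk20a *)((uintptr_t)node -
                            offsetof(struct channel_gk20a, free_chs));
    }
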
* gpu: nvgpu: Move programming of host registers to fifo  (Terje Bergstrom, 2017-03-28)
    Move code that touches host registers and the instance block to the fifo HAL. This involves adding HAL ops for the fifo HAL functions that get called from outside fifo. This clears the responsibility of channel by leaving it only managing channels in software and push buffers.
    channel had the member ramfc defined, but it was not used, so remove it.
    pbdma_acquire_val consisted of both channel logic and hardware programming. The channel logic was moved to the caller and only the hardware programming part was moved to fifo.
    Change-Id: Id005787f6cc91276b767e8e86325caf966913de9
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1322423
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: in-kernel kickoff profiling  (David Nieto, 2017-03-07)
    Add a debugfs interface to profile the kickoff ioctl. It provides the probability distribution and separates the information between time spent in the full ioctl, the kickoff function, job tracking, and pushbuffer copies.
    JIRA: EVLR-1003
    Change-Id: I9888b114c3fbced61b1cf134c79f7a8afce15f56
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Reviewed-on: http://git-master/r/1308997
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: add worker for watchdog and job cleanup  (Konsta Holtta, 2017-03-02)
    Implement a worker thread to replace the delayed works in channel watchdog and job cleanups. The watchdog runs by polling the channel states periodically, and job cleanup is performed on channels that are appended to a work queue consumed by the worker thread. Handling both of these in the same thread makes it impossible for them to cause a deadlock, as has previously happened.
    The watchdog takes references to channels during checking and possibly recovering channels. Jobs in the cleanup queue have an additional reference taken which is released after the channel is processed.
    The worker is woken up from periodic sleep when channels are added to the queue. Currently, the queue is only used for job cleanups, but it is extendable for other per-channel works too. The worker can also process other periodic actions dependent on channels.
    Neither the semantics of timeout handling nor of job cleanups are yet significantly changed - this patch only serializes them into one background thread.
    Each job that needs cleanup is tracked and holds a reference to its channel and a power reference, and timeouts can only be processed on channels that are tracked, so the thread will always be idle if the system is going to be suspended; there is currently no need to explicitly suspend or stop it.
    Bug 1848834
    Bug 1851689
    Bug 1814773
    Bug 200270332
    Jira NVGPU-21
    Change-Id: I355101802f50841ea9bd8042a017f91c931d2dc7
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1297183
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

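A pseudocode-level sketch of the worker loop described above; every function and field name here is illustrative, not taken from the driver:

    static int channel_worker_sketch(void *arg)
    {
            struct gk20a *g = arg;

            while (!worker_should_stop(g)) {
                    /* sleep until a channel is queued for job cleanup, or
                     * until the watchdog poll interval expires */
                    worker_wait_for_work_or_timeout(g);

                    /* clean up finished jobs on queued channels; each
                     * queued item holds a channel ref and a power ref */
                    worker_process_job_cleanups(g);

                    /* poll every tracked channel's watchdog state and
                     * trigger recovery where no progress was made */
                    worker_poll_watchdogs(g);
            }
            return 0;
    }
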
* gpu: nvgpu: use common nvgpu mutex/spinlock APIs  (Deepak Nibade, 2017-02-22)
    Instead of using Linux APIs for mutexes and spinlocks directly, use the new APIs defined in <nvgpu/lock.h>.
    Replace Linux specific mutex/spinlock declaration, init, lock, unlock APIs with the new APIs, e.g. struct mutex is replaced by struct nvgpu_mutex and mutex_lock() is replaced by nvgpu_mutex_acquire(). Also include <nvgpu/lock.h> instead of including <linux/mutex.h> and <linux/spinlock.h>.
    Add explicit nvgpu/lock.h includes to below files to fix compilation failures:
    - gk20a/platform_gk20a.h
    - include/nvgpu/allocator.h
    Jira NVGPU-13
    Change-Id: I81a05d21ecdbd90c2076a9f0aefd0e40b215bd33
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1293187
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

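Illustrative before/after for a channel lock; struct nvgpu_mutex and nvgpu_mutex_acquire() are named by the commit, nvgpu_mutex_init()/nvgpu_mutex_release() are assumed counterparts and the field name is made up:

    /* before (Linux specific, needs <linux/mutex.h>) */
    struct mutex ioctl_lock;
    mutex_init(&ch->ioctl_lock);
    mutex_lock(&ch->ioctl_lock);
    mutex_unlock(&ch->ioctl_lock);

    /* after (<nvgpu/lock.h> wrappers) */
    struct nvgpu_mutex ioctl_lock;
    nvgpu_mutex_init(&ch->ioctl_lock);
    nvgpu_mutex_acquire(&ch->ioctl_lock);
    nvgpu_mutex_release(&ch->ioctl_lock);
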
* gpu: nvgpu: Move from gk20a_ to nvgpu_ in semaphore code  (Alex Waterman, 2017-02-13)
    Change the prefix in the semaphore code to 'nvgpu_' since this code is global to all chips.
    Bug 1799159
    Change-Id: Ic1f3e13428882019e5d1f547acfe95271cc10da5
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1284628
    Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
    Tested-by: Varun Colbert <vcolbert@nvidia.com>

* gpu: nvgpu: Base channel watchdog on gp_get  (Terje Bergstrom, 2017-01-30)
    Instead of checking if a job is complete, only check that the channel is making progress by checking that its gp_get is advancing. This makes the watchdog conservative. Previously a whole job had x seconds to complete. Now the channel has x seconds to get the host to consume each push buffer segment.
    Bug 1861838
    Bug 200273419
    Bug 200263100
    Change-Id: I70adc1f50301bce8db7dac675771c251c0f11b70
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1294850
    Reviewed-by: Automatic_Commit_Validation_User