path: root/drivers/gpu/nvgpu/gk20a/channel_gk20a.h
Commit message (Author, Date)
* gpu: nvgpu: use nvgpu list for dynamic joblist (Deepak Nibade, 2017-04-12)

  Use nvgpu list APIs instead of linux list APIs for dynamic joblist.

  Jira NVGPU-13
  Change-Id: I53779037589b1b6260d877d3bc9bd611ea9831ba
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1460576
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
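  A minimal before/after sketch of this conversion, assuming the
  <nvgpu/list.h> names used throughout this series; the job struct and
  field names are illustrative, not the actual nvgpu symbols:

      #include <nvgpu/list.h>

      struct channel_gk20a_job {
              struct nvgpu_list_node list;  /* was: struct list_head */
      };

      static void joblist_add(struct nvgpu_list_node *jobs,
                              struct channel_gk20a_job *job)
      {
              /* was: list_add_tail(&job->list, jobs) */
              nvgpu_list_add_tail(&job->list, jobs);
      }

  Unlike the Linux macros, nvgpu_list_for_each_entry() takes the entry
  type and member explicitly, so each list user also defines a
  node-to-struct helper (see the channel_gk20a_from_free_chs() entry
  further down).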
* gpu: nvgpu: use nvgpu list for channel worker item (Deepak Nibade, 2017-04-12)

  Use nvgpu list APIs instead of linux list APIs to store channel
  worker items.

  Jira NVGPU-13
  Change-Id: I01d214810ca2495bd0a644dd1a2816ab8e526981
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1460575
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use nvgpu list for event id list (Deepak Nibade, 2017-04-12)

  Use nvgpu list APIs instead of linux list APIs to store event IDs in
  channels and TSGs.

  Jira NVGPU-13
  Change-Id: I51e4b6ab3b38c845a870901b4d498927ca404a78
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1460574
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use nvgpu list for channel and debug session lists (Deepak Nibade, 2017-04-10)

  Use nvgpu list APIs instead of linux list APIs to store the channel
  list in a debug session and the debug session list in a channel.

  Jira NVGPU-13
  Change-Id: Iaf89524955a155adcb8a24505df6613bd9c4ccfb
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1454690
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
* gpu: nvgpu: rename mem_desc to nvgpu_mem (Alex Waterman, 2017-04-06)

  Renaming was done with the following command:

      $ find -type f | \
          xargs sed -i 's/struct mem_desc/struct nvgpu_mem/g'

  Also rename mem_desc.[ch] to nvgpu_mem.[ch].

  JIRA NVGPU-12
  Change-Id: I69395758c22a56aa01e3dffbcded70a729bf559a
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1325547
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Move TSG IOCTL code to Linux module (Terje Bergstrom, 2017-04-04)

  Move TSG IOCTL specific code to the Linux module. This clears most
  Linux dependencies from tsg_gk20a.c. Also move the remaining
  file_operations declarations from channel_gk20a.h to ioctl_channel.h.

  JIRA NVGPU-32
  Change-Id: Idcc2a525ebe12b30db46c3893a2735509c41ff39
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1330805
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Move channel IOCTL code to Linux module (Terje Bergstrom, 2017-04-04)

  Move channel IOCTL specific code to the Linux module. This clears
  some Linux dependencies from channel_gk20a.c.

  JIRA NVGPU-32
  Change-Id: I41817d612b959709365bcabff9c8a15f2bfe4c60
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1330804
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use nvgpu list to store ch in TSG (Deepak Nibade, 2017-04-03)

  Use nvgpu list APIs instead of linux list APIs to store channel
  entries in a TSG.

  Jira NVGPU-13
  Change-Id: I2f64fffc5c43487e1c9e6ccef59c60f079c09da4
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1454014
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: use new List APIs to free channels (Deepak Nibade, 2017-03-31)

  Use new APIs from <nvgpu/list.h> to access the free channel list.
  Define channel_gk20a_from_free_chs() to convert a list node to a
  struct channel_gk20a.

  Jira NVGPU-13
  Change-Id: Idaf58f04be1c7fc553bea7c8de45951bf82bb340
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1303025
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
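  The converter named here follows the usual container_of pattern; a
  sketch of what such a helper looks like (reconstructed from the
  message, not copied from the tree; free_chs is the list field the
  message names):

      static inline struct channel_gk20a *
      channel_gk20a_from_free_chs(struct nvgpu_list_node *node)
      {
              /* Walk back from the embedded node to its container. */
              return (struct channel_gk20a *)
                      ((uintptr_t)node -
                       offsetof(struct channel_gk20a, free_chs));
      }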
* gpu: nvgpu: Move programming of host registers to fifo (Terje Bergstrom, 2017-03-28)

  Move code that touches host registers and the instance block to the
  fifo HAL. This involves adding HAL ops for the fifo HAL functions
  that get called from outside fifo. This leaves channel responsible
  only for managing channels in software and push buffers.

  channel had a member ramfc defined, but it was never used, so remove
  it. pbdma_acquire_val mixed channel logic with hardware programming;
  the channel logic was moved to the caller and only the hardware
  programming was moved to the fifo HAL.

  Change-Id: Id005787f6cc91276b767e8e86325caf966913de9
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1322423
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: in-kernel kickoff profiling (David Nieto, 2017-03-07)

  Add a debugfs interface to profile the kickoff ioctl. It provides the
  probability distribution and breaks the time down between: the full
  ioctl, the kickoff function, job tracking, and pushbuffer copies.

  JIRA: EVLR-1003
  Change-Id: I9888b114c3fbced61b1cf134c79f7a8afce15f56
  Signed-off-by: David Nieto <dmartineznie@nvidia.com>
  Reviewed-on: http://git-master/r/1308997
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add worker for watchdog and job cleanup (Konsta Holtta, 2017-03-02)

  Implement a worker thread to replace the delayed works in the channel
  watchdog and in job cleanup. The watchdog runs by polling the channel
  states periodically, and job cleanup is performed on channels that
  are appended to a work queue consumed by the worker thread. Handling
  both in the same thread makes it impossible for them to deadlock each
  other, as has previously happened.

  The watchdog takes references to channels while checking and possibly
  recovering them. Jobs in the cleanup queue have an additional
  reference taken, which is released after the channel is processed.
  The worker is woken up from periodic sleep when channels are added to
  the queue.

  Currently, the queue is only used for job cleanups, but it is
  extendable to other per-channel works too. The worker can also
  process other periodic actions dependent on channels.

  Neither the semantics of timeout handling nor of job cleanup are yet
  significantly changed - this patch only serializes them into one
  background thread. Each job that needs cleanup is tracked and holds a
  reference to its channel and a power reference, and timeouts can only
  be processed on channels that are tracked, so the thread is always
  idle if the system is about to be suspended; there is currently no
  need to explicitly suspend or stop it.

  Bug 1848834
  Bug 1851689
  Bug 1814773
  Bug 200270332
  Jira NVGPU-21
  Change-Id: I355101802f50841ea9bd8042a017f91c931d2dc7
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/1297183
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
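  A condensed sketch of the loop described above: one kernel thread
  consumes the cleanup queue and doubles as the periodic watchdog
  poller. All names here are illustrative, not the actual nvgpu
  symbols:

      #include <linux/kthread.h>
      #include <linux/wait.h>

      static int channel_worker_thread(void *arg)
      {
              struct gk20a *g = arg;

              while (!kthread_should_stop()) {
                      /* Wake early when cleanup work is queued;
                       * otherwise time out and poll the watchdogs. */
                      wait_event_timeout(g->worker_wq,
                                         worker_has_pending_items(g),
                                         msecs_to_jiffies(100));

                      process_job_cleanups(g);   /* drops per-job refs */
                      poll_channel_watchdogs(g); /* referenced chs only */
              }
              return 0;
      }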
* gpu: nvgpu: use common nvgpu mutex/spinlock APIs (Deepak Nibade, 2017-02-22)

  Instead of using Linux APIs for mutexes and spinlocks directly, use
  the new APIs defined in <nvgpu/lock.h>.

  Replace the Linux-specific mutex/spinlock declaration, init, lock and
  unlock APIs with the new APIs, e.g. struct mutex is replaced by
  struct nvgpu_mutex and mutex_lock() is replaced by
  nvgpu_mutex_acquire(). Also include <nvgpu/lock.h> instead of
  <linux/mutex.h> and <linux/spinlock.h>.

  Add explicit nvgpu/lock.h includes to the files below to fix
  compilation failures:
  gk20a/platform_gk20a.h
  include/nvgpu/allocator.h

  Jira NVGPU-13
  Change-Id: I81a05d21ecdbd90c2076a9f0aefd0e40b215bd33
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1293187
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
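  The mapping is mechanical; a sketch of the new usage with the API
  names this commit introduces (the function is illustrative):

      #include <nvgpu/lock.h>

      static void locked_update(struct nvgpu_mutex *lock)
      {
              nvgpu_mutex_acquire(lock);  /* was: mutex_lock() */
              /* ... critical section ... */
              nvgpu_mutex_release(lock);  /* was: mutex_unlock() */
      }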
* gpu: nvgpu: Move from gk20a_ to nvgpu_ in semaphore code (Alex Waterman, 2017-02-13)

  Change the prefix in the semaphore code to 'nvgpu_' since this code
  is global to all chips.

  Bug 1799159
  Change-Id: Ic1f3e13428882019e5d1f547acfe95271cc10da5
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1284628
  Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
  Tested-by: Varun Colbert <vcolbert@nvidia.com>
* gpu: nvgpu: Base channel watchdog on gp_get (Terje Bergstrom, 2017-01-30)

  Instead of checking if a job is complete, only check that the channel
  is making progress by checking that its gp_get is advancing. This
  makes the watchdog more conservative: previously a whole job had x
  seconds to complete; now the channel has x seconds to get the host to
  consume each push buffer segment.

  Bug 1861838
  Bug 200273419
  Bug 200263100
  Change-Id: I70adc1f50301bce8db7dac675771c251c0f11b70
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1294850
  Reviewed-by: Automatic_Commit_Validation_User
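  In outline, the new expiry check looks like this (field and helper
  names are illustrative, not the actual nvgpu code):

      static void watchdog_expired(struct channel_gk20a *ch)
      {
              u32 gp_get = current_gp_get(ch);  /* host read pointer */

              if (gp_get != ch->timeout_gp_get) {
                      /* Host consumed at least one pushbuffer segment:
                       * progress was made, so rearm for another x s. */
                      ch->timeout_gp_get = gp_get;
                      rearm_watchdog(ch);
                      return;
              }
              trigger_recovery(ch);  /* genuinely stuck */
      }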
* gpu: nvgpu: add support for refcount tracking (Konsta Holtta, 2017-01-11)

  If enabled, track actions (gets and puts) on channel reference
  counters, and dump the most recent actions to the syslog when
  gk20a_wait_until_counter_is_N gets stuck when closing a channel.
  GK20A_CHANNEL_REFCOUNT_TRACKING specifies the size of the action
  history. The default is to disable this completely, as it has some
  runtime overhead.

  Bug 1826754
  Change-Id: I880b0efe8881044d02ae224c243a51cb6c2db8c1
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/1262424
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Move allocators to common/mm/ (Alex Waterman, 2017-01-09)

  Move the GPU allocators to common/mm/ since the allocators are common
  code across all GPUs. Also rename the allocator code to move away
  from gk20a_-prefixed structs and functions.

  This caused one issue with the nvgpu_alloc() and nvgpu_free()
  functions: there was already a function for allocating either with
  kmalloc() or vmalloc() depending on the size of the allocation. Those
  have now been renamed to nvgpu_kalloc() and nvgpu_kfree().

  Bug 1799159
  Change-Id: Iddda92c013612bcb209847084ec85b8953002fa5
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1274400
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
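  The renamed helpers reduce, in outline, to the classic
  kmalloc-or-vmalloc split; a sketch under that assumption, not the
  verbatim nvgpu implementation:

      #include <linux/mm.h>
      #include <linux/slab.h>
      #include <linux/vmalloc.h>

      static void *kalloc_sketch(size_t size)
      {
              if (size > PAGE_SIZE)  /* large: avoid contiguous pages */
                      return vmalloc(size);
              return kmalloc(size, GFP_KERNEL);
      }

      static void kfree_sketch(void *p)
      {
              if (is_vmalloc_addr(p))
                      vfree(p);
              else
                      kfree(p);
      }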
* gpu: nvgpu: acquire mutex for notifier read (Deepak Nibade, 2016-12-27)

  We use &ch->error_notifier_mutex to protect writes and frees of the
  error notifier, but we currently do not protect reading the notifier
  in gk20a_fifo_set_ctx_mmu_error() and vgpu_fifo_set_ctx_mmu_error().

  Add a new API gk20a_set_error_notifier_locked(), which is the same as
  gk20a_set_error_notifier() but without the locking. In
  *_fifo_set_ctx_mmu_error(), acquire the mutex explicitly and then use
  this new API. gk20a_set_error_notifier() now just calls
  gk20a_set_error_notifier_locked() within the mutex.

  Bug 1824788
  Bug 1844312
  Change-Id: I1f3831dc63fe1daa761b2e17e4de3c155f505d6f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1273471
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
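  This is the usual _locked split; a sketch assuming the names in the
  message (the notifier write itself is elided):

      void gk20a_set_error_notifier_locked(struct channel_gk20a *ch,
                                           u32 error)
      {
              /* Caller holds ch->error_notifier_mutex. */
              /* ... write the notifier ... */
      }

      void gk20a_set_error_notifier(struct channel_gk20a *ch, u32 error)
      {
              mutex_lock(&ch->error_notifier_mutex);
              gk20a_set_error_notifier_locked(ch, error);
              mutex_unlock(&ch->error_notifier_mutex);
      }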
* gpu: nvgpu: copy data into channel context header (seshendra Gadagottu, 2016-12-20)

  If the channel context has a separate context header, copy the
  required info into that header instead of into the main context
  header.

  JIRA GV11B-21
  Change-Id: I5e0bdde132fb83956fd6ac473148ad4de498e830
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: http://git-master/r/1229243
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Allow channel free to be forced (Alex Waterman, 2016-12-19)

  Allow forced channel freeing. This is useful when the driver is being
  cleaned up and gk20a_wait_until_counter_is_N() could potentially
  hang.

  Bug 1816516
  Bug 1807277
  Change-Id: I711f5f3f6413d0bb30b4857e785ca3b504b494ee
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1250022
  (cherry picked from commit e132d0e5ae77d758680ac708622a4883bbd69ba3)
  Reviewed-on: http://git-master/r/1261918
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Use struct to hold gk20a pointer (Alex Waterman, 2016-12-19)

  The private_data field in the file pointer passed to release() for
  channels originally pointed directly to the referenced channel. The
  problem with this is that when the driver is killed and the channel
  memory is freed, that pointer becomes invalid. The channel was only
  needed to get at the gk20a struct that owns it.

  This can instead be accomplished by making a new private-data struct
  that holds a pointer to the gk20a struct directly, without requiring
  the channel to be valid. This lets the release() function work even
  if the channels are gone (though in such cases the release function
  doesn't do very much).

  Change-Id: I5e50bb5b6dd08d38974f8e7b46ba125e9a3f1922
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1246586
  (cherry picked from commit 14b7c380c74d2caeb04c47ad3e33332a423a84bb)
  Reviewed-on: http://git-master/r/1261913
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
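  A sketch of the indirection this describes (struct and field names
  assumed, not taken from the tree):

      #include <linux/fs.h>

      struct channel_priv {
              struct gk20a *g;          /* valid while the fd is open */
              struct channel_gk20a *c;  /* may already be torn down */
      };

      static int channel_release_sketch(struct inode *inode,
                                        struct file *filp)
      {
              struct channel_priv *priv = filp->private_data;
              struct gk20a *g = priv->g; /* usable without the channel */

              /* ... clean up against g; touch priv->c only if valid */
              return 0;
      }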
* gpu: nvgpu: Fix signed comparison bugs (Terje Bergstrom, 2016-11-17)

  Fix small problems related to signed versus unsigned comparisons
  throughout the driver. Bump up the warning level to prevent such
  problems from occurring in the future.

  Change-Id: I8ff5efb419f664e8a2aedadd6515ae4d18502ae0
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1252068
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove IOCTL FREE_OBJ_CTX (Terje Bergstrom, 2016-11-11)

  We have never used the IOCTL FREE_OBJ_CTX. Using it leaves a context
  only partially available and can lead to a use-after-free.

  Bug 1834225
  Change-Id: I9d2b632ab79760f8186d02e0f35861b3a6aae649
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1250004
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Do not post events to unbound channels (Terje Bergstrom, 2016-11-07)

  Change-Id: Ia1157198aad248e12e94823eb9f273497c724b2c
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1248366
  Tested-by: Sachit Kadle <skadle@nvidia.com>
  Reviewed-by: David Martinez Nieto <dmartineznie@nvidia.com>
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: make deferred clean-up conditional (Sachit Kadle, 2016-10-21)

  This change makes the invocation of the deferred job clean-up
  mechanism conditional. For submissions that require job tracking,
  deferred clean-up is only required if any of the following conditions
  are met:

  1) The channel's deterministic flag is not set
  2) Rail-gating is enabled
  3) Channel WDT is enabled
  4) Buffer refcounting is enabled
  5) Dependency on the Sync Framework

  In case deferred clean-up is not needed, we clean up a single job
  tracking resource in the submit path. For deterministic channels, we
  do not allow deferred clean-up to occur and fail any submits that
  require it.

  Bug 1795076
  Change-Id: I4021dffe8a71aa58f12db6b58518d3f4021f3313
  Signed-off-by: Sachit Kadle <skadle@nvidia.com>
  Reviewed-on: http://git-master/r/1220920
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
  (cherry picked from commit b09f7589d5ad3c496e7350f1ed583a4fe2db574a)
  Reviewed-on: http://git-master/r/1223941
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
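  The decision reduces to one predicate; a sketch restating the five
  conditions above (helper and field names are illustrative):

      static bool need_deferred_cleanup(struct channel_gk20a *c,
                                        bool uses_sync_framework)
      {
              /* Any one of these forces the worker-based clean-up. */
              return !c->deterministic ||
                     railgating_enabled(c->g) ||
                     c->wdt_enabled ||
                     buffer_refcounting_enabled(c->g) ||
                     uses_sync_framework;
      }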
* gpu: nvgpu: add support for pre-allocated resources (Sachit Kadle, 2016-10-20)

  Add support for pre-allocation of job tracking resources with a new
  (extended) ioctl. The goal is to avoid dynamic memory allocation in
  the submit path. This patch does the following:

  1) Introduces a new ioctl, NVGPU_IOCTL_CHANNEL_ALLOC_GPFIFO_EX, which
     enables pre-allocation of tracking resources per job:
     a) 2x priv_cmd_entry
     b) 2x gk20a_fence
  2) Implements a circular ring buffer for job tracking to avoid lock
     contention between the producer (submitter) and consumer
     (clean-up)

  Bug 1795076
  Change-Id: I6b52e5c575871107ff380f9a5790f440a6969347
  Signed-off-by: Sachit Kadle <skadle@nvidia.com>
  Reviewed-on: http://git-master/r/1203300
  (cherry picked from commit 9fd270c22b860935dffe244753dabd87454bef39)
  Reviewed-on: http://git-master/r/1223934
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
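  A sketch of the ring described in 2): the submitter owns put, the
  clean-up path owns get, so neither contends on the other's index
  (names and the power-of-two sizing are illustrative):

      struct joblist_ring {
              struct channel_gk20a_job *jobs; /* from ALLOC_GPFIFO_EX */
              unsigned int length;            /* power of two */
              unsigned int put;               /* submit path only */
              unsigned int get;               /* clean-up path only */
      };

      static struct channel_gk20a_job *
      ring_acquire(struct joblist_ring *r)
      {
              /* Free-running counters: full when put is a whole
               * ring length ahead of get. */
              if (r->put - r->get == r->length)
                      return NULL;  /* no free tracking slot */
              return &r->jobs[r->put++ & (r->length - 1)];
      }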
* gpu: nvgpu: use inplace allocation in sync framework (Sachit Kadle, 2016-10-20)

  This change is the first of a series of changes to support the usage
  of pre-allocated job tracking resources in the submit path. With this
  change, we still maintain a dynamically-allocated joblist, but make
  the necessary changes in the channel_sync & fence framework to use
  in-place allocations. Specifically, we:

  1) Update channel sync framework routines to take in pre-allocated
     priv_cmd_entry(s) & gk20a_fence(s) rather than dynamically
     allocating them
  2) Move allocation of priv_cmd_entry(s) & gk20a_fence(s) to
     gk20a_submit_prepare_syncs
  3) Modify the fence framework to have separate allocation and init
     APIs. We expose allocation as a separate API, so the client can
     allocate the object before passing it into the channel sync
     framework.
  4) Fix clean-up logic in the channel sync framework

  Bug 1795076
  Change-Id: I96db457683cd207fd029c31c45f548f98055e844
  Signed-off-by: Sachit Kadle <skadle@nvidia.com>
  Reviewed-on: http://git-master/r/1206725
  (cherry picked from commit 9d196fd10db6c2f934c2a53b1fc0500eb4626624)
  Reviewed-on: http://git-master/r/1223933
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: fix use-after-free in case of error notifier (Deepak Nibade, 2016-10-10)

  A use-after-free scenario is possible where one thread in
  gk20a_free_error_notifiers() is trying to free the error notifier
  while another thread in gk20a_set_error_notifier() is still using it.

  Fix this by introducing a mutex, error_notifier_mutex, for error
  notifier accesses. Take the mutex in gk20a_free_error_notifiers() and
  in gk20a_set_error_notifier() before accessing the notifier. In
  gk20a_init_error_notifier(), set the pointer ch->error_notifier_ref
  inside the mutex, and only after the notifier is completely
  initialized.

  Bug 1824788
  Change-Id: I47e1ab57d54f391799f5a0999840b663fd34585f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1233988
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: remove last_submit tracking (Sachit Kadle, 2016-09-20)

  We previously used to wait on the last_submit fence before disabling
  a channel. Since this part of the code is no longer exercised, we can
  remove this tracking.

  Bug 1795076
  Change-Id: I54ba2ebaf48772aa775654c0fb4ab614a7167969
  Signed-off-by: Sachit Kadle <skadle@nvidia.com>
  Reviewed-on: http://git-master/r/1206585
  Reviewed-by: Automatic_Commit_Validation_User
  (cherry picked from commit e4e236f2b487b8cfa31f7afd29fad3c97de5f844)
  Reviewed-on: http://git-master/r/1209166
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: move gpfifo submit wait to userspace (Aingara Paramakuru, 2016-09-15)

  Instead of blocking for gpfifo space in the nvgpu driver, return
  -EAGAIN and allow userspace to decide the blocking policy.

  Bug 1795076
  Change-Id: Ie091caa92aad3f68bc01a3456ad948e76883bc50
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/1202591
  (cherry picked from commit 8056f422c6a34a4239fc4993c40c2e517c932714)
  Reviewed-on: http://git-master/r/1203800
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
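  Seen from userspace, the policy becomes a loop of the caller's
  choosing; a sketch (the ioctl and args-struct names follow the nvgpu
  uapi headers, but verify them against your tree):

      #include <errno.h>
      #include <sys/ioctl.h>

      static int submit_retrying(int fd,
                                 struct nvgpu_submit_gpfifo_args *args)
      {
              int err;

              do {
                      err = ioctl(fd, NVGPU_IOCTL_CHANNEL_SUBMIT_GPFIFO,
                                  args);
                      /* EAGAIN: no gpfifo space yet; spin, sleep, or
                       * bail - userspace decides. */
              } while (err < 0 && errno == EAGAIN);

              return err;
      }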
* gpu: nvgpu: use spinlock for ch timeout lock (Aingara Paramakuru, 2016-09-13)

  The channel timeout lock guards a very small critical section. Use a
  spinlock instead of a mutex for performance.

  Bug 1795076
  Change-Id: I94940f3fbe84ed539bcf1bc76ca6ae7a0ef2fe13
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/1200803
  (cherry picked from commit 4fa9e973da141067be145d9eba2ea74e96869dcd)
  Reviewed-on: http://git-master/r/1203799
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: gk20a: Use spin_lock for jobs_lock (Bharat Nihalani, 2016-08-31)

  This is done to boost performance of the GPU submit time, which is
  critical for compute use-cases.

  Bug 200215465
  Bug 1804898

  Conflicts:
	drivers/gpu/nvgpu/gk20a/channel_gk20a.c

  Change-Id: Ic4884ee4eac910b92b84a47fdc1b2e9f26b2f1f0
  Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com>
  Reviewed-on: http://git-master/r/1199860
  Reviewed-on: http://git-master/r/1209834
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: fix ctxsw timeout handling for TSGs (Thomas Fleury, 2016-08-29)

  While collecting failing engine data, the id type (is_tsg) was not
  set for ctxsw and save engine states. This could result in some ctxsw
  timeout interrupts being ignored (the id was reported with the wrong
  is_tsg). For TSGs, check if we made some progress on any of the
  channels before kicking off fifo recovery.

  Bug 200228310
  Jira EVLR-597
  Change-Id: I231549ae68317919532de0f87effb78ee9c119c6
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1204035
  (cherry picked from commit 7221d256fd7e9b418f7789b3d81eede8faa16f0b)
  Reviewed-on: http://git-master/r/1204037
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add nvgpu infra to allow kernel to create privileged CE channels (Lakshmanan M, 2016-07-20)

  Added an interface to allow the kernel to create privileged CE
  channels for page migration and clearing support between sysmem and
  vidmem.

  JIRA DNVGPU-53
  Change-Id: I3e18d18403809c9e64fa45d40b6c4e3844992506
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: http://git-master/r/1173085
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: process granularity for FECS traces (Thomas Fleury, 2016-07-19)

  When processing FECS traces, a hash table is used to retrieve the
  'pid' of the process that created the channel/TSG. Report the process
  identifier (aka tgid in the kernel) instead of the thread identifier
  (aka pid) for FECS traces.

  Bug 1736423
  Change-Id: I54cb9d298b9fe3e1cccdd7145604cd01c5758c9d
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1166501
  (cherry picked from commit f7fd1f6d7ad0753b787ec20604a08a1f4882fe6f)
  Reviewed-on: http://git-master/r/1168728
  (cherry picked from commit 97a62e5b89352fce576f1bca71b38bf2242ff047)
  Reviewed-on: http://git-master/r/1177823
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Revamp semaphore support (Alex Waterman, 2016-06-28)

  Revamp the support the nvgpu driver has for semaphores.

  The original problem with nvgpu's semaphore support is that it
  required a SW-based wait for every semaphore release. This was
  because for every fence that gk20a_channel_semaphore_wait_fd() waited
  on, a new semaphore was created. This semaphore would then get
  released by SW when the fence signaled. This meant that for every
  release there was necessarily a sync_fence_wait_async() call which
  could block. The latency of this SW wait was enough to cause massive
  degradation in performance.

  To fix this a fast path was implemented. When a fence that is backed
  by a GPU semaphore is passed to gk20a_channel_semaphore_wait_fd(), a
  semaphore acquire is directly used to block the GPU. No longer is a
  sync_fence_wait_async() performed, nor is an extra semaphore created.

  To implement this fast path the semaphore memory had to be shared
  between channels. Previously, since a new semaphore was created every
  time through gk20a_channel_semaphore_wait_fd(), the address space a
  semaphore was mapped into was irrelevant. However, when using the
  fast path a semaphore may be released in one address space but
  acquired in another.

  Sharing the semaphore memory was done by making a fixed GPU mapping
  in all channels. This mapping points to the semaphore memory (the
  so-called semaphore sea). This global fixed mapping is read-only to
  make sure no semaphores can be incremented (i.e. released) by a
  malicious channel. Each channel then gets a RW mapping of its own
  semaphore. This way a channel may only acquire other channels'
  semaphores but may both acquire and release its own.

  The gk20a fence code was updated to allow introspection of GPU-backed
  fences. This allows detecting when the fast path can be taken. If the
  fast path cannot be used (for example, when a fence is sync-pt
  backed) the original slow path is still present. This gets used when
  the GPU needs to wait on an event from something which only
  understands how to use sync-pts.

  Bug 1732449
  JIRA DNVGPU-12
  Change-Id: Ic0fea74994da5819a771deac726bb0d47a33c2de
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1133792
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
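  A sketch of the wait-side fast path described above (the decision
  lives in the channel sync layer; aside from sync_fence_fdget(), every
  helper name here is illustrative):

      static int semaphore_wait_fd_sketch(struct channel_gk20a *c, int fd)
      {
              struct sync_fence *f = sync_fence_fdget(fd);

              if (!f)
                      return -EINVAL;

              if (fence_is_semaphore_backed(f)) {
                      /* Fast path: a GPU-side acquire blocks the
                       * engine; no SW wait, no extra semaphore. */
                      return emit_semaphore_acquire(c, semaphore_of(f));
              }

              /* Slow path (e.g. sync-pt backed): SW wait as before,
               * releasing a freshly created semaphore on signal. */
              return sw_wait_and_release(c, f);
      }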
* gpu: nvgpu: Add interface for privileged channel allocation (Lakshmanan M, 2016-06-23)

  Added an interface for privileged channel allocation to execute
  privileged methods (e.g. CE phys mode transfer).

  JIRA DNVGPU-53
  Change-Id: I07f9181720b14345cf5890919c2818dfcf505d86
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: http://git-master/r/1169315
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: export gk20a_free_priv_cmdbuf (Alex Waterman, 2016-06-14)

  Export gk20a_free_priv_cmdbuf() so that the channel_sync_gk20a.c code
  can call this function. This is necessary for error paths in the
  semaphore wait/incr functions.

  Bug 1732449
  JIRA DNVGPU-12
  Change-Id: Id2ea13e5553d50475ee1bbf94781e18590321fdf
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1162686
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add uapi support for non-graphics engines (Lakshmanan M, 2016-06-13)

  Extend the existing NVGPU_GPU_IOCTL_OPEN_CHANNEL interface to allow
  opening channels for other than the primary (i.e., the graphics)
  runlist. This is required to push work to dGPU engines that have
  their own runlists, such as the asynchronous copy engines and the
  multimedia engines.

  Minor change: added active_engines_list allocation and assignment for
  the fifo_vgpu back end.

  JIRA DNVGPU-25
  Change-Id: I3ed377e2c9a2b4dd72e8256463510a62c64e7a8f
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: http://git-master/r/1161541
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: fix event id polling (Deepak Nibade, 2016-06-10)

  In gk20a_event_id_poll(), we always set the mask value and return it.
  This causes poll() from the UMD to always succeed, irrespective of
  whether an event was really generated or not.

  Fix this by adding a flag, event_posted, for each event. Set this
  flag while posting the event. In gk20a_event_id_poll(), set the mask
  value only if this flag is set; if it is set, set the mask and clear
  the flag.

  Bug 200089620
  Change-Id: If14236547c611fe4bfa1410ff5b69c9fa02d43bb
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1160253
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
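  A sketch of the fixed poll handler: readiness is reported only while
  event_posted is set, and the flag is consumed on read-out (struct and
  wait-queue names assumed from the message; locking elided):

      #include <linux/poll.h>

      static unsigned int event_id_poll_sketch(struct file *filp,
                                               poll_table *wait)
      {
              unsigned int mask = 0;
              struct gk20a_event_id_data *data = filp->private_data;

              poll_wait(filp, &data->event_id_wq, wait);

              if (data->event_posted) { /* set when event is posted */
                      mask = POLLPRI | POLLIN;
                      data->event_posted = false;
              }
              return mask;
      }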
* gpu: nvgpu: fix TSG abort sequence (Deepak Nibade, 2016-06-01)

  In gk20a_fifo_abort_tsg(), we loop through the channels of a TSG and
  call gk20a_channel_abort() for each channel. This is incorrect since
  we disable and preempt each channel separately, whereas we should
  disable all channels at once and use the TSG-specific API to preempt
  the TSG.

  Fix this with the below sequence:
  - gk20a_disable_tsg() to disable all channels
  - preempt the TSG if required
  - for each channel in the TSG:
    - set the has_timedout flag
    - call gk20a_channel_abort_clean_up() to clean up channel state

  Also, separate out a common gk20a_channel_abort_clean_up() API which
  can be called from both the channel and TSG abort routines. In
  gk20a_channel_abort(), call gk20a_fifo_abort_tsg() if the channel is
  part of a TSG. Add a new argument "preempt" to gk20a_fifo_abort_tsg()
  and preempt the TSG if the flag is set.

  Bug 200205041
  Change-Id: I4eff5394d26fbb53996f2d30b35140b75450f338
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1157190
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
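  The fixed sequence, condensed into one function (the HAL op and list
  member names are best-effort guesses, not verified against the tree):

      static void abort_tsg_sketch(struct gk20a *g,
                                   struct tsg_gk20a *tsg, bool preempt)
      {
              struct channel_gk20a *ch;

              gk20a_disable_tsg(tsg);  /* all channels at once */

              if (preempt)
                      g->ops.fifo.preempt_tsg(g, tsg->tsgid);

              list_for_each_entry(ch, &tsg->ch_list, ch_entry) {
                      ch->has_timedout = true;
                      gk20a_channel_abort_clean_up(ch);
              }
      }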
* gpu: nvgpu: refactor gk20a_mem_{wr,rd} for vidmem (Konsta Holtta, 2016-05-13)

  To support vidmem, pass g and mem_desc to the buffer memory accessor
  functions. This allows the functions to select the memory access
  method based on the buffer aperture instead of using the cpu pointer
  directly (like until now). The selection and aperture support will be
  in another patch; this patch only refactors these accessors, but
  keeps the underlying functionality as-is.

  gk20a_mem_{rd,wr}32() work as previously; also add
  gk20a_mem_{rd,wr}() for byte-indexed accesses, gk20a_mem_{rd,wr}_n()
  for memcpy()-like functionality, and gk20a_memset() for filling
  buffers with a constant. The 8 and 16 bit accessor functions are
  removed.

  vmap()/vunmap() pairs are abstracted to gk20a_mem_{begin,end}() to
  support other types of mappings or conditions where mapping the
  buffer is unnecessary or different.

  Several function arguments that would access these buffers are also
  changed to take a mem_desc instead of a plain cpu pointer. Some
  relevant occasions are changed to use the accessor functions instead
  of cpu pointers without them (e.g., memcpying to and from), but the
  majority of direct accesses will be adjusted later, when the buffers
  are moved to support vidmem.

  JIRA DNVGPU-23
  Change-Id: I3dd22e14290c4ab742d42e2dd327ebeb5cd3f25a
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/1121143
  Reviewed-by: Ken Adams <kadams@nvidia.com>
  Tested-by: Ken Adams <kadams@nvidia.com>
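  The accessor surface after this refactor, reconstructed from the
  message (signatures are best-effort, not copied from the header):

      /* 32-bit word-indexed, as before */
      u32  gk20a_mem_rd32(struct gk20a *g, struct mem_desc *mem, u32 w);
      void gk20a_mem_wr32(struct gk20a *g, struct mem_desc *mem, u32 w,
                          u32 data);

      /* byte-indexed */
      u32  gk20a_mem_rd(struct gk20a *g, struct mem_desc *mem,
                        u32 offset);
      void gk20a_mem_wr(struct gk20a *g, struct mem_desc *mem,
                        u32 offset, u32 data);

      /* memcpy()- and memset()-like */
      void gk20a_mem_rd_n(struct gk20a *g, struct mem_desc *mem,
                          u32 offset, void *dest, u32 size);
      void gk20a_mem_wr_n(struct gk20a *g, struct mem_desc *mem,
                          u32 offset, void *src, u32 size);
      void gk20a_memset(struct gk20a *g, struct mem_desc *mem,
                        u32 offset, u32 value, u32 size);

      /* brackets accesses; replaces open-coded vmap()/vunmap() */
      int  gk20a_mem_begin(struct gk20a *g, struct mem_desc *mem);
      void gk20a_mem_end(struct gk20a *g, struct mem_desc *mem);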
* gpu: nvgpu: Add trace and debugfs for sched params (Thomas Fleury, 2016-05-05)

  JIRA EVLR-244
  JIRA EVLR-318
  Change-Id: Ie95f42212dadcf2d0c1737eeb28812afb03b712f
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1120603
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Ken Adams <kadams@nvidia.com>
* gpu: nvgpu: Allocate channel table with vmalloc (Terje Bergstrom, 2016-04-28)

  The channel table can be bigger than one page, so allocate it with
  vmalloc. Also add a free for the tsg table, which did not exist
  before, and remove the per-channel remove_channel callback, which was
  never used.

  JIRA DNVGPU-50
  Change-Id: I3ee84b65d94881df52bf0618bf4c5f2e85758223
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1129244
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Ken Adams <kadams@nvidia.com>
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: remove submit lock (Deepak Nibade, 2016-04-19)

  Remove the submit lock, since we have moved to more fine-grained
  locks.

  Remove the API check_gp_put(), since we cannot call it in the submit
  path due to latencies, and we cannot call it in
  gk20a_channel_clean_up_jobs() anymore since it would fail there
  without the lock.

  Bug 200187553
  Change-Id: I05b9fa95c9009000e13232d8fa567336eeee11c6
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1120411
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add lock for fences (Deepak Nibade, 2016-04-19)

  All pre/post fence accesses in last_submit are currently protected by
  the submit lock. In order to remove the submit lock, move all fence
  accesses under their own lock, fence_lock.

  Bug 200187553
  Change-Id: I0132d1933dc92db8c5ed8c9311e49a030aa2d38c
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1120409
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: support for hwpm context switching (Peter Daifuku, 2016-04-07)

  Add support for hwpm context switching.

  Bug 1648200
  Change-Id: I482899bf165cd2ef24bb8617be16df01218e462f
  Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
  Reviewed-on: http://git-master/r/1120450
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: post BPT_INT/PAUSE and BLOCKING_SYNC events (Deepak Nibade, 2016-04-07)

  Post EVENT_ID_BPT_INT when bpt.int is pending. Post
  EVENT_ID_BPT_PAUSE when bpt.pause is pending. Post
  EVENT_ID_BLOCKING_SYNC whenever there is a non-stalling semaphore
  interrupt indicating work completion from the GR/CE2 engine.

  Bug 200089620
  Change-Id: I91b7bf48f8585f0d318298fc0c4a66d42055f0a7
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1112274
  (cherry picked from commit d2b744b1f9acac56435cd7e7ab9a7a845579ef24)
  Reviewed-on: http://git-master/r/1120321
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: APIs to post event id events (Deepak Nibade, 2016-04-07)

  Add the below channel and TSG APIs to post events on the event_id
  interface:
  gk20a_channel_event_id_post_event()
  gk20a_tsg_event_id_post_event()

  Bug 200089620
  Change-Id: I0cfadc9ffdb880b2410f97758fad47905c620db1
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1112267
  (cherry picked from commit 9f50d7da4500af4dbf4dabe7916eda6fc220f4fb)
  Reviewed-on: http://git-master/r/1120320
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
* gpu: nvgpu: add TSG support to channel event id (Deepak Nibade, 2016-04-07)

  Add the NVGPU_IOCTL_TSG_EVENT_ID_CTRL API for channel event id
  support in TSGs. This API accepts an event_id (like BPT.INT or
  BPT.PAUSE) and a command to enable the event, and returns a file
  descriptor on which we can raise the event (if cmd=enable).

  Events generated for TSGs reuse the file operations
  "gk20a_event_id_ops".

  Bug 200089620
  Change-Id: I2f563c6d3a0988eb670caac2d3c7c6795724792c
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1030776
  (cherry picked from commit 72b61fa266279038f013e582be80c21808e1038d)
  Reviewed-on: http://git-master/r/1120319
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>