path: root/drivers/gpu/nvgpu/gk20a/cde_gk20a.c
* gpu: nvgpu: in-kernel kickoff profiling (David Nieto, 2017-03-07)

    Add a debugfs interface to profile the kickoff ioctl. It provides the
    probability distribution and separates the information between time
    spent in the full ioctl, the kickoff function, job tracking, and
    pushbuffer copies.

    JIRA: EVLR-1003
    Change-Id: I9888b114c3fbced61b1cf134c79f7a8afce15f56
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Reviewed-on: http://git-master/r/1308997
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

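As a rough illustration of what such kickoff profiling involves, here is a
minimal, hypothetical sketch of a log2 latency histogram with a debugfs show
callback; none of these names (kickoff_profile, prof_record_sample,
prof_show) are the actual nvgpu interface.

    /* Hypothetical sketch: bucket kickoff latencies into a log2 histogram
     * and dump the distribution from a debugfs "show" callback. */
    #include <linux/debugfs.h>
    #include <linux/ktime.h>
    #include <linux/log2.h>
    #include <linux/seq_file.h>

    #define PROF_BUCKETS 16

    struct kickoff_profile {
            u64 buckets[PROF_BUCKETS];      /* histogram over elapsed us */
            u64 samples;
    };

    static void prof_record_sample(struct kickoff_profile *p, ktime_t start)
    {
            s64 us = ktime_us_delta(ktime_get(), start);
            int b = ilog2(us > 0 ? us : 1);

            p->buckets[b < PROF_BUCKETS ? b : PROF_BUCKETS - 1]++;
            p->samples++;
    }

    static int prof_show(struct seq_file *s, void *unused)
    {
            struct kickoff_profile *p = s->private;
            int i;

            for (i = 0; i < PROF_BUCKETS; i++)
                    seq_printf(s, "<= %lluus: %llu\n",
                               1ULL << i, p->buckets[i]);
            return 0;
    }
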
* gpu: nvgpu: use common nvgpu mutex/spinlock APIs (Deepak Nibade, 2017-02-22)

    Instead of using Linux APIs for mutexes and spinlocks directly, use the
    new APIs defined in <nvgpu/lock.h>.

    Replace Linux-specific mutex/spinlock declaration, init, lock, and
    unlock APIs with the new APIs, e.g. struct mutex is replaced by
    struct nvgpu_mutex and mutex_lock() is replaced by
    nvgpu_mutex_acquire(). Also include <nvgpu/lock.h> instead of
    <linux/mutex.h> and <linux/spinlock.h>.

    Add explicit nvgpu/lock.h includes to the files below to fix
    compilation failures:
    gk20a/platform_gk20a.h
    include/nvgpu/allocator.h

    Jira NVGPU-13
    Change-Id: I81a05d21ecdbd90c2076a9f0aefd0e40b215bd33
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1293187
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

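In practice the swap looks like this; the commit names struct nvgpu_mutex
and nvgpu_mutex_acquire(), while nvgpu_mutex_init() and
nvgpu_mutex_release() are assumed counterparts:

    /* Before: Linux-specific locking */
    #include <linux/mutex.h>

    static void before(struct mutex *lock)
    {
            mutex_lock(lock);
            /* ... critical section ... */
            mutex_unlock(lock);
    }

    /* After: common nvgpu locking (init/release names assumed) */
    #include <nvgpu/lock.h>

    static void after(struct nvgpu_mutex *lock)
    {
            nvgpu_mutex_acquire(lock);
            /* ... critical section ... */
            nvgpu_mutex_release(lock);
    }
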
* gpu: nvgpu: Organize semaphore_gk20a.[ch] (Alex Waterman, 2017-02-13)

    Move semaphore_gk20a.c to drivers/gpu/nvgpu/common/ since the semaphore
    code is common to all chips. Move the semaphore_gk20a.h header file to
    drivers/gpu/nvgpu/include/nvgpu and rename it to semaphore.h. Also
    update all places where the header is included to use the new path.

    This revealed an odd location for the enum gk20a_mem_rw_flag. This
    should be in the mm headers. As a result, many places that did not need
    anything semaphore related had to include the semaphore header file.
    Fixing this oddity allowed the semaphore include to be removed from
    many C files that did not need it.

    Bug 1799159
    Change-Id: Ie017219acf34c4c481747323b9f3ac33e76e064c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1284627
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Organize nvgpu_common.[ch] (Alex Waterman, 2017-02-13)

    Move nvgpu_common.c to drivers/gpu/nvgpu/common since it is a common C
    file to all drivers. Similarly move nvgpu_common.h to
    drivers/gpu/nvgpu/include/nvgpu since this follows the new include
    guidelines.

    Bug 1799159
    Change-Id: I00ebed289973b27704c2cff073526e36505bf699
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1284612
    Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
    Tested-by: Varun Colbert <vcolbert@nvidia.com>

* gpu: nvgpu: Simplify ref-counting on VMs (Alex Waterman, 2017-02-07)

    Simplify ref-counting on VMs: take a ref when a VM is bound to a
    channel and drop a ref when a channel is freed. Previously ref-counts
    were scattered over the driver. Also, the CE and CDE code would bind
    channels with custom-rolled code. This was because the
    gk20a_vm_bind_channel() function took an as_share as the VM argument
    (the VM was then inferred from that as_share). However, it is trivial
    to abstract that bit out and allow a central bind-channel function that
    just takes a VM and a channel.

    Bug 1846718
    Change-Id: I156aab259f6c7a2fa338408c6c4a3a464cd44a0c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1261886
    Reviewed-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

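A sketch of the scheme with simplified stand-in types (the demo_ names are
illustrative, not the nvgpu structures): the channel takes exactly one VM
reference at bind time and drops it exactly once at channel free.

    #include <linux/kref.h>

    struct demo_vm {
            struct kref ref;
    };

    struct demo_channel {
            struct demo_vm *vm;
    };

    static void demo_vm_release(struct kref *ref)
    {
            /* free the VM's resources here */
    }

    /* Bind: the channel holds exactly one VM reference for its lifetime. */
    static void demo_bind_channel(struct demo_vm *vm, struct demo_channel *ch)
    {
            kref_get(&vm->ref);
            ch->vm = vm;
    }

    /* Channel free: the single matching place where the ref is dropped. */
    static void demo_channel_free(struct demo_channel *ch)
    {
            if (ch->vm)
                    kref_put(&ch->vm->ref, demo_vm_release);
            ch->vm = NULL;
    }
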
* gpu: nvgpu: Use timer API in gk20a code (Alex Waterman, 2017-01-18)

    Use the timers API in the gk20a code instead of Linux-specific API
    calls. This also changes the behavior of several functions to wait for
    the full timeout for each operation that can time out. Previously the
    timeout was shared across each operation.

    Bug 1799159
    Change-Id: I2bbed54630667b2b879b56a63a853266afc1e5d8
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1273826
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

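A sketch of a poll loop against the timer API; nvgpu_timeout_init(),
nvgpu_timeout_expired() and NVGPU_TIMER_CPU_TIMER are assumed from the
nvgpu timers interface, and unit_is_idle() is a hypothetical condition.

    #include <linux/delay.h>
    #include <nvgpu/timers.h>

    static int demo_wait_for_idle(struct gk20a *g)
    {
            struct nvgpu_timeout timeout;

            /* Each operation now gets its own full 2000 ms budget. */
            nvgpu_timeout_init(g, &timeout, 2000, NVGPU_TIMER_CPU_TIMER);

            do {
                    if (unit_is_idle(g))    /* hypothetical condition */
                            return 0;
                    usleep_range(20, 40);
            } while (!nvgpu_timeout_expired(&timeout));

            return -ETIMEDOUT;
    }
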
* gpu: nvgpu: Start re-organizing the HW headers (Alex Waterman, 2017-01-11)

    Reorganize the HW headers of gk20a. The headers are moved to a new
    directory:

      include/nvgpu/hw/gk20a

    And from the code they are included like so:

      #include <nvgpu/hw/gk20a/hw_pwr_gk20a.h>

    This is the first step in reorganizing all of the HW headers for gm20b,
    gm206, etc. This is part of a larger effort to re-structure and make
    the driver more readable and scalable.

    Bug 1799159
    Change-Id: Ic151155cbc2e6f75009f2d9d597b364a1bed2c4c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1244790
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Fix signed comparison bugs (Terje Bergstrom, 2016-11-17)

    Fix small problems related to signed versus unsigned comparisons
    throughout the driver. Bump up the warning level to prevent such
    problems from occurring in the future.

    Change-Id: I8ff5efb419f664e8a2aedadd6515ae4d18502ae0
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1252068
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

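The bug class in question, distilled; with -Wsign-compare (part of a raised
warning level) the bad comparison below is flagged:

    /* The comparison i < n converts n to unsigned. If n is negative (e.g.
     * an unchecked error code), it becomes a huge positive bound and the
     * loop runs (almost) forever, scribbling over memory. */
    static void bad_fill(u32 *buf, int n)
    {
            u32 i;

            for (i = 0; i < n; i++)         /* flagged by -Wsign-compare */
                    buf[i] = 0;
    }

    /* Fix: validate the signed value, then compare matching types. */
    static void good_fill(u32 *buf, int n)
    {
            u32 i;

            if (n < 0)
                    return;
            for (i = 0; i < (u32)n; i++)
                    buf[i] = 0;
    }
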
* gpu: nvgpu: Return error code on zero IOVA (Terje Bergstrom, 2016-11-11)

    When a buffer's IOVA is zero, treat that as an error condition instead
    of ignoring it and continuing.

    Change-Id: I2ede9921945645f526b0600f61f7e5ed19af6d73
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1249963
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Seema Khowala <seemaj@nvidia.com>

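The shape of the fix, as a hedged sketch (the demo_ name is illustrative;
sg_dma_address() is the standard kernel accessor):

    #include <linux/scatterlist.h>

    static int demo_validate_iova(struct sg_table *sgt)
    {
            dma_addr_t iova = sg_dma_address(sgt->sgl);

            if (!iova)
                    return -EINVAL; /* previously ignored; now a hard error */

            return 0;
    }
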
* gpu: nvgpu: Prevent integer overflow of GPU config (Terje Bergstrom, 2016-11-11)

    In the CDE GPU configuration code, the result is computed using 32-bit
    arithmetic and returned as a 64-bit unsigned integer. Cast the
    intermediate result to u64 to prevent unintentional overflow.

    Change-Id: Iebe53e2b17c1aaa498245a52962c3dbad7ce893e
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1249962
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Seema Khowala <seemaj@nvidia.com>

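The overflow pattern, distilled: both operands below are 32-bit, so without
the cast the multiply wraps before the result is widened (the function names
and operands are illustrative, not the actual configuration formula):

    static u64 config_size_buggy(u32 width, u32 height)
    {
            return width * height;          /* 32-bit multiply: may wrap */
    }

    static u64 config_size_fixed(u32 width, u32 height)
    {
            return (u64)width * height;     /* promoted, multiplies in 64-bit */
    }
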
* gpu: nvgpu: add support for pre-allocated resources (Sachit Kadle, 2016-10-20)

    Add support for pre-allocation of job tracking resources with a new
    (extended) ioctl. The goal is to avoid dynamic memory allocation in the
    submit path. This patch does the following:

    1) Introduces a new ioctl, NVGPU_IOCTL_CHANNEL_ALLOC_GPFIFO_EX, which
       enables pre-allocation of tracking resources per job:
       a) 2x priv_cmd_entry
       b) 2x gk20a_fence
    2) Implements a circular ring buffer for job tracking, to avoid lock
       contention between producer (submitter) and consumer (clean-up);
       see the sketch below.

    Bug 1795076
    Change-Id: I6b52e5c575871107ff380f9a5790f440a6969347
    Signed-off-by: Sachit Kadle <skadle@nvidia.com>
    Reviewed-on: http://git-master/r/1203300
    (cherry picked from commit 9fd270c22b860935dffe244753dabd87454bef39)
    Reviewed-on: http://git-master/r/1223934
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

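A minimal sketch of such a single-producer/single-consumer ring
(illustrative demo_ names, not the nvgpu joblist code): with a power-of-two
size and one writer per index, the submit path and the clean-up path never
contend on a lock.

    struct demo_job;

    struct demo_job_ring {
            struct demo_job **jobs; /* pre-allocated at ALLOC_GPFIFO_EX time */
            u32 size;               /* power of two */
            u32 put;                /* advanced by the submitter only */
            u32 get;                /* advanced by the clean-up worker only */
    };

    static int demo_ring_push(struct demo_job_ring *r, struct demo_job *job)
    {
            if (r->put - READ_ONCE(r->get) == r->size)
                    return -EAGAIN;         /* full: all slots in flight */
            r->jobs[r->put & (r->size - 1)] = job;
            smp_wmb();                      /* publish entry before put */
            WRITE_ONCE(r->put, r->put + 1);
            return 0;
    }

    static struct demo_job *demo_ring_pop(struct demo_job_ring *r)
    {
            if (r->get == READ_ONCE(r->put))
                    return NULL;            /* empty */
            smp_rmb();                      /* read put before the entry */
            return r->jobs[r->get++ & (r->size - 1)];
    }
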
* gpu: nvgpu: Suppress error msg from VBIOS overlay (Terje Bergstrom, 2016-10-09)

    Suppress the error message when nvgpu tries to load a VBIOS overlay but
    one is not found. This situation is normal. This is done by moving
    gk20a_request_firmware() to be the nvgpu generic function
    nvgpu_request_firmware(), and adding a NO_WARN flag to it.

    Introduce also a NO_SOC flag to suppress the attempt to load firmware
    from the SoC-specific directory in addition to the chip-specific
    directory. Use it for dGPU firmware files.

    Bug 200236777
    Change-Id: I0294d3308f029a6a6d3c2effa579d5f69a91e418
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1223840
    (cherry picked from commit cca44c3f010f15918cdd2259c15170ba1917828a)
    Reviewed-on: http://git-master/r/1233353
    GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: unify nvgpu and pci probe (Deepak Nibade, 2016-09-08)

    We have completely different versions of probe for the nvgpu and pci
    devices. Extract the common steps into an nvgpu_probe() function and
    separate it out into the new file nvgpu_common.c. Divide the work of
    nvgpu_probe() into further smaller functions. Do platform-specific
    things (like irq handling, mem resource management, power management)
    only in the individual probes, and then call nvgpu_probe() to complete
    the common initialization.

    Move all debugfs initialization to the common gk20a_debug_init(); this
    also helps bring up all debug nodes on the pci device. Pass the
    debugfs_symlink name as a parameter to gk20a_debug_init(), which allows
    us to set separate debugfs symlinks for the nvgpu and pci devices. For
    the railgating, cde and ce debugfs nodes, check whether the platform
    supports them or not.

    Copy vidmem_is_vidmem from the platform to the mm structure, and set it
    to true for the pci device. Return from gk20a_scale_init() if we have
    neither a governor nor a qos_notifier. Fix gk20a_alloc_debugfs_init()
    and gk20a_secure_page_alloc() to receive a device pointer instead of a
    platform_device. Export gk20a_railgating_debugfs_init() so that we can
    call it from gk20a_debug_init().

    Jira DNVGPU-56
    Jira DNVGPU-58
    Change-Id: I3cc048082b0a1e57415a9fb8bfb9eec0f0a280cd
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1204207
    (cherry picked from commit add6bb0a3d5bd98131bbe6f62d4358d4d722b0fe)
    Reviewed-on: http://git-master/r/1204462
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: gk20a: Use spin_lock for jobs_lock (Bharat Nihalani, 2016-08-31)

    This is done to improve GPU submit time, which is critical for compute
    use-cases.

    Bug 200215465
    Bug 1804898

    Conflicts:
        drivers/gpu/nvgpu/gk20a/channel_gk20a.c

    Change-Id: Ic4884ee4eac910b92b84a47fdc1b2e9f26b2f1f0
    Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com>
    Reviewed-on: http://git-master/r/1199860
    Reviewed-on: http://git-master/r/1209834
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

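The essence of the change: a spinlock avoids the sleep/wake overhead of a
mutex on a hot path, at the price that the protected section must stay short
and must never sleep. A generic illustration (demo_ name is a stand-in):

    #include <linux/list.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(jobs_lock);      /* was a struct mutex */

    static void demo_track_job(struct list_head *jobs, struct list_head *entry)
    {
            spin_lock(&jobs_lock);
            list_add_tail(entry, jobs);     /* short, non-sleeping work only */
            spin_unlock(&jobs_lock);
    }
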
* gpu: nvgpu: Add nvgpu infra to allow kernel to create privileged CE channels (Lakshmanan M, 2016-07-20)

    Added an interface to allow the kernel to create privileged CE channels
    for page migration and clearing support between sysmem and vidmem.

    JIRA DNVGPU-53
    Change-Id: I3e18d18403809c9e64fa45d40b6c4e3844992506
    Signed-off-by: Lakshmanan M <lm@nvidia.com>
    Reviewed-on: http://git-master/r/1173085
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>

* gpu: nvgpu: use vidmem by default in gmmu_alloc variants (Konsta Holtta, 2016-07-08)

    For devices that have vidmem available, use the vidmem allocator in
    gk20a_gmmu_alloc{,attr,_map,_map_attr}. For others, use sysmem.

    Because not all of the buffers have been tested to work in vidmem yet,
    rename calls to gk20a_gmmu_alloc{,attr,_map,_map_attr} to have _sys at
    the end, to declare explicitly that sysmem is used. Enabling vidmem for
    each buffer is now a matter of removing "_sys" from the function call.

    Jira DNVGPU-18
    Change-Id: Ibe42f67eff2c2b68c36582e978ace419dc815dc5
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1176805
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: support in-kernel vidmem mappings (Konsta Holtta, 2016-07-06)

    Propagate the buffer aperture flag in gk20a_locked_gmmu_map up so that
    buffers represented as a mem_desc and present in vidmem can be mapped
    to the GPU.

    JIRA DNVGPU-18
    JIRA DNVGPU-76
    Change-Id: I46cf87e27229123016727339b9349d5e2c835b3e
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1169308
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Use device instead of platform_device (Terje Bergstrom, 2016-04-08)

    Use struct device instead of struct platform_device wherever possible.
    This allows adding other bus types later.

    Change-Id: I1657287a68d85a542cdbdd8a00d1902c3d6e00ed
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1120466

* gpu: nvgpu: create sync_fence only if needed (Deepak Nibade, 2015-12-08)

    Currently, we create a sync_fence (from nvhost_sync_create_fence()) for
    every submit, but not all submits request a sync_fence. Also,
    nvhost_sync_create_fence() takes about 1/3rd of the total submit path.

    Hence, to optimize, we can allocate a sync_fence only when the user
    explicitly asks for it using (NVGPU_SUBMIT_GPFIFO_FLAGS_FENCE_GET &&
    NVGPU_SUBMIT_GPFIFO_FLAGS_SYNC_FENCE).

    Also, in the CDE path from gk20a_prepare_compressible_read(), we reuse
    the existing fence stored in "state", and that can result in not
    returning the sync_fence_fd when the user asked for it. Hence, force
    allocation of the sync_fence when the job submission comes from the
    CDE path.

    Bug 200141116
    Change-Id: Ia921701bf0e2432d6b8a5e8b7d91160e7f52db1e
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/812845
    (cherry picked from commit 5fd47015eeed00352cc8473eff969a66c94fee98)
    Reviewed-on: http://git-master/r/837662
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Sachin Nikam <snikam@nvidia.com>

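The resulting decision, sketched (the flag names are from the nvgpu UAPI as
quoted above; the demo_ helper is illustrative):

    static bool demo_need_sync_fence(u32 flags, bool from_cde)
    {
            bool requested = (flags & NVGPU_SUBMIT_GPFIFO_FLAGS_FENCE_GET) &&
                             (flags & NVGPU_SUBMIT_GPFIFO_FLAGS_SYNC_FENCE);

            /* The CDE path reuses a stored fence, so force allocation
             * there to guarantee a sync_fence_fd can be returned. */
            return requested || from_cde;
    }
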
* gpu: nvgpu: remove temporary gpfifo allocation in submit path (Deepak Nibade, 2015-11-23)

    In the GPU job submit path gk20a_ioctl_channel_submit_gpfifo(), we
    currently allocate a temporary gpfifo, copy the user space gpfifo
    content into this temporary buffer, and then copy the temp buffer
    content into the channel's gpfifo. The allocation/copy/free of the
    temporary buffer adds overhead.

    Rewrite this sequence such that gk20a_submit_channel_gpfifo() can
    receive either a pre-filled gpfifo or a pointer to the user-provided
    args, and then directly copy the user-provided gpfifo into the
    channel's gpfifo.

    Also, if command buffer tracing is enabled, we still need to copy the
    user-provided gpfifo into a temporary buffer for reading, but that
    should not cause overhead in real-world use cases.

    Bug 200141116
    Change-Id: I7166c9271da2694059da9853ab8839e98457b941
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/823386
    (cherry picked from commit 3e0702db006c262dd8737a567b8e06f7ff005e2c)
    Reviewed-on: http://git-master/r/835799
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

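The core of the optimization is a single copy_from_user() straight into the
channel's gpfifo, sketched here with simplified types (demo_ names are
stand-ins; ring wrap-around handling is omitted for brevity):

    #include <linux/uaccess.h>

    struct demo_gpfifo_entry { u32 entry0, entry1; };

    static int demo_fill_gpfifo(struct demo_gpfifo_entry *dst,
                                const void __user *src, u32 num_entries)
    {
            /* One copy, no temporary kernel buffer in between. */
            if (copy_from_user(dst, src, num_entries * sizeof(*dst)))
                    return -EFAULT;
            return 0;
    }
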
* gpu: nvgpu: use a separate big vm for cde (Konsta Holtta, 2015-11-11)

    Allocate a separate VM for CDE channels instead of using the system
    (PMU) vm, and make it much bigger than the PMU's to fit the maximum
    number of CDE channels there.

    Bug 1566740
    Change-Id: I4f487c40c9ec79cc9ffb880b0ecd3f47eb450336
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/815149
    Reviewed-by: Automatic_Commit_Validation_User

* gpu: nvgpu: fix 4k compression (Jussi Rasanen, 2015-10-06)

    Add a CPU dcache flush after populating the scatterBuffer so that the
    GPU will see the buffer contents.

    Bug 1679453
    Change-Id: I564394ed1fcff4d08d753e753bd3243b460d76df
    Signed-off-by: Jussi Rasanen <jrasanen@nvidia.com>
    Reviewed-on: http://git-master/r/805197
    (cherry picked from commit d6a5513745aa77c84ac5408a62f72f24839ef439)
    Reviewed-on: http://git-master/r/808246
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

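The generic form of such a fix: after the CPU fills a DMA buffer, clean the
CPU dcache so the device reads the written data rather than stale memory.
dma_sync_single_for_device() is the standard kernel mechanism; the actual
nvgpu code may use a chip-specific equivalent, and the demo_ name is
illustrative.

    #include <linux/dma-mapping.h>

    static void demo_publish_scatter_buffer(struct device *dev,
                                            dma_addr_t iova, size_t size)
    {
            /* Flush CPU writes out of the dcache so the GPU sees them. */
            dma_sync_single_for_device(dev, iova, size, DMA_TO_DEVICE);
    }
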
* gpu: nvgpu: Add CDE bits in FECS header (sujeet baranwal, 2015-09-29)

    In the case of the CDE channel, the T1 (Tex) unit needs to be promoted
    to 128B-aligned accesses, otherwise it causes a HW deadlock. The GPU
    driver makes changes in the FECS header, which FECS uses to configure
    the T1 promotions to aligned 128B accesses.

    Bug 200096226
    Change-Id: I8a8deaf6fb91f4bbceacd491db7eb6f7bca5001b
    Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
    Reviewed-on: http://git-master/r/804625
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Add support for CDE scatter buffers (Jussi Rasanen, 2015-09-28)

    Add support for CDE scatter buffers. When the bus addresses for
    surfaces are not contiguous as seen by the GPU (e.g., when SMMU is
    bypassed), CDE swizzling needs additional per-page information. This
    information is populated in a scatter buffer when required.

    Bug 1604102
    Change-Id: I3384e2cfb5d5f628ed0f21375bdac8e36b77ae4f
    Signed-off-by: Jussi Rasanen <jrasanen@nvidia.com>
    Reviewed-on: http://git-master/r/789436
    Reviewed-on: http://git-master/r/791243
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* Revert "gpu: nvgpu: Add CDE bits in FECS header"Terje Bergstrom2015-09-24
| | | | | | | | This reverts commit 882975f7f1b4e050be79b0a047a2daa8b53a9187. Change-Id: I4940fc9f7a837840be1ea8e42d58d603235d88d5 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/804616
* gpu: nvgpu: Add CDE bits in FECS header (sujeet baranwal, 2015-09-24)

    In the case of the CDE channel, the T1 (Tex) unit needs to be promoted
    to 128B-aligned accesses, otherwise it causes a HW deadlock. The GPU
    driver makes changes in the FECS header, which FECS uses to configure
    the T1 promotions to aligned 128B accesses.

    Bug 200096226
    Change-Id: Ic006b2c7035bbeabe1081aeed968a6c6d11f9995
    Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
    Reviewed-on: http://git-master/r/802327
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Prepare for per-GPU CDE program numbers (Sami Kiminki, 2015-08-19)

    Add gpu_ops for CDE, and add a get_program_numbers function pointer for
    determining the horizontal and vertical CDE swizzler programs. This
    allows different GPUs to have their own specific requirements for
    choosing the CDE firmware programs.

    Bug 1604102
    Change-Id: Ib37c13abb017c8eb1c32adc8cbc6b5984488222e
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: http://git-master/r/784899
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: fix channel close sequence (Deepak Nibade, 2015-07-16)

    In gk20a_cde_remove_ctx(), the current sequence is as below:
    - gk20a_channel_close()
    - gk20a_deinit_cde_img()
    - gk20a_free_obj_ctx()

    But gk20a_free_obj_ctx() needs a reference to the channel, and hence
    the below crash is seen:

    [ 3901.466223] Unable to handle kernel paging request at virtual address 00001624
    ...
    [ 3901.535218] PC is at gk20a_free_obj_ctx+0x14/0xb0
    [ 3901.539910] LR is at gk20a_deinit_cde_img+0xd8/0x12c

    Fix this by closing the channel after gk20a_deinit_cde_img().

    Bug 1625901
    Change-Id: Ic2dc5af933b6d6ef8982c2b9f0caa28df204051f
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/770322
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Sachin Nikam <snikam@nvidia.com>

* gpu: nvgpu: Implement priv pages (Terje Bergstrom, 2015-07-03)

    Implement support for privileged pages. Use them for kernel-allocated
    buffers.

    Change-Id: I720fc441008077b8e2ed218a7a685b8aab2258f0
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/761919

* gpu: nvgpu: Initial MAP_BUFFER_BATCH implementation (Sami Kiminki, 2015-06-30)

    Add batch support for mapping and unmapping. Batching essentially helps
    transform some per-map/unmap overhead to per-batch overhead, namely
    gk20a_busy()/gk20a_idle() calls, GPU L2 flushes, and GPU TLB
    invalidates. Batching with size 64 has been measured to yield >20x
    speed-up in low-level fixed-address mapping microbenchmarks.

    Bug 1614735
    Bug 1623949
    Change-Id: Ie22b9caea5a7c3fc68a968d1b7f8488dfce72085
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: http://git-master/r/733231
    (cherry picked from commit de4a7cfb93e8228a4a0c6a2815755a8df4531c91)
    Reviewed-on: http://git-master/r/763812
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: add per-channel refcounting (Konsta Holtta, 2015-06-09)

    Add reference counting for channels, and wait for the reference count
    to reach 0 in gk20a_channel_free() before actually freeing the channel.
    Also, change free-channel tracking a bit by employing a list of free
    channels, which simplifies the procedure of finding available channels
    with reference counting.

    Each use of a channel must have a reference taken before use, or held
    by the caller. Taking a reference of a wild channel pointer may fail if
    the channel is either not opened or in the process of being closed.

    Also, add safeguards for protecting against accidental use of closed
    channels; specifically, set ch->g = NULL in channel free. This will
    make it obvious if a freed channel is attempted to be used.

    The last user of a channel might be the deferred interrupt handler, so
    wait for deferred interrupts to be processed twice in the channel free
    procedure: once for providing last notifications to the channel and
    once to make sure there are no stale pointers left after referencing
    the channel has been denied.

    Finally, fix some races in the channel and TSG force-reset IOCTL path
    by pausing the channel scheduler in gk20a_fifo_recover_ch() and
    gk20a_fifo_recover_tsg() while the affected engines are identified, the
    appropriate MMU faults triggered, and the MMU faults handled. In this
    case, make sure that the MMU fault does not attempt to query the
    hardware about the failing channel or TSG ids. This should make channel
    recovery safer also in the regular (i.e., not in the interrupt handler)
    context.

    Bug 1530226
    Bug 1597493
    Bug 1625901
    Bug 200076344
    Bug 200071810
    Change-Id: Ib274876908e18219c64ea41e50ca443df81d957b
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: http://git-master/r/448463
    (cherry picked from commit 3f03aeae64ef2af4829e06f5f63062e8ebd21353)
    Reviewed-on: http://git-master/r/755147
    Reviewed-by: Automatic_Commit_Validation_User

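The referencing rule above, as a simplified sketch (demo_ names, not the
nvgpu API): a get fails unless the channel is open and not closing, and the
free path waits for the count to drain.

    #include <linux/atomic.h>
    #include <linux/spinlock.h>

    struct demo_channel {
            void *g;                /* set to NULL once freed */
            atomic_t refcount;
            bool referenceable;     /* false if not open or being closed */
            spinlock_t ref_lock;
    };

    static struct demo_channel *demo_channel_get(struct demo_channel *ch)
    {
            struct demo_channel *ret = NULL;

            spin_lock(&ch->ref_lock);
            if (ch->referenceable) {        /* may fail on a wild pointer */
                    atomic_inc(&ch->refcount);
                    ret = ch;
            }
            spin_unlock(&ch->ref_lock);
            return ret;
    }

    static void demo_channel_put(struct demo_channel *ch)
    {
            atomic_dec(&ch->refcount);
            /* the freeing path waits for refcount to reach zero */
    }
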
* gpu: nvgpu: Use common allocator for compbit store (Terje Bergstrom, 2015-04-04)

    Reduce the amount of duplicate code around memory allocation by using
    common helpers, and a common data structure for storing the results of
    allocations.

    Bug 1605769
    Change-Id: I7c1662b669ed8c86465254f6001e536141051ee5
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/720435

* gpu: nvgpu: Implement common allocator and mem_desc (Terje Bergstrom, 2015-04-04)

    Introduce mem_desc, which holds all information needed for a buffer.
    Implement helper functions for allocation and freeing that use this
    data type.

    Change-Id: I82c88595d058d4fb8c5c5fbf19d13269e48e422f
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/712699

* Revert "gpu: nvgpu: cache cde compbits buf mappings"Konsta Holtta2015-04-04
| | | | | | | | | | This reverts commit 9968badd26490a9d399f526fc57a9defd161dd6c. The commit accidentally introduced some memory leaks. Change-Id: I00d8d4452a152a8a2fe2d90fb949cdfee0de4c69 Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: http://git-master/r/714288 Reviewed-by: Juha Tukkinen <jtukkinen@nvidia.com>
* gpu: nvgpu: remove support for CDE firmware v0 (Jussi Rasanen, 2015-04-04)

    - CDE firmware v0 is not used anymore, so we can remove support for it.
    - Bump the threshold for a large-surface warning to 8k.

    Bug 1566740
    Change-Id: Ia0434a04cdd453a10a8de08d259e92e6b9a3e964
    Signed-off-by: Jussi Rasanen <jrasanen@nvidia.com>
    Reviewed-on: http://git-master/r/709452
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: cache cde compbits buf mappings (Konsta Holtta, 2015-04-04)

    Don't unmap compbits_buf from the system vm explicitly and early;
    instead, store it in the dmabuf's private data and unmap it later, when
    all user mappings to that buffer have disappeared.

    Bug 1546619
    Change-Id: I333235a0ea74c48503608afac31f5e9f1eb4b99b
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/661949
    (cherry picked from commit ed2177e25d9e5facfb38786b818330798a14b9bb)
    Reviewed-on: http://git-master/r/661835
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: add a new CDE parameter (Jussi Rasanen, 2015-04-04)

    Add TYPE_PARAM_GOBS_PER_COMPTAGLINE_PER_SLICE.

    Change-Id: I7cbf7b6db6642a61629ba06f7887bd58af3dc28f
    Signed-off-by: Jussi Rasanen <jrasanen@nvidia.com>
    Reviewed-on: http://git-master/r/673152
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: gk20a: Set lockboost size for compute (Terje Bergstrom, 2015-04-04)

    For compute channels on gk20a, set the lockboost size to zero.

    Bug 1573856
    Change-Id: I369cebf72241e4017e7d380c82caff6014e42984
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/594843
    GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: cde: ignore spurious context releases (Konsta Holtta, 2015-04-04)

    GPU channels may get spurious updates from at least nonstalling
    semaphore wait interrupts. Protect data structure sanity by ignoring
    releases on already-released (= not in use) cde contexts.

    Bug 200062826
    Change-Id: I5940a7557e902bcfcff1a7e8e4593472d9ac306c
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/666235
    (cherry picked from commit 47dc2f41eb8054b099b6eb9a4a7d82c97295d415)
    Reviewed-on: http://git-master/r/666657
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>

* gpu: nvgpu: cde: allow duplicate finish signals (Konsta Holtta, 2015-03-18)

    A channel update callback for a channel that has no more cde jobs
    signals that a cde context is free. Spurious channel updates may still
    happen from at least nonstalling semaphore wait interrupts. Instead of
    scary WARNs, use only gk20a_dbg_info() for info prints in these
    harmless situations, and double-check that only the first update starts
    a deleter work for temporary contexts.

    Change-Id: I68de8f35e2c366206c6efac3ee97025239e8bba2
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    (cherry-picked from commit f56a941b4962c5479291cae48e2abca6067e3f13)
    Reviewed-on: http://git-master/r/660849
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: cde: remove unused obj_ids (Konsta Holtta, 2015-03-18)

    The obj_id from gk20a_alloc_obj_ctx is not used, and calling
    free_obj_ctx is effectively a no-op, since the corresponding channel is
    also freed.

    Bug 200059216
    Change-Id: Icbe2cf5dc21d50cb007bf73829705451ada106ac
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/655368
    Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: cde: fix off-by-1 in buf allocation (Konsta Holtta, 2015-03-18)

    Bug 200046882
    Change-Id: I515e972f84cb7e1b17eef42ade6a4eaf0f8d71f8
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/559332
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: cde: wait for ctx deletion before get (Konsta Holtta, 2015-03-18)

    Wait for a possible temp-context deletion to finish properly before
    passing contexts around later, to prevent situations where scheduling
    the context deleter has completed but running it has not, and a new one
    could be scheduled again. When finished, schedule the deleter before
    freeing the context back to use, to prevent races. Warn in impossible
    situations where these double deletions would happen.

    Bug 200054186
    Bug 200052943
    Change-Id: I23ca0d1081eea77d0e453b9038adc914909b5f48
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/603439
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: cde: combine init and convert passes (Konsta Holtta, 2015-03-18)

    The CDE context needs to be initialized on the first run using a
    separate initialization gpfifo before the actual conversion. To prevent
    a race condition, include both of them in a single gpfifo whenever the
    initialization is performed.

    Bug 200052943
    Change-Id: I7eb09a906c0374825df71eba969e4596b94e5ff2
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/602888
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: cde: add trace events for ctx allocs (Konsta Holtta, 2015-03-18)

    Trace cde context allocation and deallocation with ftrace.

    Bug 200052943
    Change-Id: Ieeb625166662971fb3eb3fb29c986fdb6809c10b
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/602886
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: cde: warn on double finish and release (Konsta Holtta, 2015-03-18)

    Add WARNs to conditions that should never happen, to help debug any
    context issues.

    Bug 200052943
    Change-Id: Ibe2a9507f3a62bb7b2e263ff3ff21a24a092a971
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/602885
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: cde: restrict context count (Konsta Holtta, 2015-03-18)

    Add an upper limit for cde contexts, and wait for a while if a new
    context is requested while the limit has been exceeded. This happens
    only under very high load. If the timeout is exceeded, report -EAGAIN.

    Change-Id: I1fa47ad6cddf620eae00cea16ecea36cf4151cab
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/601719
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

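A generic sketch of the limit-and-wait policy (illustrative names and
counting scheme, not the actual cde context tracking): block briefly for a
slot, and return -EAGAIN if the limit is still exceeded when the wait times
out.

    #include <linux/atomic.h>
    #include <linux/wait.h>

    #define DEMO_MAX_CTX     32
    #define DEMO_CTX_WAIT_MS 100

    static DECLARE_WAIT_QUEUE_HEAD(demo_ctx_wq);
    static atomic_t demo_ctx_count = ATOMIC_INIT(0);

    static int demo_get_context(void)
    {
            long left;

            left = wait_event_timeout(demo_ctx_wq,
                                      atomic_read(&demo_ctx_count) < DEMO_MAX_CTX,
                                      msecs_to_jiffies(DEMO_CTX_WAIT_MS));
            if (!left)
                    return -EAGAIN; /* still over the limit after the wait */

            atomic_inc(&demo_ctx_count);
            return 0;
    }

    static void demo_put_context(void)
    {
            atomic_dec(&demo_ctx_count);
            wake_up(&demo_ctx_wq);  /* let a waiter claim the freed slot */
    }
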
* gpu: nvgpu: cde: report use counts to debugfs (Konsta Holtta, 2015-03-18)

    Create debugfs nodes for ctx_count, ctx_usecount and ctx_cont_top.

    Change-Id: I1360853b2650d37a96c8adf76368d48d9b457909
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/602860
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: cde: do not rearm deleter on failure (Konsta Holtta, 2015-03-18)

    Rescheduling the temp context deleter when it is not immediately
    possible is not necessary, and complicates things. Don't do it. The
    context would anyway be used later when its time comes in the free
    list, and the deletion would then be retried.

    This simplifies canceling the works when shutting down or going into
    suspend, since re-canceling the possibly rescheduled work is not
    needed. Releasing the app mutex is still necessary when deleting the
    whole cde.

    Bug 200052943
    Change-Id: I06afe1766097a78d7bcb93f3140855799ac903ca
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/601035
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: cde: cancel ctx deleter when using it (Konsta Holtta, 2015-03-18)

    Cancel the temporary context deleter work when acquiring a channel for
    use, to prevent re-scheduling when the same context would be quickly
    re-used and finished twice or more in a row before deletion.

    Change-Id: Iadd8230d9462adc451e506152a24c50a920a59e3
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/600273
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>