path: root/drivers/gpu/nvgpu
Commit message (author, age)
* nvgpu: gk20a: Fix Sparse warning (Minal Ugale, 2016-07-19)
    Fixed the following warning:
    - gk20a.c:147:5: warning: symbol 'gk20a_railgating_debugfs_init' was not declared ?

    Bug 200067946
    Bug 200088648
    Change-Id: Ic7b1a24cee5066249e7d25db87a3e1569a608e6c
    Signed-off-by: Minal Ugale <mugale@nvidia.com>
    Reviewed-on: http://git-master/r/1183272
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Sachin Nikam <snikam@nvidia.com>
* gpu: nvgpu: Implement a bitmap allocator (Alex Waterman, 2016-07-19)
    Implement a bitmap allocator for GPU use. This allocator is useful for
    managing memory (or resource) regions where the buddy allocator is not
    ideal. Some instances are small regions or regions where the resource
    management must not make calls to the kernel's memory allocation
    routines (anything that ultimately calls alloc_page()).

    The code path where this avoidance of alloc_page() is most required is
    the gpfifo submit path. In order to keep this routine fast and maintain
    predictable time constraints, no alloc_page() calls can be made. The
    buddy allocator does not work for this since every time a buddy is
    allocated there is the possibility that a pair (or more) of buddy
    structs have to be made. These allocs could perhaps require a call into
    alloc_page() if there is not enough space in the kmem_cache slab for
    the buddy structs.

    Change-Id: Ia46fce62d4bdafcebbc153b21b515cb51641d241
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1176446
    Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
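The sketch below illustrates the idea behind such a bitmap allocator; it is not the nvgpu implementation, and the names (nvgpu_bitmap_allocator, nvgpu_bitmap_alloc, blk_shift) are assumptions. The point is that once the bitmap is preallocated at init time, an allocation is only a bitmap search plus a bit-set, with nothing on the path that could reach alloc_page():

    #include <linux/bitmap.h>
    #include <linux/kernel.h>
    #include <linux/spinlock.h>
    #include <linux/types.h>

    /* Hypothetical per-allocator state: one bit per fixed-size block. */
    struct nvgpu_bitmap_allocator {
    	spinlock_t lock;
    	u64 base;		/* base address of the managed region  */
    	u64 num_blks;		/* number of blocks covered by the map */
    	u32 blk_shift;		/* log2 of the block size              */
    	unsigned long *bitmap;	/* preallocated at init time           */
    };

    /* Allocate @len bytes; returns 0 on failure, like gk20a_balloc(). */
    static u64 nvgpu_bitmap_alloc(struct nvgpu_bitmap_allocator *a, u64 len)
    {
    	unsigned long blks = DIV_ROUND_UP(len, 1UL << a->blk_shift);
    	unsigned long offs;

    	spin_lock(&a->lock);
    	offs = bitmap_find_next_zero_area(a->bitmap, a->num_blks, 0, blks, 0);
    	if (offs >= a->num_blks) {
    		spin_unlock(&a->lock);
    		return 0;	/* no metadata alloc, no alloc_page() */
    	}
    	bitmap_set(a->bitmap, offs, blks);
    	spin_unlock(&a->lock);

    	return a->base + ((u64)offs << a->blk_shift);
    }

A free would clear the same bit range; the real allocator also has to remember each allocation's length, which this sketch omits.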
* gpu: nvgpu: smarter debugging for allocators (Alex Waterman, 2016-07-19)
    Allow individual allocators to be debugged without enabling debugging
    on all allocators. The ALLOCATOR_DEBUG define will still work as
    expected and enable debugging for all allocators that see this define.

    Change-Id: I0d59fa29affeaac15381e65d4128e7bef2f15bd5
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1178689
    Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
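A minimal sketch of how a per-instance debug flag can coexist with the global ALLOCATOR_DEBUG define; the struct, field, and macro names here are hypothetical:

    #include <linux/printk.h>
    #include <linux/types.h>

    struct alloc_dbg_sketch {
    	char name[32];
    	bool debug;		/* per-instance switch, e.g. set via debugfs */
    };

    #if defined(ALLOCATOR_DEBUG)
    /* Global compile-time switch: every allocator logs. */
    #define alloc_dbg(a, fmt, args...) \
    	pr_info("%-25s " fmt, (a)->name, ##args)
    #else
    /* Otherwise only instances that opted in log. */
    #define alloc_dbg(a, fmt, args...)					\
    	do {								\
    		if ((a)->debug)						\
    			pr_info("%-25s " fmt, (a)->name, ##args);	\
    	} while (0)
    #endif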
* gpu: nvgpu: Change the allocator flag naming scheme (Alex Waterman, 2016-07-19)
    Move to a more generic name of GPU_ALLOC_*.

    Change-Id: Icbbd366847a9d74f83f578e4d9ea917a6e8ea3e2
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1176445
    Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
* gpu: nvgpu: Move buddy allocator to new file (Alex Waterman, 2016-07-19)
    Move the buddy allocator implementation to a new file to make the code
    more organized. Also, as part of this, commonize some macros and
    functions which will be used by future allocator implementations.

    Bug 1781897
    Change-Id: I1611534d5d872bf3b4677f7a1cc024a94b1c437e
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1172116
    Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
* gpu: nvgpu: Support multiple types of allocators (Alex Waterman, 2016-07-19)
    Support multiple types of allocation backends. Currently there is only
    one allocator implementation available: a buddy allocator. Buddy
    allocators have certain limitations though. For one, the allocator
    requires metadata to be allocated from the kernel's system memory.
    This causes a given buddy allocation to potentially sleep on a
    kmalloc() call.

    This patch has been created so that a new backend can be added which
    will avoid any dynamic system memory management routines from being
    called.

    Bug 1781897
    Change-Id: I98d6c8402c049942f13fee69c6901a166f177f65
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1172115
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
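A plausible shape for such a backend interface is an ops table that each allocator type fills in; this is only a sketch with assumed names (nvgpu_allocator_ops_sketch, nvgpu_alloc_sketch, nvgpu_free_sketch), not the driver's actual definitions:

    #include <linux/types.h>

    struct nvgpu_allocator_sketch;

    /* Each backend (buddy, bitmap, ...) provides its own ops table. */
    struct nvgpu_allocator_ops_sketch {
    	u64  (*alloc)(struct nvgpu_allocator_sketch *a, u64 len);
    	void (*free)(struct nvgpu_allocator_sketch *a, u64 addr);
    	void (*fini)(struct nvgpu_allocator_sketch *a);
    };

    struct nvgpu_allocator_sketch {
    	const struct nvgpu_allocator_ops_sketch *ops;
    	void *priv;	/* backend-specific state */
    };

    static inline u64 nvgpu_alloc_sketch(struct nvgpu_allocator_sketch *a, u64 len)
    {
    	return a->ops->alloc(a, len);
    }

    static inline void nvgpu_free_sketch(struct nvgpu_allocator_sketch *a, u64 addr)
    {
    	a->ops->free(a, addr);
    }

Callers only ever use the wrappers, so a backend that never touches the kernel's memory allocators can be swapped in where it matters (e.g. the gpfifo submit path).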
* gpu: nvgpu: process granularity for FECS traces (Thomas Fleury, 2016-07-19)
    When processing FECS traces, a hash table is used to retrieve the 'pid'
    of the process that created the channel/TSG. Report the process
    identifier (aka tgid in the kernel) instead of the thread identifier
    (aka pid) for FECS traces.

    Bug 1736423
    Change-Id: I54cb9d298b9fe3e1cccdd7145604cd01c5758c9d
    Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
    Reviewed-on: http://git-master/r/1166501
    (cherry picked from commit f7fd1f6d7ad0753b787ec20604a08a1f4882fe6f)
    Reviewed-on: http://git-master/r/1168728
    (cherry picked from commit 97a62e5b89352fce576f1bca71b38bf2242ff047)
    Reviewed-on: http://git-master/r/1177823
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Richard Zhao <rizhao@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
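For reference, the kernel-side distinction looks like the following; the helper name is hypothetical:

    #include <linux/sched.h>
    #include <linux/types.h>

    /*
     * The "process id" visible to user space is the thread group id (tgid);
     * current->pid identifies the individual thread that made the call.
     */
    static inline pid_t channel_owner_pid_sketch(void)
    {
    	return current->tgid;	/* process (tgid), not thread (pid) */
    }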
* gpu: nvgpu: add ref counting for GPU sched ctrl (Thomas Fleury, 2016-07-19)
    Jira VFND-1968

    Change-Id: Id84c5732e312e44db3d412df5c21e429227dd7fa
    Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
    Reviewed-on: http://git-master/r/1171286
    (cherry picked from commit 13a3a4355914635ed175708affef17dc8ef0b133)
    Reviewed-on: http://git-master/r/1177824
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Richard Zhao <rizhao@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: add sched control API (Thomas Fleury, 2016-07-19)
    Added a dedicated device node to allow an app manager to control TSG
    scheduling parameters:
    - Get list of TSGs
    - Get list of recent TSGs
    - Get list of TSGs per pid
    - Get TSG current scheduling parameters
    - Set TSG timeslice
    - Set TSG runlist interleave

    Jira VFND-1586
    Change-Id: I014c9d1534bce0eaea6c25ad114cf0cff317af79
    Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
    Reviewed-on: http://git-master/r/1160384
    (cherry picked from commit 75ca739517cc7f7f76714b5f6a1a57c39b8cb38e)
    Reviewed-on: http://git-master/r/1167021
    Reviewed-by: Richard Zhao <rizhao@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: fix gk20a_mm_smmu_vaddr_translate() (Richard Zhao, 2016-07-18)
    - remove checking of has_physical_mode
    - check whether get_physical_addr_bits is null

    JIRA VFND-1965
    Change-Id: If19b297dc853b9e0b5879c5b2e0a350b5d9b279a
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: http://git-master/r/1175738
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
* gpu: nvgpu: Debugfs support for Railgating stats (Deepak Goyal, 2016-07-18)
    This patch calculates:
    - Total time spent by GPU with rails gated.
    - Total time spent by GPU with rails ungated.
    - Total Railgating Cycles.
    and dumps this information in a debugfs file.

    This feature requires CONFIG_DEBUG_FS set to true.

    Bug 200195100
    Change-Id: I1379f11237ce4900076947e18524caaa3304c7cb
    Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
    Reviewed-on: http://git-master/r/1178308
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
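A sketch of how such counters are typically exposed through debugfs with a seq_file show routine; the struct and field names are assumptions, not the patch's actual code. The fops would be registered with debugfs_create_file() when CONFIG_DEBUG_FS is enabled:

    #include <linux/debugfs.h>
    #include <linux/fs.h>
    #include <linux/seq_file.h>
    #include <linux/types.h>

    /* Assumed counters, updated from the railgate/unrailgate paths. */
    struct railgate_stats_sketch {
    	s64 total_gated_us;
    	s64 total_ungated_us;
    	unsigned long railgating_cycles;
    };

    static int railgate_stats_show(struct seq_file *s, void *unused)
    {
    	struct railgate_stats_sketch *st = s->private;

    	seq_printf(s, "time with rails gated:   %lld us\n", st->total_gated_us);
    	seq_printf(s, "time with rails ungated: %lld us\n", st->total_ungated_us);
    	seq_printf(s, "railgating cycles:       %lu\n", st->railgating_cycles);
    	return 0;
    }

    static int railgate_stats_open(struct inode *inode, struct file *file)
    {
    	return single_open(file, railgate_stats_show, inode->i_private);
    }

    static const struct file_operations railgate_stats_fops = {
    	.open    = railgate_stats_open,
    	.read    = seq_read,
    	.llseek  = seq_lseek,
    	.release = single_release,
    };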
* gpu: nvgpu: use nvgpu_free for gpfifo pipe cleanup on error (Konsta Holtta, 2016-07-18)
    Replace kfree with nvgpu_free in the error handling path in
    gk20a_alloc_channel_gpfifo where the gpfifo pipe buffer is being
    allocated, because it's allocated with nvgpu_alloc.

    Jira DNVGPU-21
    Change-Id: I73100394b67da2ab064e4e9df6b430d818abce56
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1182401
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: gk20a: Add fuse.h header (Shreshtha SAHU, 2016-07-17)
    Add "soc/tegra/fuse.h" to include the declaration of
    tegra_get_chip_id() for kernel version 4.4 and higher, as the upstream
    fuse header is not available in older kernel versions.

    Change-Id: Ib83fc6965bc46bb729eab1cc583b9c963f501738
    Signed-off-by: Shreshtha SAHU <ssahu@nvidia.com>
    Reviewed-on: http://git-master/r/1180686
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
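The guard presumably follows the usual kernel-version pattern; this is a sketch of the idea, not the exact hunk:

    #include <linux/version.h>

    /*
     * tegra_get_chip_id() is declared in the upstream fuse header only from
     * kernel 4.4 onwards, so the include has to be guarded.
     */
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 4, 0)
    #include <soc/tegra/fuse.h>
    #endif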
* gpu: nvgpu: fix sparse warning (Sachit Kadle, 2016-07-16)
    Add static qualifier to nvgpu_pci_pm_init, as it is only used within
    the current file.

    Bug 200067946
    Bug 200088648
    Change-Id: Ifb7d3ec174a9f8eea0ac53421c953761886f48c6
    Signed-off-by: Sachit Kadle <skadle@nvidia.com>
    Reviewed-on: http://git-master/r/1181867
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Sachin Nikam <snikam@nvidia.com>
    Tested-by: Sachin Nikam <snikam@nvidia.com>
* kernel: nvgpu: fix Coverity defect (George Bauernschmidt, 2016-07-14)
    Bug 1781383

    CID 37989 - Changed for_each_set_bit addr parameter to unsigned long.

    Change-Id: I3f3f314a1aea9d376d45699f870a9e372854f069
    Signed-off-by: George Bauernschmidt <georgeb@nvidia.com>
    Reviewed-on: http://git-master/r/1177417
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Sachin Nikam <snikam@nvidia.com>
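For context, for_each_set_bit() walks a bitmap declared as unsigned long, so both the address passed in and the iterator should use that type. A small illustrative example (names are hypothetical):

    #include <linux/bitops.h>
    #include <linux/printk.h>

    /* The bitmap address handed to for_each_set_bit() must be unsigned long. */
    static void walk_units_sketch(unsigned long unit_mask)
    {
    	unsigned long bit;

    	for_each_set_bit(bit, &unit_mask, BITS_PER_LONG)
    		pr_info("unit %lu is present\n", bit);
    }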
* gpu: nvgpu: handle map/unmap for vidmem gmmu pages (Konsta Holtta, 2016-07-14)
    If page tables are allocated from vidmem, cpu cache flushing doesn't
    make sense, so skip it. Also unify map/unmap actions if the pages are
    not mapped.

    Jira DNVGPU-20
    Change-Id: I36b22749aab99a7bae26c869075f8073eab0f860
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1178830
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: vgpu: add pm qos support (Richard Zhao, 2016-07-14)
    Send cmd to RM server to change clk rate when PM_QOS_GPU_FREQ_BOUNDS
    max changes.

    Bug 200206160
    Change-Id: I7f19e5f711426517baf8e7f934bf41972012644b
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: http://git-master/r/1172792
    (cherry picked from commit 973c258fd85449c3862df2498362e358fd3682c9)
    Reviewed-on: http://git-master/r/1180892
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: enable runtime pm for pci (Sachit Kadle, 2016-07-14)
    Enable runtime power management ops for PCIe devices, and move the
    gk20a_pm_finalize_poweron call into the resume routine. This change
    only implements suspend as a stub, as suspend/resume has not yet been
    verified for dGPU.

    Bug 1785512
    Bug 200187507
    Change-Id: I076bafc03c6b4ba10dce874804ed3fd0b9c7b0d8
    Signed-off-by: Sachit Kadle <skadle@nvidia.com>
    Reviewed-on: http://git-master/r/1179860
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: set has_physical_mode only if running on native linux (Richard Zhao, 2016-07-13)
    Set has_physical_mode if running on native linux, for better
    performance. Set it false if running on a native gpu but on linux-hv,
    as the driver cannot get the real physical address.

    JIRA VFND-1965
    Change-Id: I6e0322e64ad14d35d179a33e979157b53d77005a
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: http://git-master/r/1175739
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
* gpu: nvgpu: zero vidmem pages on allocation (Konsta Holtta, 2016-07-12)
    The allocator doesn't give us empty pages, so make sure that they're
    full of zeros, just like the sysmem alloc path does.

    Jira DNVGPU-16
    Change-Id: I0ff8a0718829b13973535ba1111a8a11b91be04d
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1178829
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Make the sync_pt_value_str more informative (Alex Waterman, 2016-07-08)
    Add semaphore-specific information to the sync_pt value string when the
    underlying primitive for the sync_pt is a semaphore. This is useful for
    debugging purposes.

    Bug 1732449
    JIRA DNVGPU-12
    Change-Id: I0dd7d921e39e3245ed1778aad77e20297b55df61
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1162689
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: use vidmem by default in gmmu_alloc variants (Konsta Holtta, 2016-07-08)
    For devices that have vidmem available, use the vidmem allocator in
    gk20a_gmmu_alloc{,attr,_map,_map_attr}. For others, use sysmem.

    Because all of the buffers haven't been tested to work in vidmem yet,
    rename calls to gk20a_gmmu_alloc{,attr,_map,_map_attr} to have _sys at
    the end to declare explicitly that sysmem is used. Enabling vidmem for
    each buffer is now a matter of removing "_sys" from the function call.

    Jira DNVGPU-18
    Change-Id: Ibe42f67eff2c2b68c36582e978ace419dc815dc5
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1176805
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: simplify power management (Deepak Nibade, 2016-07-08)
    We currently initialize both runtime PM and pm_domains frameworks and
    use pm_domain to control runtime power management of NvGPU. But since
    the GPU has a separate rail, using pm_domain is not strictly required.
    Hence remove pm_domain support and use runtime PM only for all the
    power management. This also simplifies the code a lot.

    Initialization in gk20a_pm_init():
    - if railgate_delay is set, set autosuspend delay of runtime PM
    - try enabling runtime PM
    - if runtime PM is now enabled, keep GPU railgated
    - if runtime PM is not enabled, keep GPU unrailgated
    - if can_railgate = false, disable runtime PM and keep GPU unrailgated

    Set gk20a_pm_ops with below callbacks for runtime PM:

        static const struct dev_pm_ops gk20a_pm_ops = {
            .runtime_resume = gk20a_pm_runtime_resume,
            .runtime_suspend = gk20a_pm_runtime_suspend,
            .resume = gk20a_pm_resume,
            .suspend = gk20a_pm_suspend,
        }

    Move gk20a_busy() to use runtime checks of pm_runtime_enabled()
    instead of using compile time checks on CONFIG_PM. Clean up some
    pm_domain related code. Remove use of gk20a_pm_enable/disable_clk()
    since this should already be done in the platform specific
    unrailgate()/railgate() APIs. Fix "railgate_delay" and
    "railgate_enable" sysfs to use runtime PM calls.

    For VGPU, disable runtime PM during vgpu_pm_init(). With this, we will
    initialize vgpu with vgpu_pm_finalize_poweron() upon the first call to
    gk20a_busy().

    Jira DNVGPU-57
    Change-Id: I6013e33ae9bd28f35c25271af1239942a4fa0919
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1163216
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
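A sketch of the busy/idle pattern this commit describes, using the standard runtime PM calls and a run-time pm_runtime_enabled() check instead of CONFIG_PM ifdefs; the function names are placeholders, not the actual gk20a_busy()/gk20a_idle() bodies:

    #include <linux/device.h>
    #include <linux/pm_runtime.h>

    /* Take a runtime PM reference before touching the GPU. */
    static int gpu_busy(struct device *dev)
    {
    	int ret;

    	if (!pm_runtime_enabled(dev))
    		return 0;		/* runtime PM off: GPU stays unrailgated */

    	ret = pm_runtime_get_sync(dev);	/* unrailgates the GPU if needed */
    	return ret < 0 ? ret : 0;
    }

    /* Drop the reference; the autosuspend delay models railgate_delay. */
    static void gpu_idle(struct device *dev)
    {
    	if (!pm_runtime_enabled(dev))
    		return;

    	pm_runtime_mark_last_busy(dev);
    	pm_runtime_put_autosuspend(dev);
    }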
* gpu: nvgpu: add gk20a_busy() for debug operations (Deepak Nibade, 2016-07-07)
    Add missing gk20a_busy()/idle() for debug operation IOCTLs.

    Bug 1765446
    Change-Id: Id238646a116ea573f64e3f92def40e52aadd5a11
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1173719
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Remove unused code in simulation paths (Terje Bergstrom, 2016-07-07)
    Remove code that was compiled out or hard coded not to be ever invoked.

    Coverity ID 24463
    Change-Id: Ia4a68bbe43eaebd9f3de1df1318095c014b9e9d0
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1172046
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
* gpu: nvgpu: Add check for chip name size (Terje Bergstrom, 2016-07-07)
    When copying the chip name to GPU characteristics, limit the size of
    the copy to the size of the target name field.

    Coverity ID 33613
    Change-Id: Ia538d47b9d5e1dd122d57ccd8bfbb3902612874c
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1172007
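The fix presumably amounts to bounding the copy by the destination field; an illustrative example with assumed struct and function names:

    #include <linux/string.h>

    /* Assumed fixed-size destination field, as in the GPU characteristics. */
    struct gpu_chars_sketch {
    	char chip_name[8];
    };

    static void set_chip_name(struct gpu_chars_sketch *gc, const char *name)
    {
    	/* Bound the copy by the destination, not by the source string. */
    	strlcpy(gc->chip_name, name, sizeof(gc->chip_name));
    }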
* gpu: nvgpu: vgpu: dbg_set_powergate support (Peter Daifuku, 2016-07-06)
    Add support for dbg_set_powergate when virtualized.

    Jira VFND-1905
    Change-Id: I0d81c8863b3eda4ae4fee42e5a95d2fc9d78b174
    Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
    Reviewed-on: http://git-master/r/1162048
    (cherry picked from commit 0dfc55f390a10e21ae13e14dd2f16e89a3bddfa7)
    Reviewed-on: http://git-master/r/1167182
    (cherry picked from commit 4e34a1844558d93da5ad208532ec28aeda228f95)
    Reviewed-on: http://git-master/r/1174701
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
* gpu: nvgpu: Make gk20a_init_sema_pool() static (Alex Waterman, 2016-07-06)
    This function is only used in mm_gk20a.c and as a result should be
    static (fixes a sparse issue).

    Bug 200088648
    Change-Id: I6787b4ebc5925a503d8ef2fed90c3d7cd5027589
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1176309
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Richard Zhao <rizhao@nvidia.com>
* gpu: nvgpu: support in-kernel vidmem mappings (Konsta Holtta, 2016-07-06)
    Propagate the buffer aperture flag in gk20a_locked_gmmu_map up so that
    buffers represented as a mem_desc and present in vidmem can be mapped
    to the gpu.

    JIRA DNVGPU-18
    JIRA DNVGPU-76
    Change-Id: I46cf87e27229123016727339b9349d5e2c835b3e
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1169308
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* Revert "gpu: nvgpu: take platform power ref at power on" (Alex Waterman, 2016-07-05)
    This reverts commit 1e01a49fdc139b8cdf5164b4a6767d22ef4ad1d3.

    Bug 1784924
    Change-Id: I7bd77f34e37395ed5339d018897d8db91eb5ee0e
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1175903
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: add support for vidmem in page tables (Konsta Holtta, 2016-07-05)
    Modify page table updates to take an aperture flag (up until
    gk20a_locked_gmmu_map()), don't hard-assume sysmem, and propagate it to
    hardware.

    Jira DNVGPU-76
    Change-Id: Ifcb22900c96db993068edd110e09368f72b06f69
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1169307
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: initial support for vidmem apertures (Konsta Holtta, 2016-07-05)
    Add gk20a_aperture_mask() for memory target selection now that buffers
    can actually be allocated from vidmem, and use it in all cases that
    have a mem_desc available.

    Jira DNVGPU-76
    Change-Id: I4353cdc6e1e79488f0875581cfaf2a5cfb8c976a
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1169306
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
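A minimal sketch of the aperture-mask idea: pick the hardware field value according to where the buffer lives. The enum and struct here are stand-ins for the driver's real mem_desc/aperture types:

    #include <linux/types.h>

    enum aperture_sketch {
    	APERTURE_SYSMEM_SKETCH,
    	APERTURE_VIDMEM_SKETCH,
    };

    struct mem_desc_sketch {
    	enum aperture_sketch aperture;
    };

    /* Return the HW field value that matches where the buffer lives. */
    static u32 aperture_mask_sketch(struct mem_desc_sketch *mem,
    				u32 sysmem_mask, u32 vidmem_mask)
    {
    	return mem->aperture == APERTURE_VIDMEM_SKETCH ?
    		vidmem_mask : sysmem_mask;
    }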
* gpu: nvgpu: Initialize TSG ops for gm206 (Terje Bergstrom, 2016-07-04)
    Bug 200214046

    Change-Id: I483e6c5ae484ccae61712884f7b4368291791fcd
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1172598
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: kmalloc does not return error codes (Terje Bergstrom, 2016-07-04)
    kmalloc() returns NULL instead of an error code on failure. Do not
    check if the return value is an error code.

    Change-Id: I31a46080ab51773a22bebe4cf03a5b0c94467204
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1172052
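In other words, the correct failure check is a NULL test, not IS_ERR(); a small illustrative example:

    #include <linux/slab.h>

    static void *alloc_example(size_t len)
    {
    	void *buf = kmalloc(len, GFP_KERNEL);

    	if (!buf)	/* correct: kmalloc() signals failure with NULL */
    		return NULL;

    	/* Checking IS_ERR(buf) here would be the pattern being removed. */
    	return buf;
    }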
* gpu: nvgpu: Use 64-bit math for cacheline offset (Terje Bergstrom, 2016-07-04)
    Cast cacheline_start to u64 to use 64-bit maths for the cacheline
    offset.

    Coverity ID 24250
    Change-Id: Ic10c92ebb737bd39486a83e4de53cc1191193667
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1172053
* gpu: nvgpu: vgpu: remove sysfs creation (Shawn Joo, 2016-07-04)
    gk20a sysfs is not currently applicable for vgpu. Remove it for now.

    Bug 200137760
    Change-Id: Ia79eb5d0958856958fe74ce6eb6ccdd521a6f8f7
    Signed-off-by: Shawn Joo <sjoo@nvidia.com>
    Reviewed-on: http://git-master/r/1164600
    (cherry picked from commit 2a630b197df914244f5be0608598e395c3783664)
    Reviewed-on: http://git-master/r/1172417
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: fix coverity issue (Vandana Salve, 2016-07-04)
    Fix coverity defect: resource leak.

    Coverity id 33601
    Bug 1781383
    Change-Id: I5eef698d9a2dfccd48199c628a7898351cb74445
    Signed-off-by: Vandana Salve <vsalve@nvidia.com>
    Reviewed-on: http://git-master/r/1173660
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
* gpu: nvgpu: cancel job clean up before aborting channel (Deepak Nibade, 2016-07-04)
    It is possible that when we abort the channel, we have the job clean up
    worker running, which could race with the abort and sometimes result in
    the panic below:

    [ 245.483566] Unable to handle kernel paging request at virtual address 800000000
    ...
    [ 245.548991] PC is at gk20a_channel_abort_clean_up+0xb8/0x140
    [ 245.554683] LR is at gk20a_channel_abort_clean_up+0xac/0x140
    ...
    [ 247.301860] [<ffffffc000479390>] gk20a_channel_abort_clean_up+0xb8/0x140
    [ 247.312853] [<ffffffc0004794d4>] gk20a_channel_abort+0xbc/0xc8
    [ 247.322970] [<ffffffc0004794f8>] gk20a_disable_channel+0x18/0x30
    [ 247.333267] [<ffffffc000479628>] gk20a_free_channel+0x118/0x584
    [ 247.343473] [<ffffffc000479aa0>] gk20a_channel_close+0xc/0x14
    [ 247.353479] [<ffffffc000479b80>] gk20a_channel_release+0xd8/0x104

    Fix this by cancelling the job clean up worker before aborting the
    channel.

    Bug 1777281
    Change-Id: Ic24c7c03b27cfb5cd164a52efdb1e2813a41a10a
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1174416
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: take platform power ref at power on (Sachit Kadle, 2016-06-30)
    Currently, the host1x power refcount may decrement to 0 while the GPU
    is still powered on and we're still servicing IRQs. So to prevent this
    situation, take a ref while the GPU is being powered on, and decrement
    it during power off. Since we are always holding one reference while
    the GPU is powered on, we can remove this handling from
    gk20a_busy/idle().

    Bug 200187507
    Change-Id: Idabe88754f009f1e8de8dc821d53be3e013dc657
    Signed-off-by: Sachit Kadle <skadle@nvidia.com>
    Reviewed-on: http://git-master/r/1172320
    (cherry picked from commit 3e27e6a5820f5c1ad05596553d75e8979b71f1bd)
    Reviewed-on: http://git-master/r/1172607
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: fixing sparse error/warning (Mahantesh Kumbar, 2016-06-29)
    - $TOP/kernel/nvgpu/drivers/gpu/nvgpu/gm206/acr_gm206.c:207:5:
      warning: symbol 'gm206_bootstrap_hs_flcn' was not declared.
      Should it be static?

    Bug 200067946
    Change-Id: Idc8c464b03eb9a382f54095cd3e04786be19308c
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    Reviewed-on: http://git-master/r/1172969
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Revamp semaphore support (Alex Waterman, 2016-06-28)
    Revamp the support the nvgpu driver has for semaphores.

    The original problem with nvgpu's semaphore support is that it required
    a SW based wait for every semaphore release. This was because for every
    fence that gk20a_channel_semaphore_wait_fd() waited on a new semaphore
    was created. This semaphore would then get released by SW when the
    fence signaled. This meant that for every release there was necessarily
    a sync_fence_wait_async() call which could block. The latency of this
    SW wait was enough to cause massive degradation in performance.

    To fix this a fast path was implemented. When a fence is passed to
    gk20a_channel_semaphore_wait_fd() that is backed by a GPU semaphore, a
    semaphore acquire is directly used to block the GPU. No longer is a
    sync_fence_wait_async() performed nor is there an extra semaphore
    created.

    To implement this fast path the semaphore memory had to be shared
    between channels. Previously, since a new semaphore was created every
    time through gk20a_channel_semaphore_wait_fd(), what address space a
    semaphore was mapped into was irrelevant. However, when using the fast
    path a semaphore may be released on one address space but acquired in
    another.

    Sharing the semaphore memory was done by making a fixed GPU mapping in
    all channels. This mapping points to the semaphore memory (the so
    called semaphore sea). This global fixed mapping is read-only to make
    sure no semaphores can be incremented (i.e. released) by a malicious
    channel. Each channel then gets a RW mapping of its own semaphore. This
    way a channel may only acquire other channels' semaphores but may both
    acquire and release its own semaphore.

    The gk20a fence code was updated to allow introspection of the GPU
    backed fences. This allows detection of when the fast path can be
    taken. If the fast path cannot be used (for example when a fence is
    sync-pt backed) the original slow path is still present. This gets used
    when the GPU needs to wait on an event from something which only
    understands how to use sync-pts.

    Bug 1732449
    JIRA DNVGPU-12
    Change-Id: Ic0fea74994da5819a771deac726bb0d47a33c2de
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1133792
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
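A schematic of the fast/slow path split described above; all helpers and types are stand-ins, since the real code spans the fence, sync, and semaphore layers:

    struct channel_sketch;
    struct gpu_semaphore_sketch;

    struct fence_sketch {
    	/* Non-NULL only when the fence is backed by a GPU semaphore. */
    	struct gpu_semaphore_sketch *sema;
    };

    /* Stubs standing in for real pushbuffer / sync-framework calls. */
    static int push_semaphore_acquire(struct channel_sketch *c,
    				  struct gpu_semaphore_sketch *s)
    {
    	return 0;
    }

    static int wait_async_and_release(struct channel_sketch *c,
    				  struct fence_sketch *f)
    {
    	return 0;
    }

    static int wait_fence_sketch(struct channel_sketch *c, struct fence_sketch *f)
    {
    	if (f->sema) {
    		/* Fast path: the GPU blocks on a semaphore ACQUIRE. */
    		return push_semaphore_acquire(c, f->sema);
    	}

    	/* Slow path, e.g. sync-pt backed fences: SW wait as before. */
    	return wait_async_and_release(c, f);
    }

The key design point is that introspection of the fence decides which path to take, so no extra semaphore or CPU-side wait is needed when both ends already speak GPU semaphores.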
* gpu: nvgpu: set preempt state in golden ctx init (Thomas Fleury, 2016-06-28)
    Some parameters like gfxp_wfi_timeout are context switched. Once the
    context has been initialized with default values (sw_ctx_load), we need
    to ensure that preemption state is properly set before saving the
    golden ctx image.

    Bug 1593548
    Jira VFND-1894
    Change-Id: Ib1ba03f4ca1606302b1cf1f0738d3610a162a5c6
    Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
    Reviewed-on: http://git-master/r/1168662
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add init_preemption_state gr method (Thomas Fleury, 2016-06-28)
    This method is called when setting up gr hardware. It is meant to
    adjust preemption parameters.

    Bug 1593548
    Jira VFND-1894
    Change-Id: I0f5aa3212bec3058a0493366bed6fe2a365c9542
    Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
    Reviewed-on: http://git-master/r/1162625
    (cherry picked from commit c2e6d12570af28b3aae087401d7f670df40d40bd)
    Reviewed-on: http://git-master/r/1166987
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: fix coverity issues in sysfs/debugfs (Seshendra Gadagottu, 2016-06-27)
    Fix coverity issues in debugfs related to a missing null check before
    accessing a data member. Fix coverity issues in sysfs related to error
    code over-write and an uninitialized error code.

    Coverity ids: 20087564, 20087460, 20087461

    Bug 200192125
    Change-Id: If82288fca18464dca7093ce10f0beb1272489609
    Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: http://git-master/r/1171943
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add QoS notifier for common clk framework (Deepak Nibade, 2016-06-27)
    Define a specific QoS notifier for the common clk framework and protect
    it with CONFIG_COMMON_CLK.

    This new API will first get min/max requirements from pm_qos and set
    min/max freq values in devfreq. A call to update_devfreq() will then
    ensure that the new estimated frequency is clipped appropriately
    between the min and max values. This also ensures that the frequency is
    set along with all the book-keeping.

    Add the below platform specific notifier callback and use it with
    pm_qos_add_notifier():

        int (*qos_notify)()

    Register the callback only if qos_notify is set. We currently support
    only one qos_id, which is treated as a notifier for min frequency.
    Remove the dependency on qos_id, and use appropriate QoS APIs like
    pm_qos_read_min/max_bound(). Store devfreq's min/max frequency in
    struct gk20a for reference.

    Bug 1772462
    Change-Id: I63d6d17451d19c9d376b67df7db775b38929287d
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1161161
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
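A sketch of what such a qos_notify callback could look like, assuming the downstream helpers named in the message (pm_qos_read_min_bound()/pm_qos_read_max_bound() and the PM_QOS_GPU_FREQ_BOUNDS class) and that PM QoS reports frequencies in kHz; it is not the actual nvgpu callback:

    #include <linux/devfreq.h>
    #include <linux/kernel.h>
    #include <linux/mutex.h>
    #include <linux/notifier.h>
    #include <linux/pm_qos.h>

    struct gpu_qos_ctx {
    	struct notifier_block nb;	/* passed to pm_qos_add_notifier() */
    	struct devfreq *devfreq;
    	unsigned long min_hz;		/* read back by the devfreq governor */
    	unsigned long max_hz;
    };

    static int gpu_qos_notify(struct notifier_block *nb,
    			  unsigned long val, void *data)
    {
    	struct gpu_qos_ctx *ctx = container_of(nb, struct gpu_qos_ctx, nb);

    	/* Cache the current QoS bounds (kHz) as devfreq min/max (Hz). */
    	ctx->min_hz = (unsigned long)pm_qos_read_min_bound(PM_QOS_GPU_FREQ_BOUNDS) * 1000UL;
    	ctx->max_hz = (unsigned long)pm_qos_read_max_bound(PM_QOS_GPU_FREQ_BOUNDS) * 1000UL;

    	/* Re-estimate the frequency; the governor clips to [min, max]. */
    	mutex_lock(&ctx->devfreq->lock);
    	update_devfreq(ctx->devfreq);
    	mutex_unlock(&ctx->devfreq->lock);

    	return NOTIFY_OK;
    }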
* gpu: nvgpu: remove unused code (Deepak Nibade, 2016-06-27)
    In gk20a_tsg_bind_channel_fd(), API gk20a_get_channel_from_file()
    already handles converting a channel fd to a channel. Hence the
    explicit fget(ch_fd) is not required and is unused.

    Coverity id: 32386
    Bug 200192125
    Change-Id: I28674cebf088240b8b15129a22f6743c856ed88f
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1171713
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: fix fecs flush method existence check (Thomas Fleury, 2016-06-24)
    Properly check that the flush method exists. Before this change, it was
    unconditionally called from the poll function by accident.

    Bug 1778951
    Change-Id: Ibb8a5b7448bc4b7c475d60439e5b5bd1af9a12bf
    Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
    Reviewed-on: http://git-master/r/1168144
    (cherry picked from commit 178967b91f2632e5b84ce6d11e24f2a8f49c52ea)
    Reviewed-on: http://git-master/r/1170549
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Richard Zhao <rizhao@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
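The guard in question is presumably of this form; the types are simplified stand-ins for the per-chip HAL:

    struct gk20a_sketch;

    /* Stand-in for the per-chip HAL; flush may be NULL on some GPUs. */
    struct fecs_trace_ops_sketch {
    	int (*flush)(struct gk20a_sketch *g);
    };

    struct gk20a_sketch {
    	struct {
    		struct fecs_trace_ops_sketch fecs_trace;
    	} ops;
    };

    static void fecs_trace_poll_sketch(struct gk20a_sketch *g)
    {
    	/* Only call the flush method if this chip provides one. */
    	if (g->ops.fecs_trace.flush)
    		g->ops.fecs_trace.flush(g);

    	/* ... then read out the newly available trace records ... */
    }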
* gpu: nvgpu: vgpu: Add CE engine to engine list (Terje Bergstrom, 2016-06-24)
    Add CE engine to the vgpu engine list. The CE engine is defined
    differently for different GPUs, so we also add a HAL for initializing
    the engine info.

    Bug 1780185
    Change-Id: I5ae265551feac08d0c4d45402dd3277514e62b2d
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1169720
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
    Tested-by: Aingara Paramakuru <aparamakuru@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Lakshmanan M <lm@nvidia.com>
* gpu: nvgpu: Handle allocators with a base of 0 (Alex Waterman, 2016-06-23)
    When an allocator is created with a base of 0, the first allocated
    block could well be 0. This appears to be an error since gk20a_balloc()
    normally returns 0 for error cases. This patch removes one block from
    the allocatable resources when base is set to 0 so that code using
    gk20a_balloc() does not get tricked into thinking valid allocations are
    OOM cases.

    Change-Id: I641642d3f790c4c7860d0d1381f4db6f4f72e709
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1169764
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
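A sketch of the workaround: if the base is 0, sacrifice the first block so a valid allocation can never collide with the 0 error value (the function name and parameters are illustrative):

    #include <linux/types.h>

    /*
     * If the managed range starts at 0, give up the first block so that a
     * successful allocation can never return the value 0.
     */
    static void balloc_adjust_base_sketch(u64 *base, u64 *length, u64 blk_size)
    {
    	if (*base == 0) {
    		*base = blk_size;
    		*length -= blk_size;
    	}
    }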
* gpu: nvgpu: Compute reasonable max order for allocator (Alex Waterman, 2016-06-23)
    Compute a reasonable maximum order for buddy allocators that are
    created with a max_order of 0. Previously the max_order was just left
    as 0.

    Change-Id: I5c2f878fcd390610a4c02ac65189138ec7db30c8
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1169763
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
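A sketch of how a sensible default order can be derived from the number of managed blocks when the caller passes 0; the cap macro and function name are assumptions:

    #include <linux/kernel.h>
    #include <linux/log2.h>
    #include <linux/types.h>

    #define GPU_BALLOC_MAX_ORDER_SKETCH	31	/* assumed hard cap */

    /* Derive a usable max order when the caller passes max_order == 0. */
    static u64 balloc_max_order_sketch(u64 num_blks, u64 max_order)
    {
    	u64 true_max = ilog2(num_blks);

    	if (max_order == 0 || max_order > true_max)
    		max_order = true_max;

    	return min_t(u64, max_order, GPU_BALLOC_MAX_ORDER_SKETCH);
    }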