path: root/drivers/gpu
* gpu: nvgpu: Call t19x specific dGPU initialization (Terje Bergstrom, 2017-10-16)
  Change-Id: I888a03199f25a7f55a8ddd099c1980f8170a9a4a
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1579122
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Allocate memory for dGPU platform (Mahantesh Kumbar, 2017-10-16)
  - Allocate memory for platform data for each dGPU device detected on PCIe
  - Copy the detected device's static data into the allocated platform data
  - Free the allocated space in nvgpu_pci_remove()

  Issue: the static platform data is overwritten when two dGPUs with the
  same SKU/device-id are connected on a single platform, which causes
  accesses to the wrong dGPU space.
  Fix: allocate space dynamically for each detected dGPU device and copy
  the data from the detected dGPU's static data.

  JIRA NVGPUGV100-18
  Change-Id: Idf2d2d6d2016b831c21b9da6f8ee38b34304bd12
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1577913
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: replace wait_queue_head_t with nvgpu_cond (Debarshi Dutta, 2017-10-16)
  Replace existing usages of wait_queue_head_t with struct nvgpu_cond and
  use the corresponding APIs in order to reduce Linux dependencies in
  NVGPU.

  JIRA NVGPU-205
  Change-Id: I85850369c3c47d3e1704e4171b1d172361842423
  Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1575778
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
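The substitution follows a simple pattern; a minimal sketch, assuming the
nvgpu_cond API keeps roughly the wait_event-style shape (the exact nvgpu
function and macro signatures here are assumptions):

    /* Before: Linux wait queue */
    wait_queue_head_t wq;
    init_waitqueue_head(&wq);
    wait_event_interruptible(wq, work_pending != 0);
    wake_up(&wq);

    /* After: nvgpu abstraction (signatures assumed) */
    struct nvgpu_cond cond;
    nvgpu_cond_init(&cond);
    NVGPU_COND_WAIT_INTERRUPTIBLE(&cond, work_pending != 0, 0 /* no timeout */);
    nvgpu_cond_signal(&cond);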
* gpu: nvgpu: Remove bios_blob debugfs (Terje Bergstrom, 2017-10-16)
  linux/debugfs.h was included in gk20a.h because of the debugfs entry
  bios_blob, which can be used for checking the contents of the VBIOS.
  That has never been used, so instead of abstracting it, this patch
  removes the feature altogether.

  Two files were using debugfs but did not #include <linux/debugfs.h>.
  They failed to build now that gk20a.h no longer #includes it, so an
  explicit #include was added.

  JIRA NVGPU-259
  Change-Id: Ie1ea9be1a8920441b1616f34e64e505e6e10e38c
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1570404
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Abstract rw_semaphore implementation (Terje Bergstrom, 2017-10-14)
  Add an abstraction for rw_semaphore. On Linux the abstraction is
  implemented in terms of the native rw_semaphore. Change
  deterministic_busy to use the new implementation.

  JIRA NVGPU-259
  Change-Id: Ia9c1b6e397581bff7711c5ab6fb76ef6d23cff87
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1570405
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Move time correlation arg check to Linux (Terje Bergstrom, 2017-10-14)
  Move the code checking the time correlation IOCTL arguments to the
  Linux IOCTL code.

  JIRA NVGPU-259
  Change-Id: Id3ef6839ee96844104d87e943b6940b952261796
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1578700
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: remove use of multichar constants (Deepak Nibade, 2017-10-13)
  In gk20a.h we use the multichar constants 'DONE', '0R32' and '0W32',
  but these constants fail compilation on some cross-OS compilers. Hence
  remove them by creating a specific MULTICHAR_TAG() macro. All the user
  space components also form these constants in a similar fashion.

  Jira NVGPU-259
  Change-Id: Ibe83617a8d15b657e450fdd59dbc171b3d822842
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1576933
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
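A multichar constant like 'DONE' is just four characters packed into a
32-bit value, so the replacement macro amounts to explicit byte packing
(an illustrative sketch; the exact definition and argument order used in
gk20a.h are assumptions):

    /* Pack four characters into one u32 tag. */
    #define MULTICHAR_TAG(a, b, c, d) (((a) << 24) | ((b) << 16) | ((c) << 8) | (d))

    /* The former multichar constant 'DONE' then becomes: */
    u32 tag = MULTICHAR_TAG('D', 'O', 'N', 'E');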
* gpu: nvgpu: forward declare gk20a_debug_output in gr_gp10b.h (Deepak Nibade, 2017-10-13)
  Forward declare struct gk20a_debug_output in gr_gp10b.h since we do not
  explicitly include any header file for it.

  Jira NVGPU-259
  Change-Id: I9a4048591363e2ddbe7e8b3c2e56ab1e4eae0b3a
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1576931
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
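A forward declaration is enough because the header only needs pointers to
the type, which keeps it from pulling in the full definition (a sketch;
the prototype shown is a hypothetical user of the type, not necessarily
one taken from gr_gp10b.h):

    /* gr_gp10b.h */
    struct gk20a_debug_output;    /* forward declaration, no #include needed */

    /* Pointer parameters are fine against an incomplete type: */
    void gr_gp10b_dump_status(struct gk20a *g, struct gk20a_debug_output *o);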
* gpu: nvgpu: include nvgpu/types in pramin.h (Deepak Nibade, 2017-10-13)
  include/nvgpu/pramin.h currently includes the Linux specific header
  linux/types.h. But since this is a common header, we should include
  nvgpu/types.h, which is generic and Linux independent.

  Jira NVGPU-259
  Change-Id: I0acff92b3de9431e615faa6de26306cccd5ca443
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1576930
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: protect stack_trace with config (Deepak Nibade, 2017-10-13)
  We use struct stack_trace in struct channel_gk20a_ref_action. But since
  channel_gk20a_ref_action is needed only if GK20A_CHANNEL_REFCOUNT_TRACKING
  is set, protect it with that config.

  Jira NVGPU-259
  Change-Id: I6b2d6f470bf924bb1ddfd31ba9968b56c63c2372
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1576929
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
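The guard is an ordinary preprocessor conditional around the struct
definition (a sketch; the fields inside the struct, and whether the
symbol is tested with #if or #ifdef in channel_gk20a.h, are assumptions):

    #if GK20A_CHANNEL_REFCOUNT_TRACKING
    struct channel_gk20a_ref_action {
            /* ... other ref tracking fields ... */
            struct stack_trace trace;  /* only compiled when tracking is enabled */
    };
    #endif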
* gpu: nvgpu: Use config in makefile for debug_kmem.c (Alex Waterman, 2017-10-13)
  Conditionally compile debug_kmem.c in the Makefile since the entire
  file is not needed when CONFIG_NVGPU_TRACK_KMEM_USAGE is not enabled.
  Removing as many conditional compilation flags from the C code itself
  as possible is highly desirable from a code complexity standpoint.

  Change-Id: If1e87986ca1ee6d71485f1ab40f10c1645c1a628
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1576543
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
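In kbuild terms the change amounts to one conditional object rule (a
sketch; the object path and the nvgpu- composite target name are
assumptions):

    # Makefile: build debug_kmem.o only when kmem usage tracking is enabled
    nvgpu-$(CONFIG_NVGPU_TRACK_KMEM_USAGE) += common/linux/debug_kmem.o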
* gpu: nvgpu: vgpu: flatten out vgpu hal (Peter Daifuku, 2017-10-13)
  Instead of calling the native HAL init function and then adding
  multiple layers of modification for VGPU, flatten out the sequence so
  that all entry points are set statically and visible in a single file.

  JIRA ESRM-30
  Change-Id: Ie424abb48bce5038874851d399baac5e4bb7d27c
  Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1574616
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: enhance pbdma debug info (seshendra Gadagottu, 2017-10-13)
  Enhanced the pbdma error output to print the pbdma interrupt error.
  Generated the following hw definitions to dump relevant data:
    pbdma_gp_shadow_0_r
    pbdma_gp_shadow_1_r
  Updated gk20a_dump_pbdma_status to dump this additional info:
    pbdma_gp_put_r
    pbdma_gp_get_r
    pbdma_gp_shadow_0_r
    pbdma_gp_shadow_1_r

  Bug 2003671
  Change-Id: Iaa75d936e00470a2b8d1151f60dbeb741b3f9bce
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1572182
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Abstract IO aperture accessors (Terje Bergstrom, 2017-10-13)
  Add an abstraction of the IO aperture accessors. Add new functions
  gk20a_io_exists() and gk20a_io_valid_reg() to remove dependencies on
  aperture fields from common code.

  Implement the Linux version of the abstraction by moving gk20a_readl()
  and gk20a_writel() to a new Linux specific io.c. Move the fields
  defining the IO aperture to nvgpu_os_linux. Add t19x specific IO
  aperture initialization functions and add a t19x specific section to
  nvgpu_os_linux.

  JIRA NVGPU-259
  Change-Id: I09e79cda60d11a20d1099a9aaa6d2375236e94ce
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1569698
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
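On the Linux side the accessors end up as thin wrappers around
readl()/writel() against the aperture fields that now live in
nvgpu_os_linux (a minimal sketch; the helper used to get from gk20a to
nvgpu_os_linux and the regs field name are assumptions, and the real
implementation also has tracing/logging):

    u32 gk20a_readl(struct gk20a *g, u32 r)
    {
            struct nvgpu_os_linux *l = nvgpu_os_linux_from_gk20a(g);

            return readl(l->regs + r);
    }

    void gk20a_writel(struct gk20a *g, u32 r, u32 v)
    {
            struct nvgpu_os_linux *l = nvgpu_os_linux_from_gk20a(g);

            writel(v, l->regs + r);
    }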
* gpu: nvgpu: Reduce usage of nvgpu_vidmem_get_page_alloc (Alex Waterman, 2017-10-13)
  Reduce the usage of nvgpu_vidmem_get_page_alloc() and friends as much
  as possible. This reduces the dependency of nvgpu on Linux SGLs. SGLs
  still need to be used, however, since sharing buffers in userspace is
  done by dma_buf FD. The best way to pass the vidmem buf through the
  dma_buf is by SGL pointer.

  JIRA NVGPU-30
  JIRA NVGPU-138
  Change-Id: Ide0e9e5a557f00aa63b063be085042101a5b34ee
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1540709
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Separate vidmem alloc from Linux code (Alex Waterman, 2017-10-13)
  Split the core vidmem allocation from the Linux component of vidmem
  allocation. The core vidmem allocation allocates the nvgpu_mem struct
  that defines the vidmem buffer in the core MM code. The Linux code now
  allocates some Linux specific stuff (dma_buf, etc) and also allocates
  the core vidmem buf.

  JIRA NVGPU-30
  JIRA NVGPU-138
  Change-Id: I88e87e0abd5ec714610eacc6eac17e148bcee3ce
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1540708
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Rename vidmem APIs (Alex Waterman, 2017-10-13)
  Rename the VIDMEM APIs to be prefixed by nvgpu_ to ensure consistency
  and that all the non-static vidmem functions are properly namespaced.

  JIRA NVGPU-30
  JIRA NVGPU-138
  Change-Id: I9986ee8f2c8f95a4b7c5e2b9607bc1e77933ccfc
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1540707
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add syncpoint read map (seshendra Gadagottu, 2017-10-13)
  For the sync-point read map:
  1. Added nvgpu_mem memory in the gk20a struct; this memory is allocated
     in gk20a_finalize_poweron() and freed in gk20a_remove().
  2. Added "u64 syncpt_ro_map_gpu_va" in the vm_gk20a struct for the read
     map in the vm.
  Added nvgpu_quiesce() in nvgpu_remove() before freeing the syncpoint
  read map to ensure that nvgpu is idle.

  JIRA GPUT19X-2
  Change-Id: I7cbfec57f0992682dd833a1b0d92d694bcaf1eb3
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1514338
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add API to create nvgpu_mem from phys (seshendra Gadagottu, 2017-10-13)
  Added a new memory API _nvgpu_mem_create_from_phys for creating an
  nvgpu_mem from a physical memory aperture. With this new API, usage of
  the Linux specific "struct page" in general code is avoided and that
  code is moved to common Linux code. This API internally uses
  __nvgpu_mem_create_from_pages for creating the nvgpu_mem from physical
  pages.

  JIRA GPUT19X-2
  Change-Id: Iaf0193a7c33e71422e4ddabde01edf46f5a81794
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1571073
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: ctxsw_trace_tsg/channel_reset call order (Seema Khowala, 2017-10-13)
  For a non-fake mmu fault, both the tsg and ch pointers could be valid.
  If the tsg pointer is non-NULL, issue the ctxsw_trace for the tsg
  instead of for the channel only.

  Change-Id: I161c40e8d43c7ae4d953ef4768ad75d4e993c87e
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1577915
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: WAR: unbind_channel_verify_status ret val (Seema Khowala, 2017-10-13)
  Do not return an error if the channel to be removed has NEXT set. This
  is a WAR until a proper fix is identified and implemented.

  Bug 200327095
  Change-Id: Ia77f3b834e8e577ac2dad8281f1dd562079adcef
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1577133
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: fix channel status verify sequence (Deepak Nibade, 2017-10-13)
  While unbinding a channel from a TSG, we first disable all the
  channels, then examine the status of the channel being removed in
  gk20a_fifo_tsg_unbind_channel_verify_status(); if this API fails we
  re-enable all the channels and kill the whole TSG.

  In gk20a_fifo_tsg_unbind_channel_verify_status() we first check the
  ctx_reload and fault status and then check the NEXT status. If the
  channel has NEXT set we bail out. But since we have already changed the
  TSG ctx_reload status, re-enabling all channels in the TSG might cause
  issues.

  Hence fix this by correcting the sequence so that we first ensure that
  NEXT is not set on the channel and only then alter the status.

  Bug 200327095
  Change-Id: I4f0786bc507fad5462d4cdd8d0ca91ea611ee3b5
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1575905
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: ce: tsg and large vidmem support (David Nieto, 2017-10-13)
  Some GPUs require all channels to be on a TSG and also have vidmem
  sizes larger than 4GB, neither of which was supported in the previous
  CE2 code.

  This change creates a new property to track whether the copy engine
  needs to encapsulate its kernel context in a TSG, and also modifies the
  copy engine code to support much larger copies without dramatically
  increasing the PB size.

  JIRA: EVLR-1990
  Change-Id: Ieb4acba0c787eb96cb9c7cd97f884d2119d445aa
  Signed-off-by: David Nieto <dmartineznie@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1573216
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Nirav Patel <nipatel@nvidia.com>
* gpu: nvgpu: make channel worker wait interruptible (Konsta Holtta, 2017-10-12)
  Change the wait for the work-pending condition to interruptible in the
  channel worker thread, as there's no reason to be noninterruptible. A
  noninterruptible wait, even one with a timeout, causes the worker to be
  printed in the Linux blocked tasks list, which is confusing. Change the
  cond signal to interruptible to match this.

  Change-Id: I71d848b7f449a5d53fecae90c6a450c98c675c7f
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1570166
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: vgpu: fix coverity of vgpu_fifo_update_runlist_locked() (Richard Zhao, 2017-10-12)
  Coverity defect id 2630326: parameter_hidden: declaration hides
  parameter "chid" (declared at line 532).

  Bug 200291879
  Change-Id: I7c7189d692500c5e4df07660cdb14f746ed0e69b
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1576517
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: no hv support for write_sm_error_state (Peter Daifuku, 2017-10-12)
  There is no current need for a virtualized version of
  nvgpu_dbg_gpu_ioctl_write_single_sm_error_state, so return -ENOSYS when
  virtual.

  Bug 200331110
  Change-Id: I2a8cf07b2962de4c91b752b276678bee4eea6e80
  Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1568906
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove SGL reference from mm_gk20a.c (Alex Waterman, 2017-10-11)
  Remove an SGL reference from the mm_gk20a.c code. This code is common
  code and as such all Linuxisms need to be fixed. It just so happens
  that this particular function is only used by the CDE code, which is
  only present in Linux, so simply move this function over to the CDE
  code.

  JIRA NVGPU-30
  JIRA NVGPU-225
  JIRA NVGPU-226
  Change-Id: Ifb0cb427c742c6d9cada382ace4a52f52474c379
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1576436
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Remove phys_addr_t from common code (Alex Waterman, 2017-10-11)
  Remove phys_addr_t from common code and replace it with u64. This
  facilitates QNX compiling the common code since phys_addr_t is a Linux
  specific type.

  JIRA NVGPU-30
  JIRA NVGPU-226
  Change-Id: I15fe2078f9cd0b07c7e90ad6e359c493afa56714
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1576432
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: memset alloced buffers on free (Alex Waterman, 2017-10-10)
  When freeing kmalloc and vmalloc buffers, memset them to zero before
  freeing them with the kernel APIs. This is only done if
  CONFIG_NVGPU_TRACK_MEM_USAGE is set since this adds obvious overhead to
  the driver. However, it is an incredibly useful debug tool, so it's
  nice to have. This could be done by enabling Linux kernel configs as
  well, but not all OSes may have such a feature, so building it into
  nvgpu may prove quite useful.

  Change-Id: I7a6a9a6ab4f3606a73a90b354c5a4a7b9cd4d947
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1575565
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
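In sketch form the free path looks like the following (the helper that
returns the tracked allocation size is hypothetical; the size is only
known when the tracking config is enabled):

    static void nvgpu_kfree_scrubbed(struct gk20a *g, void *addr)
    {
    #ifdef CONFIG_NVGPU_TRACK_MEM_USAGE
            /* Scrub the buffer before handing it back; the size comes from
             * the kmem tracking records (nvgpu_kmem_tracked_size() is a
             * hypothetical name). */
            memset(addr, 0, nvgpu_kmem_tracked_size(g, addr));
    #endif
            kfree(addr);
    }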
* gpu: nvgpu: kmem debug bitrot update (Alex Waterman, 2017-10-10)
  Fix bitrot that was incurred when large amounts of the debugfs stuff
  were moved to the Linux struct. Since the debugfs debugging is largely
  hidden under a config for the kmem code, the necessary changes for the
  kmem debugging were missed until now.

  Change-Id: I105549ce39343e503212e302f39ede36c6ea5194
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1575564
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: report GR_SEMAPHORE_TIMEOUT on PBDMA sema timeout (Konsta Holtta, 2017-10-10)
  As with GR's semaphore acquires that time out, report
  NVGPU_CHANNEL_GR_SEMAPHORE_TIMEOUT to userspace in the error notifier
  also when a semaphore acquire timeout interrupt is received from PBDMA.
  This timeout is used when the kernel watchdog timer is enabled.

  Bug 1782480
  Change-Id: I1ceb8632548c5e89febb2b80a5850116a2d4b670
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1574293
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
* gpu: nvgpu: Declare gk20a_user_*init() in ioctl.h (Terje Bergstrom, 2017-10-10)
  The functions gk20a_user_init() and gk20a_user_deinit() are defined in
  ioctl.c. Move the declarations of these functions to a new header file
  ioctl.h.

  JIRA NVGPU-259
  Change-Id: If348b51e9032083f252a7c7717ed7bc153dbba52
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1569696
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Move gk20a->busy_lock to os_linux (Terje Bergstrom, 2017-10-10)
  gk20a->busy_lock is a Linux specific rw_semaphore used only by Linux
  code. Move it to os_linux.

  JIRA NVGPU-259
  Change-Id: I220a8a080a5050732683b875d3c1d0539ba0f40e
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1569695
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: clean up channel open/release declares (Deepak Nibade, 2017-10-10)
  The below APIs are already declared in ioctl_channel.h, so remove the
  duplicate declarations from channel_gk20a.h:
    gk20a_channel_open()
    gk20a_channel_ioctl()
    gk20a_channel_release()
  Also move the declaration of gk20a_channel_open_ioctl() from
  channel_gk20a.h to ioctl_channel.h.

  Jira NVGPU-259
  Change-Id: I46702ca481e41a19f92f4fe0169f95e31360abe0
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1573106
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add support for pre-os FW (David Nieto, 2017-10-10)
  The pre-OS firmware takes care of, among other things, fan control
  until the driver takes over. On some GPUs, not enabling this firmware
  can lead to physical board damage, hence it is necessary to run it.

  JIRA: NVGPUGV100-9
  Change-Id: I18d54cfd5eb64ecec79c5dae67ac8d5bb1facf36
  Signed-off-by: David Nieto <dmartineznie@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1549035
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add null check for perfbuffer enable and disable dbg ops (Seema Khowala, 2017-10-10)
  This is needed to disable/enable features on new chips.

  Bug 200352825
  Change-Id: I02eb58e6fdd554ed20866fe8a8553a667541f512
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1574481
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: vgpu: set tsg_verify_channel_status to NULL (Peter Daifuku, 2017-10-10)
  vgpu inherits the HAL from the native implementation, but currently the
  tsg_verify_channel_status entry point is not supported on vgpu, so make
  sure to NULL it.

  Bug 200327095
  Change-Id: Ic3604c6ed9bb7a2cddaed8426ce8d291a4adff1a
  Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1573331
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Nirav Patel <nipatel@nvidia.com>
* gpu: nvgpu: Move PRAMIN functions to nvgpu_mem (Terje Bergstrom, 2017-10-10)
  The PRAMIN batch access functions are only used by nvgpu_mem. The way
  the functions are written is Linux specific, so move the implementation
  from the common PRAMIN code.

  JIRA NVGPU-259
  Change-Id: I6e2aba08c98568c651a86fe8ca7f9f5220d67348
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1569697
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Split VIDMEM support from mm_gk20a.c (Alex Waterman, 2017-10-10)
  Split VIDMEM support into its own code files organized as such:

    common/mm/vidmem.c     - Base vidmem support
    common/linux/vidmem.c  - Linux specific user-space interaction
    include/nvgpu/vidmem.h - Vidmem API definitions

  Also use the config to enable/disable VIDMEM support in the makefile
  and remove as many CONFIG_GK20A_VIDMEM preprocessor checks as possible
  from the source code. And lastly update a while-loop that iterated over
  an SGT to use the new for_each construct for iterating over SGTs.

  Currently this organization is not perfectly adhered to. More patches
  will fix that.

  JIRA NVGPU-30
  JIRA NVGPU-138
  Change-Id: Ic0f4d2cf38b65849c7dc350a69b175421477069c
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1540705
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add nvgpu/errno.h (Alex Waterman, 2017-10-10)
  Add an <nvgpu/errno.h> header file to explicitly pull in the -E* error
  codes. This is useful for header files with static inlines that return
  error codes. In actual C code normally enough Linux/QNX headers bleed
  in to get the error codes, but header files with sparse includes do not
  have this luxury.

  Change-Id: I4833c7a6003578b9792bbf6b14cce3bd0adfec22
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1573307
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
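A minimal sketch of what such a wrapper header can look like (the OS
selection logic shown is an assumption, not necessarily what nvgpu
ships):

    /* include/nvgpu/errno.h: make the -E* codes visible to headers that
     * otherwise keep their includes sparse. */
    #ifndef NVGPU_ERRNO_H
    #define NVGPU_ERRNO_H

    #ifdef __KERNEL__
    #include <linux/errno.h>   /* Linux kernel build */
    #else
    #include <errno.h>         /* e.g. QNX / user-space builds */
    #endif

    #endif /* NVGPU_ERRNO_H */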
* gpu: nvgpu: fix double iteration in gmmu update (Konsta Holtta, 2017-10-10)
  __nvgpu_gmmu_do_update_page_table() uses nvgpu_sgt_for_each_sgl to loop
  through the entries of a buffer to be mapped, so when continue is used,
  the sgl entry must not be reassigned again like it was before with a
  pure while-and-update loop. Delete a reassignment to fix a case where
  sgl = sgl->next could happen twice.

  Bug 2002279
  Bug 2001466
  Change-Id: I47c8b853d4b35304740cd4e8a840df02fcd23054
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1575279
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Timo Alho <talho@nvidia.com>
* gpu: nvgpu: falcon: Qualify unsigned HW constants (Mahantesh Kumbar, 2017-10-04)
  Re-generated the Falcon HW headers for gk20a, gm20b, gp10b and gp106 so
  that all unsigned constants are qualified with the postfix U. This
  removes the need for the compiler to do implicit signed->unsigned
  conversions.

  Change-Id: Ifdaac2c697ee7ba8be627e059bf18024a67bbd27
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1570775
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: change logging enum names (Shashank Singh, 2017-10-04)
  Add an nvgpu prefix to the logging enums. In debug mode, QNX and
  Integrity already have a #define DEBUG, which conflicts with the
  logging enum DEBUG.

  Change-Id: I882e566302842f8b79daf11d5f0850dec222cfea
  Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1570193
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: kill TSG if channel has NEXT set while closing (Deepak Nibade, 2017-10-04)
  Currently, if a channel has the NEXT bit set while it is being closed,
  we just print an error and continue the channel unbind sequence from
  the TSG. But since a channel with NEXT set is active, killing it can
  potentially corrupt the TSG context and cause unpredictable errors on
  the remaining channels/TSG.

  Hence fix this by killing the whole TSG context if the channel being
  closed has the NEXT bit set: if gk20a_fifo_tsg_unbind_channel() returns
  an error, kill the TSG, otherwise continue with the channel unbind
  sequence.

  Bug 200327095
  Change-Id: I2abf1a3db8ba6f105b6ca86e78006c7b2a7726cc
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1568566
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: verify channel status while closing per-platform (Deepak Nibade, 2017-10-04)
  We currently call gk20a_fifo_tsg_unbind_channel_verify_status() to
  verify the channel status while unbinding a channel from a TSG during
  close. Add support to do this verification per-platform and keep it
  disabled for vgpu platforms.

  Bug 200327095
  Change-Id: I19fab41c74d10d528d22bd9b3982a4ed73c3b4ca
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1572368
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Combine VIDMEM and SYSMEM map paths (Alex Waterman, 2017-10-04)
  Combine the VIDMEM and SYSMEM GMMU mapping paths into one single
  function. With the proper nvgpu_sgt abstraction in place, the high
  level mapping code can treat these two types of maps identically.

  The benefits of this are immediate: a change to the mapping code will
  have much more testing coverage - the same code runs on both vidmem and
  sysmem. This also makes it easier to make changes to mapping code,
  since the changes will be testable no matter what type of mem is being
  mapped.

  JIRA NVGPU-68
  Change-Id: I308eda03d426966ba8e1d4892c99cf97b45c5993
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1566706
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Add for_each construct for nvgpu_sgts (Alex Waterman, 2017-10-04)
  Add a macro to iterate across nvgpu_sgts. This makes it easier on
  developers who may accidentally forget to move to the next SGL.

  JIRA NVGPU-243
  Change-Id: I90154a5d23f0014cb79bbcd5b6e8d8dbda303820
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1566627
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
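Such a macro is essentially a for loop whose increment step walks the SGL
chain (a sketch; the exact macro name, parameter names and accessor used
in nvgpu are assumptions):

    /* Visit every SGL entry of an nvgpu_sgt. */
    #define nvgpu_sgt_for_each_sgl(sgl, sgt)                        \
            for ((sgl) = (sgt)->sgl;                                \
                 (sgl) != NULL;                                     \
                 (sgl) = nvgpu_sgt_get_next(sgt, sgl))

Because the loop header already advances the entry, a body that does its
own "sgl = next" before a continue would skip entries - exactly the
double-iteration issue fixed in the gmmu update commit above.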
* gpu: nvgpu: skip clk gating prog for sim/emu. (Deepak Goyal, 2017-10-04)
  For simulation/emulation platforms, clock gating should be skipped as
  it is not supported. Added new "can_<x>lcg" flags to check the platform
  capability before doing SLCG, BLCG and ELCG.

  Bug 200314250
  Change-Id: I4124d444a77a4c06df8c1d82c6038bfd457f3db0
  Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1566049
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: rename ops.mm.get_physical_addr_bits (Alex Waterman, 2017-10-04)
  Rename get_physical_addr_bits and related functions to something that
  more clearly conveys what they are doing. The basic idea of these
  functions is to translate from a physical GPU address to an IOMMU GPU
  address. To do that, a particular bit (which varies from chip to chip)
  is added to the physical address.

  JIRA NVGPU-68
  Change-Id: I536cc595c4397aad69a24f740bc74db03f52bc0a
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1542966
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add common vaddr translate function (Alex Waterman, 2017-10-04)
  Add a function to do address translation for IOMMU capable GPUs. When
  an iGPU is behind an IOMMU, it can pick whether to use that IOMMU for
  translation by adding a bit to physical addresses. This function takes
  care of that.

  However, this required an abstracted nvgpu_iommuable() API to check
  whether a GPU is behind an IOMMU. This patch adds that API for Linux.

  JIRA NVGPU-68
  Change-Id: I489d14475167c019c294407372395df78c8b5feb
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1542965
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
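The translation boils down to setting the chip-specific IOMMU bit
whenever the GPU actually sits behind an IOMMU (a sketch; apart from
nvgpu_iommuable(), which the commit names, the function and field names
here are hypothetical):

    static u64 phys_to_gpu_iommu_addr(struct gk20a *g, u64 phys)
    {
            /* No IOMMU in front of the GPU: use the physical address as-is. */
            if (!nvgpu_iommuable(g))
                    return phys;

            /* Set the per-chip bit that routes the access through the
             * IOMMU (the iommu_bit field name is hypothetical). */
            return phys | (1ULL << g->mm.iommu_bit);
    }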