path: root/drivers/gpu/nvgpu/include
Commit message    Author    Age
* gpu: nvgpu: spew err for pbus interrupt    (Seema Khowala, 2017-11-08)

    Spew err message for pri_squash, fecserr and pri_timeout pbus interrupts.
    If FECS_TGT is set in timeout_save_0, the addr and write fields are not
    reliable, and timeout_save_1 is also unreliable. Both squash and timeout
    should have correct data most of the time. Even for FECS_TGT, a timeout
    for a read should indicate the correct transaction, as Host only supports
    one read at a time; it is mostly writes to FECS that carry potentially
    incorrect information.

    Bug 200246808
    Bug 200350539

    Change-Id: I8a992d924ff6c740a8dacecaaaf4ef257756d01d
    Signed-off-by: Seema Khowala <seemaj@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1568860
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Introduce queries for big page sizes    (Terje Bergstrom, 2017-11-08)

    Introduce query functions for default big page size and available big
    page sizes. Move initialization of GPU characteristics big page sizes
    to the GPU characteristics query function.

    JIRA NVGPU-259

    Change-Id: Ie66cc2fbfcd88205593056f8d5010ac2539c8bc2
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1593685
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
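A minimal sketch of the kind of query this commit describes, not the actual nvgpu code: the HAL member names (get_default_big_page_size, get_big_page_sizes) and the characteristics field names are assumptions used only for illustration.

    /* Hedged sketch: fill the characteristics from HAL queries (names assumed). */
    static void nvgpu_fill_big_page_characteristics(struct gk20a *g,
                    struct nvgpu_gpu_characteristics *gpu)
    {
            gpu->big_page_size = 0;
            gpu->available_big_page_sizes = 0;

            if (g->ops.mm.get_default_big_page_size != NULL)
                    gpu->big_page_size = g->ops.mm.get_default_big_page_size();

            if (g->ops.mm.get_big_page_sizes != NULL)
                    gpu->available_big_page_sizes = g->ops.mm.get_big_page_sizes();
    }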
* gpu: nvgpu: add RMOS include path for rwsem.h    (Sourab Gupta, 2017-11-06)

    The patch adds the include path for the rmos header file pertaining to
    rwsem.h.

    Change-Id: I7f9cfac38e971e1d78e76968911df72669598b9d
    Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1591108
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Shashank Singh <shashsingh@nvidia.com>
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: support tuning per-ch deterministic opts    (Konsta Holtta, 2017-11-06)

    Add a new ioctl NVGPU_GPU_IOCTL_SET_DETERMINISTIC_OPTS to adjust
    deterministic options on a per-channel basis. Currently, the only
    supported option is to relax the no-railgating requirement on open
    deterministic channels. This also disallows submits on such channels
    until the railgate option is reset.

    Bug 200327089

    Change-Id: If4f0f51fd1d40ad7407d13638150d7402479aff0
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1554563
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
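A hedged user-space sketch of driving such an ioctl. The args layout and the zero request code below are placeholders for illustration only; the real definitions live in the nvgpu uapi header and may differ.

    #include <stdint.h>
    #include <sys/ioctl.h>

    /* Placeholder: the real request code comes from the nvgpu uapi header. */
    #ifndef NVGPU_GPU_IOCTL_SET_DETERMINISTIC_OPTS
    #define NVGPU_GPU_IOCTL_SET_DETERMINISTIC_OPTS 0    /* illustration only */
    #endif

    struct set_deterministic_opts_args {    /* assumed layout */
            uint32_t num_channels;
            uint32_t reserved;
            uint64_t channels;    /* in: pointer to per-channel option records */
    };

    static int set_deterministic_opts(int ctrl_fd, void *opts, uint32_t n)
    {
            struct set_deterministic_opts_args args = {
                    .num_channels = n,
                    .channels = (uintptr_t)opts,
            };

            return ioctl(ctrl_fd, NVGPU_GPU_IOCTL_SET_DETERMINISTIC_OPTS, &args);
    }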
* gpu: nvgpu: Return error code properly from nvgpu_vm_map_linux    (Sami Kiminki, 2017-11-03)

    The function nvgpu_vm_map_linux() used to return the GPU VA on a
    successful map or 0 when things didn't go smoothly. However, this scheme
    does not propagate the actual map error back to userspace. So, modify
    the function a bit: return an error code, and return the GPU VA via a
    pointer on success.

    Bug 1705731

    Change-Id: I2174b5fbaf64dcb00f9567dab1c583d6ddfa5d78
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1590961
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
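A hedged sketch of the new calling convention. The parameter list is trimmed and map_the_buffer() is a placeholder helper, not a real nvgpu function; only the error/GPU-VA handling is the point.

    int nvgpu_vm_map_linux(struct vm_gk20a *vm, struct dma_buf *dmabuf,
                           u32 flags, u64 *gpu_va /* out */)
    {
            u64 map_addr = 0;
            int err;

            err = map_the_buffer(vm, dmabuf, flags, &map_addr);    /* placeholder */
            if (err != 0)
                    return err;    /* the real map error now reaches the caller */

            *gpu_va = map_addr;    /* GPU VA handed back via the pointer */
            return 0;
    }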
* Revert "gpu: nvgpu: fix patch buf count update for vidmem"Timo Alho2017-11-03
| | | | | | | | | | | | | | This reverts commit de399ccb0019513a5f9e8f2bcadb02486f99bc80. Bug 2012077 Change-Id: Ie608c3b41aa91f9aaed3fad240ed882a0c6f1ea2 Signed-off-by: Timo Alho <talho@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1591423 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com> GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: fix patch buf count update for vidmem    (Peter Daifuku, 2017-11-03)

    gr_gk20a_ctx_patch_write_begin() updates the patch buffer data_count
    when the associated graphics context memory buffer has been CPU-mapped;
    it was doing so by looking for a non-null cpu_va. However, if the
    graphics context has been allocated from vidmem, cpu_va is always 0, so
    we can't tell if nvgpu_mem_begin() was called for the context buffer or
    not. Instead:

    - add a cpu_accessible flag to the nvgpu_mem struct and set it in
      nvgpu_mem_begin()
    - return the value of that flag in nvgpu_mem_cpu_accessible()
    - gr_gk20a_ctx_patch_write_begin() now calls this new function instead
      of checking cpu_va

    Bug 2012077
    JIRA ESRM-74

    Change-Id: I8401699f30b4ae7154111721c25c7ec3ff95d329
    Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1587293
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
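A hedged sketch of the scheme described above. The field and helper names follow the commit message; the surrounding bodies and the nvgpu_mem_begin() details are illustrative only.

    int nvgpu_mem_begin(struct gk20a *g, struct nvgpu_mem *mem)
    {
            /* ... map sysmem via vmap(), or set up PRAMIN access for vidmem ... */
            mem->cpu_accessible = true;    /* set even when cpu_va stays NULL */
            return 0;
    }

    bool nvgpu_mem_cpu_accessible(struct nvgpu_mem *mem)
    {
            return mem->cpu_accessible;
    }

    /*
     * gr_gk20a_ctx_patch_write_begin() can now test
     * nvgpu_mem_cpu_accessible(&ch_ctx->patch_ctx.mem) instead of cpu_va,
     * which works for vidmem-backed contexts as well.
     */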
* gpu: nvgpu: Use nvgpu_is_enabled() for ASPM    (Terje Bergstrom, 2017-11-02)

    Convert disable_aspm and references to that field to use
    nvgpu_is_enabled(NVGPU_SUPPORT_ASPM). Initialize it from the
    gk20a_platform struct at probe time. This removes another dependency on
    struct gk20a_platform.

    JIRA NVGPU-259

    Change-Id: I32e30160f817ea275aa190dcf86c5fd594138d75
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1590124
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
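A hedged sketch of the flag usage described above, using the nvgpu enabled-flag helpers; the probe-time wiring shown here is illustrative, not the actual patch.

    /* At probe time, seed the flag from the platform data (sketch). */
    static void nvgpu_init_aspm_flag(struct gk20a *g,
                                     struct gk20a_platform *platform)
    {
            __nvgpu_set_enabled(g, NVGPU_SUPPORT_ASPM, !platform->disable_aspm);
    }

    /* Common code then asks the flag instead of touching gk20a_platform. */
    static bool nvgpu_aspm_allowed(struct gk20a *g)
    {
            return nvgpu_is_enabled(g, NVGPU_SUPPORT_ASPM);
    }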
* gpu: nvgpu: Split ctxsw_trace API into non-Linux component    (Alex Waterman, 2017-11-01)

    Split the ctxsw trace "core" API code into <nvgpu/ctxsw_trace.h>. This
    is not perfect, though, since there are some Linuxisms present in the
    HAL, and as such that code has to be hidden by the ctxsw tracing CONFIG.
    But this patch should work for QNX in that it will allow the code to
    build as long as CONFIG_GK20A_CTXSW_TRACE is not set.

    Also fix the copyright notice in the ctxsw code present under
    common/linux to be GPL.

    JIRA NVGPU-287

    Change-Id: I94715864caf335b7220185492e4629d713b025e0
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1589429
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove buffer_attrs struct    (Alex Waterman, 2017-11-01)

    Remove the buffer_attrs struct and replace it with a more streamlined
    nvgpu_ctag_buffer_info struct. This struct allows several different
    fields to all be passed by pointer to the various kind/compression
    functions in the VM map process.

    This patch also moves several comptag/kind related functions to the core
    vm.c code since these functions can be reused by other OSes.

    Change-Id: I2a0f0a1c4b554ce4c8f2acdbe3161392e717d3bf
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1583984
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove user_mapped from mapped_buf    (Alex Waterman, 2017-11-01)

    Remove the always-true field 'user_mapped' from the mapped_buf struct.
    All mapped_bufs are mapped by a user request since they always originate
    from a dma_buf (for Linux, that is). As such there is a fair amount of
    logic that could be deleted.

    Linux specific: the own_mem_ref field was also deleted. The logic of
    only storing a dma_buf ref when the buffer is mapped for the first time
    by a user is easy: when the mapped buffer is found in the map cache,
    release the outstanding dma_buf ref taken earlier on in the map path. If
    the map cache does not have the buffer, simply let the higher level map
    code keep the dma_buf ref. The dma_buf ref is released when the
    nvgpu_vm_unmap_system() call-back is called by the unmap path.

    JIRA NVGPU-30
    JIRA NVGPU-71

    Change-Id: I229d136713812a7332bdadd5ebacd85d983bbbf0
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1583983
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: VM unmap refactoring    (Alex Waterman, 2017-11-01)

    Re-organize the unmap code to be better split between OS specific
    requirements and common core requirements. The new code flow works as
    follows:

    nvgpu_vm_unmap() is the primary entrance to the unmap path. It takes a
    VM and a GPU virtual address to unmap. There's also an optional batch
    mapping struct. This function is responsible for making sure there is a
    real buffer and that, if it's being called on a fixed mapping, the
    mapping will definitely be freed (since buffers are ref-counted). Then
    this function decrements the ref-count and returns.

    If the ref-count hits zero then __nvgpu_vm_unmap_ref() is called, which
    just calls __nvgpu_vm_unmap() with the relevant batch struct if present.
    This is where the real work is done. __nvgpu_vm_unmap() clears the GMMU
    mapping, removes the mapped buffer from the various lists and trees it
    may be in, and then calls the nvgpu_vm_unmap_system() function. This
    function handles any OS specific stuff and must be defined by all VM OS
    implementations.

    There's a shortcut used by some other core VM code to free mappings
    without going through nvgpu_vm_map(). Mostly they just directly
    decrement the mapping ref-count, which can then call
    __nvgpu_vm_unmap_ref() if the ref-count hits zero.

    JIRA NVGPU-30
    JIRA NVGPU-71

    Change-Id: Ic626d37ab936819841bab45214f027b40ffa4e5a
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1583982
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
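A hedged sketch of the entry point in that flow. Locking and batch handling are omitted, and the lookup/ref-count helper names (__nvgpu_vm_find_mapped_buf, nvgpu_ref_put) are assumptions for illustration.

    void nvgpu_vm_unmap(struct vm_gk20a *vm, u64 offset,
                        struct vm_gk20a_mapping_batch *batch)
    {
            struct nvgpu_mapped_buf *mapped_buffer;

            mapped_buffer = __nvgpu_vm_find_mapped_buf(vm, offset);
            if (mapped_buffer == NULL)
                    return;    /* nothing mapped at this GPU VA */

            /*
             * Drop one reference.  When the count hits zero the release hook
             * __nvgpu_vm_unmap_ref() runs, which calls __nvgpu_vm_unmap():
             * clear the GMMU entry, unlink the buffer from the VM's lists
             * and trees, then hand off to the OS hook nvgpu_vm_unmap_system().
             */
            nvgpu_ref_put(&mapped_buffer->ref, __nvgpu_vm_unmap_ref);
    }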
* gpu: nvgpu: fix pte location functions    (David Nieto, 2017-11-01)

    Modify the recursive loop in pte_find to make sure it is targeting the
    proper pde page size.

    JIRA NVGPUGV100-36

    Change-Id: Ib3673d8d9f1bd3c907d532f9e2562ecdc5dda4af
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1586739
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove Linux headers from mm_gk20a.h    (Alex Waterman, 2017-10-27)

    Delete the Linux headers and make some modifications to get rid of the
    minor compilation issues that resulted:

    - Add <linux/iommu.h> to os_linux.h
    - Delete #if 0 code that "flushed" a buffer in gr_gk20a.c
    - Delete the FLUSH_CPU_DCACHE() macro
    - Move the cache flush definitions to <nvgpu/linux/vm.h> and include
      this header in sim_gk20a.c. This file will not be used by QNX so this
      should be fine.
    - Add <linux/pci_ids.h> to gp106/bios_gp106.c and gp106/mclk_gp106.c.
    - Move function to common/linux/dmabuf.h since it is a dmabuf related
      function and uses a struct device pointer as an argument.

    JIRA NVGPU-30

    Change-Id: I11f56b98524c7fac3efa91b4686592130e5f8a46
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1585510
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Linux specific GPU characteristics flags    (Terje Bergstrom, 2017-10-26)

    Make GPU characteristics flags specific to Linux code only. The rest of
    the driver is moved to using the nvgpu_is_enabled() API.

    JIRA NVGPU-259

    Change-Id: I2faf46ef64c964361c267887b28c9d19806d6d51
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1583876
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: move clk_arb to linux specific    (Mahantesh Kumbar, 2017-10-25)

    - The clock arbiter has a lot of Linux-dependent code, so move clk_arb.c
      to the common/linux folder and clk_arb.h to include/nvgpu/clk_arb.h;
      this move helps to unblock QNX.
    - QNX must implement the functions present under clk_arb.h as needed.

    JIRA NVGPU-33

    Change-Id: I38369fafda9c2cb9ba2175b3e530e40d0c746601
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1582473
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Begin reorganizing VM mapping/unmapping    (Alex Waterman, 2017-10-24)

    Move vm_priv.h to <nvgpu/linux/vm.h> and rename nvgpu_vm_map() to
    nvgpu_vm_map_linux(). Also remove a redundant unmap function from the
    unmap path. These changes are the beginning of reworking the nvgpu Linux
    mapping and unmapping code. The rest of this patch is just the necessary
    changes to use the new map function naming and the new path to the Linux
    vm header.

    Patch Series Goal
    -----------------

    There are two major goals for this patch series. Note that these goals
    are not achieved in this patch; there will be subsequent patches.

    1. Remove all last vestiges of Linux code from common/mm/vm.c
    2. Implement map caching in the common/mm/vm.c code

    To accomplish this, firstly the VM mapping code needs to have the struct
    nvgpu_mapped_buf data struct be completely Linux free. That means
    implementing an abstraction for it to hold the Linux stuff that mapped
    buffers carry around (SGT, dma_buf). This is why the vm_priv.h code has
    been moved: it will need to be included by the <nvgpu/vm.h> header so
    that the OS specific struct can be pulled into struct nvgpu_mapped_buf.

    Next, renaming nvgpu_vm_map() to nvgpu_vm_map_linux() is in preparation
    for adding a new nvgpu_vm_map() that handles the map caching with
    nvgpu_mapped_buf. The mapping code is fairly straightforward:
    nvgpu_vm_map() does OS generic stuff; each OS then calls this function
    from an nvgpu_vm_map_<OS>() or the like that does any OS specific
    adjustments/management.

    Freeing buffers is much more tricky, however. The maps are all reference
    counted since userspace does not track buffers and expects us to handle
    this instead. Ugh! Since there are ref-counts, the free code will
    require a callback into the OS specific code because the OS specific
    code cannot free a buffer directly. This makes the path for freeing a
    buffer quite convoluted.

    JIRA NVGPU-30
    JIRA NVGPU-71

    Change-Id: I5e0975f60663a0d6cf0a6bd90e099f51e02c2395
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1578896
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: David Martinez Nieto <dmartineznie@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Cleanup generic MM code in gk20a/mm_gk20a.c    (Alex Waterman, 2017-10-24)

    Move much of the remaining generic MM code to a new common location:
    common/mm/mm.c. Also add a corresponding <nvgpu/mm.h> header. This
    mostly consists of init and cleanup code to handle the common MM data
    structures like the VIDMEM code, address spaces for various engines,
    etc.

    A few more in-depth changes were made as well.

    1. alloc_inst_block() has been added to the MM HAL. This used to be
       defined directly in the gk20a code but it used a register. As a
       result, if this register hypothetically changes in the future, it
       would need to become a HAL anyway. This patch preempts that and for
       now just defines all HALs to use the gk20a version.
    2. Rename as much as possible: global functions are, for the most part,
       prepended with nvgpu (there are a few exceptions which I have yet to
       decide what to do with). Functions that are static are renamed to be
       as consistent with their functionality as possible since in some
       cases function effect and function name have diverged.

    JIRA NVGPU-30

    Change-Id: Ic948f1ecc2f7976eba4bb7169a44b7226bb7c0b5
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1574499
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: enhance class error debug info    (seshendra Gadagottu, 2017-10-23)

    Updated gk20a_gr_handle_class_error with sub-channel info and MME
    related info. Also printing the correct method info from
    isr_data->offset by left shifting it by 2.

    Generated the following hw definitions for gk20a/gm20b/gp10b/gp106 to
    dump relevant data in gk20a_gr_handle_class_error:

    gr_trapped_addr_mme_generated_v
    gr_trapped_addr_datahigh_v
    gr_trapped_addr_priv_v
    gr_trapped_data_lo_r
    gr_trapped_data_mme_r
    gr_trapped_data_mme_pc_v

    Bug 2003671

    Change-Id: I02e15ef16d7498b6a7dc2af547a14e84d570e8a7
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1574061
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: falcon interface/HAL update    (Mahantesh Kumbar, 2017-10-21)

    - Add methods to read/write falcon mailbox at interface layer
    - Created falcon mailbox read/write HAL
    - Added HAL methods to read/write mailbox
    - Added macro to get next block based on address
    - Added macro to get IMEM tag using IMEM address
    - Added ucode header format

    Change-Id: I879b1df4538d403cac40fd4ed6e723190f62922c
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    (cherry picked from commit 30e8b76a7be9d9e6d8225bdc08e441f408692f63)
    Reviewed-on: https://git-master.nvidia.com/r/1509469
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
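A hedged sketch of how such an interface-layer/HAL split typically looks. The nvgpu_flcn_mailbox_* wrappers follow the commit's intent, but the ops member names are assumptions for illustration.

    u32 nvgpu_flcn_mailbox_read(struct nvgpu_falcon *flcn, u32 mailbox_index)
    {
            /* Interface layer delegates to the per-chip HAL implementation. */
            return flcn->flcn_ops.mailbox_read(flcn, mailbox_index);
    }

    void nvgpu_flcn_mailbox_write(struct nvgpu_falcon *flcn, u32 mailbox_index,
                                  u32 data)
    {
            flcn->flcn_ops.mailbox_write(flcn, mailbox_index, data);
    }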
* gpu: nvgpu: Add nvdec falcon support    (Mahantesh Kumbar, 2017-10-21)

    - Added "nvgpu_falcon nvdec_flcn" member to gk20a
    - Added base address & falcon id of the NVDEC falcon
    - Included the nvdec falcon to access common falcon code
    - Enabled nvdec falcon support for GP106
    - Disabled nvdec falcon support for iGPU
    - Made call to enable nvdec falcon support if supported

    Change-Id: Ia928d082275a720e4e8c6852384e489c8ec444f8
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    (cherry picked from commit 3d80aeff295bad8365af6022555ad151f1a32cf6)
    Reviewed-on: https://git-master.nvidia.com/r/1564305
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add VIDMEM debugging    (Alex Waterman, 2017-10-20)

    Add some VIDMEM debugging to help track the background free thread and
    allocs/frees.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: I88471b29d2a42c104666b111d0d3014110c9d56c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1576330
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Convert VIDMEM work_struct to thread    (Alex Waterman, 2017-10-20)

    Convert the work_struct used by the vidmem background clearing to a
    thread to make it more cross-platform. The thread waits on a condition
    variable to determine when work needs to be done. The signal comes from
    the DMA API when it enqueues a new nvgpu_mem that needs clearing.

    Add logic for handling suspend: the CE cannot be accessed while the GPU
    is suspended. As such the background thread must be paused while the GPU
    is suspended and the CE is not available.

    Several other changes were also made:

    o Move the code that enqueues an nvgpu_mem from the DMA API code to a
      function in the VIDMEM code.
    o Move nvgpu_vidmem_get_pending_alloc() to the Linux specific code as
      this function is only used there. It's a trivial function that QNX can
      easily implement as well.
    o Remove the was_empty logic from the enqueue. Now just always signal
      the condition variable when a new nvgpu_mem comes in.
    o Move CE suspend to after MM suspend.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: Ie9286ae5a127c3fced86dfb9794e7d81eab0491c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1574498
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
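A hedged sketch of the thread body implied above, written against the nvgpu thread/cond wrappers. The vidmem field names and the clear/dequeue helpers are assumptions for illustration only.

    static int nvgpu_vidmem_clear_pending_allocs_thr(void *priv)
    {
            struct mm_gk20a *mm = priv;
            struct nvgpu_mem *mem;

            while (!nvgpu_thread_should_stop(&mm->vidmem.clearing_thread)) {
                    /* Sleep until the DMA path signals newly queued work. */
                    NVGPU_COND_WAIT(&mm->vidmem.clearing_thread_cond,
                                    !nvgpu_list_empty(&mm->vidmem.clear_list_head) ||
                                    nvgpu_thread_should_stop(&mm->vidmem.clearing_thread),
                                    0);

                    /* Drain the queue, clearing each buffer via the CE. */
                    while ((mem = nvgpu_vidmem_get_pending_alloc(mm)) != NULL)
                            nvgpu_vidmem_clear_mem_worker(mem);    /* assumed helper */
            }

            return 0;
    }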
* gpu: nvgpu: Refactoring nvgpu_vm functions    (Alex Waterman, 2017-10-18)

    Refactor the last nvgpu_vm functions from the mm_gk20a.c code. This
    removes some usages of dma_buf from the mm_gk20a.c code, too, which
    helps make mm_gk20a.c less Linux specific.

    Also delete some header files that are no longer necessary in
    gk20a/mm_gk20a.c which are Linux specific. The mm_gk20a.c code is now
    quite close to being Linux free.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: I72b370bd85a7b029768b0fb4827d6abba42007c3
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1566629
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Move dma_buf usage from mm_gk20a.c    (Alex Waterman, 2017-10-18)

    Move most of the dma_buf usage present in the mm_gk20a.c code out to
    Linux specific code and some common/mm code. There are two primary
    groups of code:

    1. dma_buf priv field code (for holding comptag data)
    2. Comptag usage that relies on dma_buf pointers

    For (1) the dma_buf code was simply moved to common/linux/dmabuf.c since
    most of this code is clearly Linux specific.

    The comptag code was a bit more complicated since there are two parts to
    it. Firstly there's the code that manages the comptag memory; this is
    essentially a simple allocator. It was moved to common/mm/comptags.c
    since it can be shared across all chips. The second set of code is moved
    to common/linux/comptags.c since it is the interface between dma_bufs
    and the comptag memory.

    Two other fixes were done as well:

    - Add struct gk20a to the comptag allocator init so that the proper
      nvgpu_vzalloc() function could be used.
    - Add necessary includes to common/linux/vm_priv.h.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: I96c57f2763e5ebe18a2f2ee4b33e0e1a2597848c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1566628
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* nvgpu: fix multiple build issues for QNX    (Shashank Singh, 2017-10-17)

    - The timers and bug header files should be included directly. Linux may
      be getting them via indirect includes. Also, QNX requires non-static
      functions to be declared explicitly.

    Change-Id: I2458654f535d8079347e4a0be744530f56388238
    Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1577527
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
    Tested-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: compile nvgpu allocator for QNX    (Sourab Gupta, 2017-10-17)

    The patch has the changes for compilation of the common nvgpu allocator
    for QNX. This includes some cross-OS compilation changes and removing
    some Linux'isms from the allocator.

    Change-Id: Ib1ecceec77b497513a196597bff4441615577548
    Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1540306
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Abstract rw_semaphore implementation    (Terje Bergstrom, 2017-10-14)

    Abstract the implementation of rw_semaphore. In Linux it's implemented
    in terms of rw_semaphore. Change deterministic_busy to use the new
    implementation.

    JIRA NVGPU-259

    Change-Id: Ia9c1b6e397581bff7711c5ab6fb76ef6d23cff87
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1570405
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
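A minimal sketch of what the Linux backing for such an abstraction looks like: the wrapper type simply embeds the kernel primitive and forwards to it. The wrapper names are assumptions consistent with the commit's intent.

    #include <linux/rwsem.h>

    struct nvgpu_rwsem {
            struct rw_semaphore rwsem;
    };

    static inline void nvgpu_rwsem_down_read(struct nvgpu_rwsem *rwsem)
    {
            down_read(&rwsem->rwsem);
    }

    static inline void nvgpu_rwsem_up_read(struct nvgpu_rwsem *rwsem)
    {
            up_read(&rwsem->rwsem);
    }

Other OSes (QNX and friends) provide their own struct layout and implementations behind the same function names, which is what makes deterministic_busy portable.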
* gpu: nvgpu: include nvgpu/types in pramin.h    (Deepak Nibade, 2017-10-13)

    In include/nvgpu/pramin.h, we currently include the Linux specific
    header linux/types.h. But since this is a common header, we should
    include nvgpu/types.h, which is generic and Linux independent.

    Jira NVGPU-259

    Change-Id: I0acff92b3de9431e615faa6de26306cccd5ca443
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1576930
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: enhance pbdma debug info    (seshendra Gadagottu, 2017-10-13)

    Enhanced pbdma error output to print the pbdma interrupt error.

    Generated the following hw definitions to dump relevant data:

    pbdma_gp_shadow_0_r
    pbdma_gp_shadow_1_r

    Updated gk20a_dump_pbdma_status to dump this additional info:

    pbdma_gp_put_r
    pbdma_gp_get_r
    pbdma_gp_shadow_0_r
    pbdma_gp_shadow_1_r

    Bug 2003671

    Change-Id: Iaa75d936e00470a2b8d1151f60dbeb741b3f9bce
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1572182
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Abstract IO aperture accessors    (Terje Bergstrom, 2017-10-13)

    Add an abstraction of IO aperture accessors. Add new functions
    gk20a_io_exists() and gk20a_io_valid_reg() to remove dependencies on
    aperture fields from common code.

    Implement the Linux version of the abstraction by moving gk20a_readl()
    and gk20a_writel() to a new Linux specific io.c. Move the fields
    defining the IO aperture to nvgpu_os_linux. Add t19x specific IO
    aperture initialization functions and add a t19x specific section to
    nvgpu_os_linux.

    JIRA NVGPU-259

    Change-Id: I09e79cda60d11a20d1099a9aaa6d2375236e94ce
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1569698
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
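A hedged sketch of the Linux side of this abstraction, with the aperture living in nvgpu_os_linux; the regs field name and the simplified bodies are assumptions for illustration.

    void gk20a_writel(struct gk20a *g, u32 r, u32 v)
    {
            struct nvgpu_os_linux *l = nvgpu_os_linux_from_gk20a(g);

            if (l->regs != NULL)
                    writel_relaxed(v, l->regs + r);
    }

    u32 gk20a_readl(struct gk20a *g, u32 r)
    {
            struct nvgpu_os_linux *l = nvgpu_os_linux_from_gk20a(g);

            return (l->regs != NULL) ? readl(l->regs + r) : 0;
    }

    bool gk20a_io_exists(struct gk20a *g)
    {
            /* Common code asks this instead of poking at Linux-only fields. */
            return nvgpu_os_linux_from_gk20a(g)->regs != NULL;
    }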
* gpu: nvgpu: Reduce usage of nvgpu_vidmem_get_page_alloc    (Alex Waterman, 2017-10-13)

    Reduce the usage of nvgpu_vidmem_get_page_alloc() and friends as much as
    possible. This reduces the dependency of nvgpu on Linux SGLs. SGLs still
    need to be used, however, since sharing buffers in userspace is done by
    dma_buf FD. The best way to pass the vidmem buf through the dma_buf is
    by SGL pointer.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: Ide0e9e5a557f00aa63b063be085042101a5b34ee
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1540709
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Separate vidmem alloc from Linux code    (Alex Waterman, 2017-10-13)

    Split the core vidmem allocation from the Linux component of vidmem
    allocation. The core vidmem allocation allocates the nvgpu_mem struct
    that defines the vidmem buffer in the core MM code. The Linux code now
    allocates some Linux specific stuff (dma_buf, etc) and also allocates
    the core vidmem buf.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: I88e87e0abd5ec714610eacc6eac17e148bcee3ce
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1540708
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Rename vidmem APIs    (Alex Waterman, 2017-10-13)

    Rename the VIDMEM APIs to be prefixed by nvgpu_ to ensure consistency
    and that all the non-static vidmem functions are properly namespaced.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: I9986ee8f2c8f95a4b7c5e2b9607bc1e77933ccfc
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1540707
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add syncpoint read map    (seshendra Gadagottu, 2017-10-13)

    For the sync-point read map:

    1. Added nvgpu_mem memory allocator in the gk20a struct and allocated
       memory for this in gk20a_finalize_poweron() and freed this memory in
       gk20a_remove().
    2. Added "u64 syncpt_ro_map_gpu_va" in the vm_gk20a struct for the read
       map in the vm.

    Added nvgpu_quiesce() in nvgpu_remove() before freeing the syncpoint
    read map to ensure that nvgpu is idle.

    JIRA GPUT19X-2

    Change-Id: I7cbfec57f0992682dd833a1b0d92d694bcaf1eb3
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1514338
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add API to create nvgpu_mem from phys    (seshendra Gadagottu, 2017-10-13)

    Added a new memory API _nvgpu_mem_create_from_phys for creating an
    nvgpu_mem from a physical memory aperture. With this new API, avoided
    usage of the Linux specific "struct page" in general code and moved this
    code to common linux code. This API internally uses
    __nvgpu_mem_create_from_pages for creating nvgpu_mem from physical
    pages.

    JIRA GPUT19X-2

    Change-Id: Iaf0193a7c33e71422e4ddabde01edf46f5a81794
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1571073
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: ce: tsg and large vidmem support    (David Nieto, 2017-10-13)

    Some GPUs require all channels to be on a TSG and also have larger than
    4GB vidmem sizes, which were not supported in the previous CE2 code.

    This change creates a new property to track whether the copy engine
    needs to encapsulate its kernel context in a TSG, and also modifies the
    copy engine code to support much larger copies without dramatically
    increasing the PB size.

    JIRA: EVLR-1990

    Change-Id: Ieb4acba0c787eb96cb9c7cd97f884d2119d445aa
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1573216
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Nirav Patel <nipatel@nvidia.com>
* gpu: nvgpu: Move PRAMIN functions to nvgpu_mem    (Terje Bergstrom, 2017-10-10)

    PRAMIN batch access functions are only used by nvgpu_mem. The way the
    functions are written is Linux specific, so move the implementation from
    the common PRAMIN code.

    JIRA NVGPU-259

    Change-Id: I6e2aba08c98568c651a86fe8ca7f9f5220d67348
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1569697
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Split VIDMEM support from mm_gk20a.c    (Alex Waterman, 2017-10-10)

    Split VIDMEM support into its own code files organized as such:

    common/mm/vidmem.c     - Base vidmem support
    common/linux/vidmem.c  - Linux specific user-space interaction
    include/nvgpu/vidmem.h - Vidmem API definitions

    Also use the config to enable/disable VIDMEM support in the makefile and
    remove as many CONFIG_GK20A_VIDMEM preprocessor checks as possible from
    the source code. And lastly update a while-loop that iterated over an
    SGT to use the new for_each construct for iterating over SGTs.

    Currently this organization is not perfectly adhered to. More patches
    will fix that.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: Ic0f4d2cf38b65849c7dc350a69b175421477069c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1540705
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add nvgpu/errno.h    (Alex Waterman, 2017-10-10)

    Add an <nvgpu/errno.h> header file to explicitly include the -E* error
    codes. Useful for header files with static inlines that return error
    codes. In actual C code normally enough Linux/QNX headers bleed in to
    provide the error codes, but header files with sparse includes do not
    have this luxury.

    Change-Id: I4833c7a6003578b9792bbf6b14cce3bd0adfec22
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1573307
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
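A hedged sketch of what such a header boils down to: pull in the OS's errno definitions so -EINVAL and friends resolve even in headers with very sparse includes. The non-kernel branch is an assumption for illustration.

    #ifndef __NVGPU_ERRNO_H__
    #define __NVGPU_ERRNO_H__

    #ifdef __KERNEL__
    #include <linux/errno.h>
    #else
    #include <errno.h>    /* QNX / user-space builds; assumed for illustration */
    #endif

    #endif /* __NVGPU_ERRNO_H__ */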
* gpu: nvgpu: falcon: Qualify unsigned HW constants    (Mahantesh Kumbar, 2017-10-04)

    - Falcon HW header re-generate for gk20a, gm20b, gp10b & gp106.
    - Re-generate hardware headers so that all unsigned constants are
      qualified with postfix U. This removes the need for compiler to do
      implicit signed->unsigned conversions.

    Change-Id: Ifdaac2c697ee7ba8be627e059bf18024a67bbd27
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1570775
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: change logging enum names    (Shashank Singh, 2017-10-04)

    Add the nvgpu prefix to logging enums. In debug mode, QNX and Integrity
    already have a #define DEBUG, which conflicts with the logging enum
    DEBUG.

    Change-Id: I882e566302842f8b79daf11d5f0850dec222cfea
    Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1570193
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Add for_each construct for nvgpu_sgts    (Alex Waterman, 2017-10-04)

    Add a macro to iterate across nvgpu_sgts. This makes it easier on
    developers who may accidentally forget to move to the next SGL.

    JIRA NVGPU-243

    Change-Id: I90154a5d23f0014cb79bbcd5b6e8d8dbda303820
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1566627
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
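A hedged sketch of such an iteration helper. The macro and helper names (nvgpu_sgt_for_each_sgl, nvgpu_sgt_get_next, nvgpu_sgt_get_length) mirror nvgpu's SGT API as best recalled and should be treated as assumptions.

    #define nvgpu_sgt_for_each_sgl(sgl, sgt)                  \
            for ((sgl) = (sgt)->sgl;                          \
                 (sgl) != NULL;                               \
                 (sgl) = nvgpu_sgt_get_next(sgt, sgl))

    /*
     * Usage: the advance-to-next step can no longer be forgotten.
     *
     *     void *sgl;
     *     u64 total = 0;
     *
     *     nvgpu_sgt_for_each_sgl(sgl, sgt)
     *             total += nvgpu_sgt_get_length(sgt, sgl);
     */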
* gpu: nvgpu: skip clk gating prog for sim/emu.    (Deepak Goyal, 2017-10-04)

    For Simulation/Emulation platforms, clock gating should be skipped as it
    is not supported. Added new flags "can_"X"lcg" to check platform
    capability before doing SLCG, BLCG and ELCG.

    Bug 200314250

    Change-Id: I4124d444a77a4c06df8c1d82c6038bfd457f3db0
    Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1566049
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: rename ops.mm.get_physical_addr_bits    (Alex Waterman, 2017-10-04)

    Rename get_physical_addr_bits and related functions to something that
    more clearly conveys what they are doing. The basic idea of these
    functions is to translate from a physical GPU address to an IOMMU GPU
    address. To do that, a particular bit (that varies from chip to chip) is
    added to the physical address.

    JIRA NVGPU-68

    Change-Id: I536cc595c4397aad69a24f740bc74db03f52bc0a
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1542966
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add common vaddr translate function    (Alex Waterman, 2017-10-04)

    Add a function to do address translation for IOMMU capable GPUs. When an
    iGPU is behind an IOMMU, it can pick whether to use that IOMMU for
    translation by adding a bit to physical addresses. This function takes
    care of that.

    However, this required an abstracted nvgpu_iommuable() API to check
    whether a GPU is behind an IOMMU. This patch adds that API for Linux.

    JIRA NVGPU-68

    Change-Id: I489d14475167c019c294407372395df78c8b5feb
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1542965
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
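A hedged sketch of the translation described above. nvgpu_iommuable() is the API this commit introduces; the HAL name for the chip-specific bit position (get_iommu_bit) is an assumption.

    u64 nvgpu_mem_iommu_translate(struct gk20a *g, u64 phys)
    {
            /*
             * A discrete GPU, or an iGPU with no IOMMU in front of it,
             * just uses the raw physical address.
             */
            if (!nvgpu_iommuable(g) || g->ops.mm.get_iommu_bit == NULL)
                    return phys;

            /*
             * Setting this chip-specific bit routes the access through the
             * IOMMU instead of straight to memory.
             */
            return phys | (1ULL << g->ops.mm.get_iommu_bit(g));
    }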
* gpu: nvgpu: Remove sg_phys() from GMMU code    (Alex Waterman, 2017-10-04)

    Remove the last sg_phys() call from the GMMU code and replace it with a
    generic nvgpu_mem API. This new API, nvgpu_mem_get_phys_addr(), returns
    the physical address of an nvgpu_mem struct. Also, implement this new
    API in the Linux specific nvgpu_mem code since it requires access to the
    underlying SGT/SGL.

    JIRA NVGPU-68

    Change-Id: Idf88701a2a8515464c658c26e0de493c82ff850d
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1542964
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
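A hedged sketch of the Linux implementation of that API: the sg_phys() usage is now confined to the Linux nvgpu_mem code, where the SGT is available. The priv.sgt field path is an assumption; only the sysmem case is shown.

    u64 nvgpu_mem_get_phys_addr(struct gk20a *g, struct nvgpu_mem *mem)
    {
            /*
             * Vidmem buffers would be resolved through the vidmem page
             * allocator instead; only the sysmem case is sketched here.
             */
            return sg_phys(mem->priv.sgt->sgl);
    }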
* gpu: nvgpu: remove GR falcons bootstrap support using VA    (Mahantesh Kumbar, 2017-10-03)

    - GR falcons bootstrap can be done using a physical or virtual address
      by setting the flag usevamask in the PMU interface
      PMU_ACR_CMD_ID_BOOTSTRAP_MULTIPLE_FALCONS command.
    - With this change, always set physical address support & remove virtual
      address support along with the related code.
    - Removed Linux specific code used to get info regarding the WPR VA.

    JIRA NVGPU-128

    Change-Id: Id58f3ddc4418d61126f2a4eacb50713d278c10a0
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1572468
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
* gpu: nvgpu: gp106: Qualify unsigned HW constants    (Terje Bergstrom, 2017-09-27)

    Re-generate hardware headers so that all unsigned constants are
    qualified with postfix U. This removes the need for compiler to do
    implicit signed->unsigned conversions.

    Change-Id: I4221521e00442b044ff70007b7971f44cc3c4f67
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1567986
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
* gpu: nvgpu: gp10b: Qualify unsigned HW constants    (Terje Bergstrom, 2017-09-27)

    Re-generate hardware headers so that all unsigned constants are
    qualified with postfix U. This removes the need for compiler to do
    implicit signed->unsigned conversions.

    Change-Id: I33d46bb103d083316266eb1d325ca9f1525bf047
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1567985
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
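For illustration, this is the shape a regenerated header accessor takes once every constant carries a U suffix, so arithmetic on the returned values stays unsigned. The register name and value below are made up for the example.

    static inline u32 example_reg_r(void)
    {
            return 0x00400100U;    /* was 0x00400100 before regeneration */
    }

    static inline u32 example_field_enabled_f(void)
    {
            return 0x1U;
    }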