path: root/drivers/gpu/nvgpu/include
* gpu: nvgpu: Add synchronization to comptag alloc and clearing  (Sami Kiminki, 2017-11-15)

    Comptags allocation and clearing were not synchronized for a buffer.
    Fix this race by serializing the operations with the gk20a_dmabuf_priv
    lock. While doing that, add an error check to the cbc_ctrl call.

    Bug 1902982

    Change-Id: Icd96f1855eb5e5340651bcc85849b5ccc199b821
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1597904
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Always do full buffer compbits allocs  (Sami Kiminki, 2017-11-15)

    Remove the parameter 'lines' from gk20a_alloc_or_get_comptags() and
    nvgpu_ctag_buffer_info. We're always doing full-buffer allocs anyway,
    so this simplifies the code a bit.

    Bug 1902982

    Change-Id: Iacfc9cdba8cb75b31a7d44b175660252e09d605d
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1597131
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Simplify compbits alloc and add needs_clear  (Sami Kiminki, 2017-11-15)

    Simplify compbits alloc by making the alloc function re-callable for
    the buffer and making it return the comptags info. This simplifies the
    calling code: alloc_or_get vs. get + alloc + get again.

    Add tracking of whether the allocated compbits need clearing before
    they can be used in PTEs. We do this because clearing is part of the
    gmmu map call on vgpu, which can fail.

    Bug 1902982

    Change-Id: Ic4ab8d326910443b128e82491d302a1f49120f5b
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1597130
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
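A minimal sketch of the caller pattern this implies; the exact parameter
list and struct fields are assumptions based on the message, and
clear_comptags() is a hypothetical helper:

    struct gk20a_comptags comptags;
    int err;

    /* Re-callable: returns the existing comptags if the buffer already
     * has them, otherwise attempts the allocation. */
    err = gk20a_alloc_or_get_comptags(g, os_buf, &g->gr.comp_tags,
                                      &comptags);
    if (err)
            return err;

    if (comptags.lines != 0 && comptags.needs_clear) {
            /* Clearing can fail (on vgpu it is part of the gmmu map
             * call), so clear before the compbits are used in PTEs. */
            err = clear_comptags(g, os_buf, &comptags);
            if (err)
                    return err;
    }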
* gpu: nvgpu: Clean up comptag data structs and alloc  (Sami Kiminki, 2017-11-15)

    Clean up the comptag-related data structures and allocation logic.

    The most important change is that we only ever try comptag allocation
    once, to prevent incorrect map aliasing. If we were to retry the
    allocation on further map calls, the following situation would become
    possible:

    (1) Request a compressible-kind mapping for a buffer. Comptag alloc
        fails and we proceed with the incompressible-kind fallback.
    (2) Request another compressible-kind mapping for the buffer. The
        comptag alloc retry succeeds and now we use the compressible kind.
    (3) After writes through the compressible-kind mapping, the buffer is
        no longer legible via the fallback incompressible-kind mapping.

    The other changes are about removing the unused comptag-related fields
    in gk20a_comptags and nvgpu_mapped_buf, and retrieving comptags info
    only for compressible buffers. We also make nvgpu_ctag_buffer_info and
    nvgpu_vm_compute_compression private mm/vm.c definitions, since they
    are not used elsewhere.

    Bug 1902982

    Change-Id: I0c9fe48ccc585a80dd2c05ec606a079c1c1d41f1
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1595153
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: wrapper for checking if bpmp running  (Seema Khowala, 2017-11-14)

    Add an nvgpu_is_bpmp_running API for checking whether bpmp is running
    or not. This API calls tegra_bpmp_running() and returns the value
    returned by tegra_bpmp_running().

    Bug 2018223

    Change-Id: I42c1dbec65733fdc89a8fc3846e8c3afb2dcfb8d
    Signed-off-by: Seema Khowala <seemaj@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1595349
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
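The wrapper is small enough to sketch in full; the header that declares
tegra_bpmp_running() is an assumption:

    #include <soc/tegra/tegra_bpmp.h> /* assumed home of tegra_bpmp_running() */

    bool nvgpu_is_bpmp_running(void)
    {
            return tegra_bpmp_running();
    }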
* gpu: nvgpu: Pass DMA allocation flags correctly  (David Gilhooley, 2017-11-14)

    There are flags that need to be passed to both dma_alloc and sg_alloc
    together. Update nvgpu_dma_alloc_flags_sys to always pass flags.

    Bug 1930032

    Change-Id: I10c4c07d7b518d9ab6c48dd7a0758c68750d02a6
    Signed-off-by: David Gilhooley <dgilhooley@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1596848
    Reviewed-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Hard code map_buffer_batch_limit  (Terje Bergstrom, 2017-11-13)

    Add a hard-coded #define for map_buffer_batch_limit and use that
    instead of querying it from GPU characteristics. Also add an
    nvgpu_is_enabled() flag for disabling batch mapping, and set
    map_buffer_batch_limit to zero if batch mapping is disabled.

    JIRA NVGPU-388

    Change-Id: Ic91feea638d0f47c5c22321886cfc75e97259dc3
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1593690
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
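A sketch of the described pattern; the flag name and limit value below are
illustrative assumptions:

    #define NVGPU_VM_MAP_BUFFER_BATCH_LIMIT 256U /* illustrative value */

    /* Zero disables batch mapping entirely. */
    gpu->map_buffer_batch_limit =
            nvgpu_is_enabled(g, NVGPU_SUPPORT_MAP_BUFFER_BATCH) ?
            NVGPU_VM_MAP_BUFFER_BATCH_LIMIT : 0U;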
* gpu: nvgpu: add include path for rmos vm.h  (Sourab Gupta, 2017-11-13)

    The patch adds the include path for the rmos vm.h header file.

    Change-Id: Ib686424381cd2a4af58d6ca737e9ce0863a9e6e5
    Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1589109
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: VM map path refactoring  (Alex Waterman, 2017-11-10)

    Final VM mapping refactoring. Move most of the logic in the VM map
    path to the common/mm/vm.c code and use the generic APIs previously
    implemented to deal with comptags and map caching.

    This also updates the mapped_buffer struct to finally be free of the
    Linux dma_buf and scatter-gather table pointers. These are replaced
    with the nvgpu_os_buffer struct.

    JIRA NVGPU-30
    JIRA NVGPU-71
    JIRA NVGPU-224

    Change-Id: If5b32886221c3e5af2f3d7ddd4fa51dd487bb981
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1583987
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add nvgpu_os_buffer  (Alex Waterman, 2017-11-10)

    Add a generic nvgpu_os_buffer type, defined by each OS, to abstract a
    "user" buffer. This allows the comptag interface to be used in the
    core code.

    The end goal of this patch is to allow the OS-specific mapping code to
    call a generic mapping function that handles most of the mapping
    logic. The problem is that a lot of the logic involves comptags, which
    are highly dependent on the operating system's buffer management
    scheme. With this, each OS can implement the buffer comptag mechanics
    however it wishes without the core MM code caring.

    JIRA NVGPU-30
    JIRA NVGPU-223

    Change-Id: Iaf64bc52e01ef3f262b4f8f9173a84384db7dc3e
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1583986
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
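A plausible Linux-side definition, assuming the dma_buf and device fields;
QNX would supply its own layout, and the core code only ever passes the
struct around opaquely:

    struct nvgpu_os_buffer {
            struct dma_buf *dmabuf; /* the underlying user buffer */
            struct device  *dev;    /* device the buffer is attached to */
    };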
* gpu: nvgpu: Make buf alignment generic  (Alex Waterman, 2017-11-10)

    Drastically simplify the alignment computation for buffers getting
    mapped, and move it into the SGT code. An SGT is all that is needed
    for computing the alignment. However, this did require that a new SGT
    op be added:

      nvgpu_sgt_iommuable()

    This function returns true if the passed SGT is IOMMU'able and must be
    implemented by an SGT implementation that has IOMMU'able buffers. If
    this function is left as NULL then it is assumed that the buffer is
    not IOMMU'able.

    Also clean up the parameter ordering convention among all nvgpu_sgt
    functions. Previously there was a mishmash of different parameter
    orderings. This patch now standardizes on the gk20a-first approach
    seen everywhere else in the driver.

    JIRA NVGPU-30
    JIRA NVGPU-246
    JIRA NVGPU-71

    Change-Id: Ic4ab7b752847cf795c7cfafed5a07818217bba86
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1583985
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
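A sketch of the optional-op convention described here (note the gk20a-first
parameter ordering the message standardizes on); the ops-table field name
is an assumption:

    bool nvgpu_sgt_iommuable(struct gk20a *g, struct nvgpu_sgt *sgt)
    {
            /* A NULL op means the SGT implementation has no
             * IOMMU'able buffers. */
            if (sgt->ops->sgt_iommuable)
                    return sgt->ops->sgt_iommuable(g, sgt);
            return false;
    }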
* gpu: nvgpu: Remove PTE kind logic  (Sami Kiminki, 2017-11-10)

    Since NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL was made mandatory,
    the kernel does not need to know the details about the PTE kinds
    anymore. Thus, we can remove the kind_gk20a.h header and the code
    related to kind table setup, as well as simplify the buffer mapping
    code a bit.

    Bug 1902982

    Change-Id: Iaf798023c219a64fb0a84da09431c5ce4bc046eb
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1560933
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: cyclestat characteristics fixes  (Peter Daifuku, 2017-11-08)

    Fix characteristics for cyclestats:
    - SUPPORT_TSG and SUPPORT_CYCLE_STATS_SNAPSHOT were assigned the same
      value
    - For vgpu, SUPPORT_CYCLE_STATS was set redundantly (but differently)
    - For vgpu, if the css buffer size is 0, set the support flag to False

    JIRA ESRM-88
    Bug 200296210

    Change-Id: Iaf98dafec55f171b5968c2a8248290284bf30922
    Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1593939
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Use only contig CBCs  (Alex Waterman, 2017-11-08)

    Modify the LTC code to only use a contiguous CompBit Cache (CBC). The
    original code had two allocation schemes: "physical" and "virtual" -
    what they meant was virtually contiguous or physically contiguous.
    The CBC must appear contiguous to the GPU, either through the IOMMU or
    from physical pages allocated contiguously.

    This change makes the CBC get allocated with the FORCE_CONTIGUOUS flag
    if the GPU is not IOMMU'able. If we can get contiguous memory through
    the IOMMU then there is no need to force the underlying pages to be
    contiguous. However, not all GPUs may be IOMMU'able, so we do need to
    handle that case.

    Also delete the gk20a/ltc_gk20a.[ch] code. All that remained in these
    files was the CBC alloc functions, which were completely chip
    agnostic. As a result these functions were consolidated and moved to
    common/ltc.c.

    Bug 2015747

    Change-Id: I3f41961b4f94378b954e7502a6b27cf0bc627375
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1593666
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
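A sketch of the allocation policy, assuming an nvgpu_iommuable() query and
the NVGPU_DMA_FORCE_CONTIGUOUS flag name; the surrounding field names are
assumptions as well:

    unsigned long flags = 0UL;
    int err;

    /* No IOMMU: the backing pages themselves must be contiguous. */
    if (!nvgpu_iommuable(g))
            flags |= NVGPU_DMA_FORCE_CONTIGUOUS;

    err = nvgpu_dma_alloc_flags_sys(g, flags, compbit_backing_size,
                                    &gr->compbit_store.mem);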
* gpu: nvgpu: spew err for pbus interrupt  (Seema Khowala, 2017-11-08)

    Spew an error message for pri_squash, fecserr and pri_timeout pbus
    interrupts.

    If FECS_TGT is set in timeout_save_0, the addr and write fields are
    not reliable. Also, timeout_save_1 is unreliable. Both squash and
    timeout should have correct data most of the time. Even for FECS_TGT,
    a timeout for a read should indicate the correct transaction, as Host
    only supports one read at a time. It's mostly just writes to FECS that
    have potentially incorrect information.

    Bug 200246808
    Bug 200350539

    Change-Id: I8a992d924ff6c740a8dacecaaaf4ef257756d01d
    Signed-off-by: Seema Khowala <seemaj@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1568860
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Introduce queries for big page sizes  (Terje Bergstrom, 2017-11-08)

    Introduce query functions for the default big page size and the
    available big page sizes. Move initialization of GPU characteristics
    big page sizes to the GPU characteristics query function.

    JIRA NVGPU-259

    Change-Id: Ie66cc2fbfcd88205593056f8d5010ac2539c8bc2
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1593685
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add RMOS include path for rwsem.h  (Sourab Gupta, 2017-11-06)

    The patch adds the include path for the rmos header file corresponding
    to rwsem.h.

    Change-Id: I7f9cfac38e971e1d78e76968911df72669598b9d
    Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1591108
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Shashank Singh <shashsingh@nvidia.com>
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: support tuning per-ch deterministic opts  (Konsta Holtta, 2017-11-06)

    Add a new ioctl NVGPU_GPU_IOCTL_SET_DETERMINISTIC_OPTS to adjust
    deterministic options on a per-channel basis. Currently, the only
    supported option is to relax the no-railgating requirement on open
    deterministic channels. This also disallows submits on such channels
    until the railgate option is reset.

    Bug 200327089

    Change-Id: If4f0f51fd1d40ad7407d13638150d7402479aff0
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1554563
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
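Hypothetical userspace usage: the ioctl name is from the message above, but
the args layout, the per-channel struct, and the flag name below are
assumptions for illustration only:

    /* assumed per-channel option record */
    struct nvgpu_gpu_deterministic_opts ch_opts = {
            .channel_fd = channel_fd, /* deterministic channel to relax */
            .flags = NVGPU_GPU_DETERMINISTIC_OPTS_FLAGS_ALLOW_RAILGATING,
    };
    /* assumed top-level args: a count plus a user pointer */
    struct nvgpu_gpu_set_deterministic_opts_args args = {
            .num_channels = 1,
            .channels = (__u64)(uintptr_t)&ch_opts,
    };

    if (ioctl(ctrl_fd, NVGPU_GPU_IOCTL_SET_DETERMINISTIC_OPTS, &args) != 0)
            perror("NVGPU_GPU_IOCTL_SET_DETERMINISTIC_OPTS");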
* gpu: nvgpu: Return error code properly from nvgpu_vm_map_linux  (Sami Kiminki, 2017-11-03)

    The function nvgpu_vm_map_linux() used to return the GPU VA on a
    successful map, or 0 when things didn't go smoothly. However, this
    scheme does not propagate the actual map error back to userspace. So,
    modify the function a bit: return an error code, and return the GPU VA
    via a pointer on success.

    Bug 1705731

    Change-Id: I2174b5fbaf64dcb00f9567dab1c583d6ddfa5d78
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1590961
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
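The resulting caller pattern; all parameters other than the out-pointer are
assumed for illustration:

    u64 map_addr = 0;
    int err;

    err = nvgpu_vm_map_linux(vm, dmabuf, map_offset, flags, kind,
                             &map_addr);
    if (err)
            return err; /* the real map error now reaches userspace */

    /* success: map_addr holds the GPU VA */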
* Revert "gpu: nvgpu: fix patch buf count update for vidmem"  (Timo Alho, 2017-11-03)

    This reverts commit de399ccb0019513a5f9e8f2bcadb02486f99bc80.

    Bug 2012077

    Change-Id: Ie608c3b41aa91f9aaed3fad240ed882a0c6f1ea2
    Signed-off-by: Timo Alho <talho@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1591423
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: fix patch buf count update for vidmem  (Peter Daifuku, 2017-11-03)

    gr_gk20a_ctx_patch_write_begin() updates the patch buffer data_count
    when the associated graphics context memory buffer has been
    CPU-mapped; it was doing so by looking for a non-null cpu_va. However,
    if the graphics context has been allocated from vidmem, cpu_va is
    always 0, so we can't tell if nvgpu_mem_begin() was called for the
    context buffer or not.

    Instead:
    - add a cpu_accessible flag to the nvgpu_mem struct and set it in
      nvgpu_mem_begin()
    - return the value of that flag in nvgpu_mem_cpu_accessible()
    - gr_gk20a_ctx_patch_write_begin() now calls this new function instead
      of checking cpu_va

    Bug 2012077
    JIRA ESRM-74

    Change-Id: I8401699f30b4ae7154111721c25c7ec3ff95d329
    Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1587293
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
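The query itself is a one-liner; a minimal sketch, assuming the flag is a
bool member of struct nvgpu_mem:

    /* cpu_accessible is set in nvgpu_mem_begin(), per the list above. */
    bool nvgpu_mem_cpu_accessible(struct nvgpu_mem *mem)
    {
            return mem->cpu_accessible;
    }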
* gpu: nvgpu: Use nvgpu_is_enabled() for ASPM  (Terje Bergstrom, 2017-11-02)

    Convert disable_aspm and references to that field to use
    nvgpu_is_enabled(NVGPU_SUPPORT_ASPM). Initialize it from the
    gk20a_platform struct at probe time. This removes another dependency
    on struct gk20a_platform.

    JIRA NVGPU-259

    Change-Id: I32e30160f817ea275aa190dcf86c5fd594138d75
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1590124
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
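A sketch of the conversion; the setter name __nvgpu_set_enabled and the
caller are assumptions, while disable_aspm and the flag name come from the
message above:

    /* At probe time, seed the flag from the platform data (note the
     * inverted sense: the field disables, the flag enables). */
    __nvgpu_set_enabled(g, NVGPU_SUPPORT_ASPM, !platform->disable_aspm);

    /* Former users of platform->disable_aspm then become: */
    if (nvgpu_is_enabled(g, NVGPU_SUPPORT_ASPM))
            enable_pcie_aspm(g); /* hypothetical caller */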
* gpu: nvgpu: Split ctxsw_trace API into non-Linux component  (Alex Waterman, 2017-11-01)

    Split the ctxsw trace "core" API code into <nvgpu/ctxsw_trace.h>. This
    is not perfect, though, since there are some Linuxisms present in the
    HAL, and as such that code has to be hidden by the ctxsw tracing
    CONFIG. But this patch should work for QNX in that it will allow the
    code to build as long as CONFIG_GK20A_CTXSW_TRACE is not set.

    Also fix the copyright notice in the ctxsw code present under
    common/linux to be GPL.

    JIRA NVGPU-287

    Change-Id: I94715864caf335b7220185492e4629d713b025e0
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1589429
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove buffer_attrs struct  (Alex Waterman, 2017-11-01)

    Remove the buffer_attrs struct and replace it with a more streamlined
    nvgpu_ctag_buffer_info struct. This struct allows several different
    fields to all be passed by pointer to the various kind/compression
    functions in the VM map process.

    This patch also moves several comptag/kind related functions to the
    core vm.c code since these functions can be reused by other OSes.

    Change-Id: I2a0f0a1c4b554ce4c8f2acdbe3161392e717d3bf
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1583984
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove user_mapped from mapped_buf  (Alex Waterman, 2017-11-01)

    Remove the always-true field 'user_mapped' from the mapped_buf struct.
    All mapped_bufs are mapped by a user request since they always
    originate from a dma_buf (for Linux, that is). As such, a fair amount
    of logic could be deleted.

    Linux specific: the own_mem_ref field was also deleted. The logic of
    only storing a dma_buf ref when the buffer is mapped for the first
    time by a user is easy: when the mapped buffer is found in the map
    cache, release the outstanding dma_buf ref taken earlier on in the map
    path. If the map cache does not have the buffer, simply let the higher
    level map code keep the dma_buf ref. The dma_buf ref is released when
    the nvgpu_vm_unmap_system() call-back is called by the unmap path.

    JIRA NVGPU-30
    JIRA NVGPU-71

    Change-Id: I229d136713812a7332bdadd5ebacd85d983bbbf0
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1583983
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: VM unmap refactoring  (Alex Waterman, 2017-11-01)

    Re-organize the unmap code to be better split between OS-specific
    requirements and common core requirements. The new code flow works as
    follows:

    nvgpu_vm_unmap() is the primary entrance to the unmap path. It takes a
    VM and a GPU virtual address to unmap. There's also an optional batch
    mapping struct. This function is responsible for making sure there is
    a real buffer and that, if it's being called on a fixed mapping, the
    mapping will definitely be freed (since buffers are ref-counted). Then
    this function decrements the ref-count and returns.

    If the ref-count hits zero then __nvgpu_vm_unmap_ref() is called,
    which just calls __nvgpu_vm_unmap() with the relevant batch struct if
    present. This is where the real work is done. __nvgpu_vm_unmap()
    clears the GMMU mapping, removes the mapped buffer from the various
    lists and trees it may be in, and then calls the
    nvgpu_vm_unmap_system() function. This function handles any OS
    specific stuff and must be defined by all VM OS implementations.

    There's a shortcut used by some other core VM code to free mappings
    without going through nvgpu_vm_unmap(). Mostly they just directly
    decrement the mapping ref-count, which can then call
    __nvgpu_vm_unmap_ref() if the ref-count hits zero.

    JIRA NVGPU-30
    JIRA NVGPU-71

    Change-Id: Ic626d37ab936819841bab45214f027b40ffa4e5a
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1583982
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
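A condensed sketch of the entry point this describes; helper names follow
the message, while the lock name is an assumption and the fixed-mapping and
batch plumbing are elided:

    void nvgpu_vm_unmap(struct vm_gk20a *vm, u64 offset,
                        struct vm_gk20a_mapping_batch *batch)
    {
            struct nvgpu_mapped_buf *mapped_buffer;

            nvgpu_mutex_acquire(&vm->update_gmmu_lock);

            mapped_buffer = __nvgpu_vm_find_mapped_buf(vm, offset);
            if (mapped_buffer)
                    /* Drop one ref; at zero, __nvgpu_vm_unmap_ref()
                     * does the real unmap work described above. */
                    nvgpu_ref_put(&mapped_buffer->ref,
                                  __nvgpu_vm_unmap_ref);

            nvgpu_mutex_release(&vm->update_gmmu_lock);
    }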
* gpu: nvgpu: fix pte location functions  (David Nieto, 2017-11-01)

    Modify the recursive loop in pte_find to make sure it is targeting the
    proper pde page size.

    JIRA NVGPUGV100-36

    Change-Id: Ib3673d8d9f1bd3c907d532f9e2562ecdc5dda4af
    Signed-off-by: David Nieto <dmartineznie@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1586739
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove Linux headers from mm_gk20a.h  (Alex Waterman, 2017-10-27)

    Delete the Linux headers and make some modifications to get rid of the
    minor compilation issues that resulted:

    - Add <linux/iommu.h> to os_linux.h
    - Delete #if 0 code that "flushed" a buffer in gr_gk20a.c
    - Delete the FLUSH_CPU_DCACHE() macro
    - Move the cache flush definitions to <nvgpu/linux/vm.h> and include
      this header in sim_gk20a.c. This file will not be used by QNX so
      this should be fine.
    - Add <linux/pci_ids.h> to gp106/bios_gp106.c and gp106/mclk_gp106.c
    - Move a function to common/linux/dmabuf.h since it is a dmabuf
      related function and uses a struct device pointer as an argument

    JIRA NVGPU-30

    Change-Id: I11f56b98524c7fac3efa91b4686592130e5f8a46
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1585510
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Linux specific GPU characteristics flags  (Terje Bergstrom, 2017-10-26)

    Make the GPU characteristics flags specific to Linux code only. The
    rest of the driver is moved to using the nvgpu_is_enabled() API.

    JIRA NVGPU-259

    Change-Id: I2faf46ef64c964361c267887b28c9d19806d6d51
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1583876
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: move clk_arb to linux specific  (Mahantesh Kumbar, 2017-10-25)

    - The clock arbiter has a lot of Linux-dependent code, so move
      clk_arb.c to the common/linux folder and clk_arb.h to
      include/nvgpu/clk_arb.h; this move helps to unblock QNX.
    - QNX must implement the functions present under clk_arb.h as needed.

    JIRA NVGPU-33

    Change-Id: I38369fafda9c2cb9ba2175b3e530e40d0c746601
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1582473
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Begin reorganizing VM mapping/unmapping  (Alex Waterman, 2017-10-24)

    Move vm_priv.h to <nvgpu/linux/vm.h> and rename nvgpu_vm_map() to
    nvgpu_vm_map_linux(). Also remove a redundant unmap function from the
    unmap path. These changes are the beginning of reworking the nvgpu
    Linux mapping and unmapping code. The rest of this patch is just the
    necessary changes to use the new map function naming and the new path
    to the Linux vm header.

    Patch Series Goal
    -----------------

    There are two major goals for this patch series. Note that these goals
    are not achieved in this patch; there will be subsequent patches.

    1. Remove all last vestiges of Linux code from common/mm/vm.c
    2. Implement map caching in the common/mm/vm.c code

    To accomplish this, firstly the VM mapping code needs to have the
    struct nvgpu_mapped_buf data struct be completely Linux free. That
    means implementing an abstraction for it to hold the Linux stuff that
    mapped buffers carry about (SGT, dma_buf). This is why the vm_priv.h
    code has been moved: it will need to be included by the <nvgpu/vm.h>
    header so that the OS specific struct can be pulled into struct
    nvgpu_mapped_buf.

    Next, renaming nvgpu_vm_map() to nvgpu_vm_map_linux() is in
    preparation for adding a new nvgpu_vm_map() that handles the map
    caching with nvgpu_mapped_buf. The mapping code is fairly
    straightforward: nvgpu_vm_map() does OS generic stuff; each OS then
    calls this function from an nvgpu_vm_map_<OS>() or the like that does
    any OS specific adjustments/management.

    Freeing buffers is much more tricky, however. The maps are all
    reference counted since userspace does not track buffers and expects
    us to handle this instead. Ugh! Since there are ref-counts, the free
    code will require a callback into the OS specific code, because the OS
    specific code cannot free a buffer directly. This makes the path for
    freeing a buffer quite convoluted.

    JIRA NVGPU-30
    JIRA NVGPU-71

    Change-Id: I5e0975f60663a0d6cf0a6bd90e099f51e02c2395
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1578896
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: David Martinez Nieto <dmartineznie@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Cleanup generic MM code in gk20a/mm_gk20a.c  (Alex Waterman, 2017-10-24)

    Move much of the remaining generic MM code to a new common location:
    common/mm/mm.c. Also add a corresponding <nvgpu/mm.h> header. This
    mostly consists of init and cleanup code to handle the common MM data
    structures like the VIDMEM code, address spaces for various engines,
    etc.

    A few more in-depth changes were made as well:

    1. alloc_inst_block() has been added to the MM HAL. This used to be
       defined directly in the gk20a code, but it used a register. As a
       result, if this register hypothetically changes in the future, it
       would need to become a HAL anyway. This patch preempts that and for
       now just defines all HALs to use the gk20a version.

    2. Rename as much as possible: global functions are, for the most
       part, prepended with nvgpu (there are a few exceptions which I have
       yet to decide what to do with). Functions that are static are
       renamed to be as consistent with their functionality as possible,
       since in some cases function effect and function name have
       diverged.

    JIRA NVGPU-30

    Change-Id: Ic948f1ecc2f7976eba4bb7169a44b7226bb7c0b5
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1574499
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: enhance class error debug info  (seshendra Gadagottu, 2017-10-23)

    Updated gk20a_gr_handle_class_error with sub-channel info and
    MME-related info. Also print the correct method info from
    isr_data->offset by left-shifting it by 2.

    Generated the following hw definitions for gk20a/gm20b/gp10b/gp106 to
    dump relevant data in gk20a_gr_handle_class_error:
      gr_trapped_addr_mme_generated_v
      gr_trapped_addr_datahigh_v
      gr_trapped_addr_priv_v
      gr_trapped_data_lo_r
      gr_trapped_data_mme_r
      gr_trapped_data_mme_pc_v

    Bug 2003671

    Change-Id: I02e15ef16d7498b6a7dc2af547a14e84d570e8a7
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1574061
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: falcon interface/HAL update  (Mahantesh Kumbar, 2017-10-21)

    - Add methods to read/write the falcon mailbox at the interface layer
    - Created falcon mailbox read/write HAL
    - Added HAL methods to read/write the mailbox
    - Added a macro to get the next block based on address
    - Added a macro to get the IMEM tag using the IMEM address
    - Added the ucode header format

    Change-Id: I879b1df4538d403cac40fd4ed6e723190f62922c
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    (cherry picked from commit 30e8b76a7be9d9e6d8225bdc08e441f408692f63)
    Reviewed-on: https://git-master.nvidia.com/r/1509469
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add nvdec falcon support  (Mahantesh Kumbar, 2017-10-21)

    - Added an "nvgpu_falcon nvdec_flcn" member to gk20a
    - Added the base address and falcon id of the NVDEC falcon
    - Included the nvdec falcon so it can access the common falcon code
    - Enabled nvdec falcon support for GP106
    - Disabled nvdec falcon support for iGPU
    - Made the call to enable nvdec falcon support if supported

    Change-Id: Ia928d082275a720e4e8c6852384e489c8ec444f8
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    (cherry picked from commit 3d80aeff295bad8365af6022555ad151f1a32cf6)
    Reviewed-on: https://git-master.nvidia.com/r/1564305
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add VIDMEM debugging  (Alex Waterman, 2017-10-20)

    Add some VIDMEM debugging to help track the background free thread and
    allocs/frees.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: I88471b29d2a42c104666b111d0d3014110c9d56c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1576330
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Convert VIDMEM work_struct to thread  (Alex Waterman, 2017-10-20)

    Convert the work_struct used by the vidmem background clearing to a
    thread to make it more cross-platform. The thread waits on a condition
    variable to determine when work needs to be done. The signal comes
    from the DMA API when it enqueues a new nvgpu_mem that needs clearing.

    Add logic for handling suspend: the CE cannot be accessed while the
    GPU is suspended. As such, the background thread must be paused while
    the GPU is suspended and the CE is not available.

    Several other changes were also made:

    o Move the code that enqueues an nvgpu_mem from the DMA API code to a
      function in the VIDMEM code.
    o Move nvgpu_vidmem_get_pending_alloc() to the Linux specific code as
      this function is only used there. It's a trivial function that QNX
      can easily implement as well.
    o Remove the was_empty logic from the enqueue. Now just always signal
      the condition variable when a new nvgpu_mem comes in.
    o Move CE suspend to after MM suspend.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: Ie9286ae5a127c3fced86dfb9794e7d81eab0491c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1574498
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
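A sketch of the thread body this describes. NVGPU_COND_WAIT and
nvgpu_thread_should_stop() are nvgpu primitives; the vidmem field names and
the clear helper are assumptions, and suspend handling is elided:

    static int nvgpu_vidmem_clearing_thread(void *arg)
    {
            struct mm_gk20a *mm = arg;

            while (!nvgpu_thread_should_stop(&mm->vidmem.clearing_thread)) {
                    struct nvgpu_mem *mem;

                    /* Sleep until the DMA path signals new work, or
                     * until we are asked to stop. */
                    NVGPU_COND_WAIT(&mm->vidmem.clearing_thread_cond,
                            !nvgpu_list_empty(&mm->vidmem.clear_list_head) ||
                            nvgpu_thread_should_stop(
                                    &mm->vidmem.clearing_thread), 0);

                    /* Drain the queue; pausing while the CE is down
                     * (suspend) is elided here. */
                    while ((mem = nvgpu_vidmem_get_pending_alloc(mm)) != NULL)
                            nvgpu_vidmem_clear_mem(mm, mem); /* hypothetical */
            }

            return 0;
    }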
* gpu: nvgpu: Refactoring nvgpu_vm functions  (Alex Waterman, 2017-10-18)

    Refactor the last nvgpu_vm functions from the mm_gk20a.c code. This
    removes some usages of dma_buf from the mm_gk20a.c code, too, which
    helps make mm_gk20a.c less Linux specific.

    Also delete some header files that are no longer necessary in
    gk20a/mm_gk20a.c which are Linux specific. The mm_gk20a.c code is now
    quite close to being Linux free.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: I72b370bd85a7b029768b0fb4827d6abba42007c3
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1566629
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Move dma_buf usage from mm_gk20a.c  (Alex Waterman, 2017-10-18)

    Move most of the dma_buf usage present in the mm_gk20a.c code out to
    Linux specific code and some common/mm code. There are two primary
    groups of code:

    1. dma_buf priv field code (for holding comptag data)
    2. Comptag usage that relies on dma_buf pointers

    For (1) the dma_buf code was simply moved to common/linux/dmabuf.c
    since most of this code is clearly Linux specific.

    The comptag code was a bit more complicated since there are two parts
    to it. Firstly, there's the code that manages the comptag memory. This
    is essentially a simple allocator. This was moved to
    common/mm/comptags.c since it can be shared across all chips. The
    second set of code is moved to common/linux/comptags.c since it is the
    interface between dma_bufs and the comptag memory.

    Two other fixes were done as well:

    - Add struct gk20a to the comptag allocator init so that the proper
      nvgpu_vzalloc() function could be used.
    - Add necessary includes to common/linux/vm_priv.h.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: I96c57f2763e5ebe18a2f2ee4b33e0e1a2597848c
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1566628
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* nvgpu: fix multiple build issues for QNX  (Shashank Singh, 2017-10-17)

    - The timers and bug header files should be included directly. Linux
      may be getting them via indirect includes. Also, QNX requires
      non-static functions to be declared explicitly.

    Change-Id: I2458654f535d8079347e4a0be744530f56388238
    Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1577527
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
    Tested-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: compile nvgpu allocator for QNX  (Sourab Gupta, 2017-10-17)

    The patch has the changes for compilation of the common nvgpu
    allocator for QNX. This includes some cross-OS compilation changes and
    removing some Linux'isms from the allocator.

    Change-Id: Ib1ecceec77b497513a196597bff4441615577548
    Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1540306
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: Abstract rw_semaphore implementation  (Terje Bergstrom, 2017-10-14)

    Abstract the implementation of rw_semaphore. In Linux it's implemented
    in terms of the native rw_semaphore. Change deterministic_busy to use
    the new implementation.

    JIRA NVGPU-259

    Change-Id: Ia9c1b6e397581bff7711c5ab6fb76ef6d23cff87
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1570405
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
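A plausible shape for the Linux side of the abstraction; the wrapper struct
and function names are assumptions modeled on nvgpu's other lock wrappers:

    struct nvgpu_rwsem {
            struct rw_semaphore rwsem; /* native Linux rw_semaphore */
    };

    static inline void nvgpu_rwsem_down_read(struct nvgpu_rwsem *rwsem)
    {
            down_read(&rwsem->rwsem);
    }

    static inline void nvgpu_rwsem_up_read(struct nvgpu_rwsem *rwsem)
    {
            up_read(&rwsem->rwsem);
    }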
* gpu: nvgpu: include nvgpu/types in pramin.h  (Deepak Nibade, 2017-10-13)

    In include/nvgpu/pramin.h, we wrongly include the Linux-specific
    header linux/types.h. Since this is a common header, we should include
    nvgpu/types.h, which is generic and Linux independent.

    Jira NVGPU-259

    Change-Id: I0acff92b3de9431e615faa6de26306cccd5ca443
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1576930
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: enhance pbdma debug info  (seshendra Gadagottu, 2017-10-13)

    Enhanced the pbdma error output to print the pbdma interrupt error.

    Generated the following hw definitions to dump relevant data:
      pbdma_gp_shadow_0_r
      pbdma_gp_shadow_1_r

    Updated gk20a_dump_pbdma_status to dump this additional info:
      pbdma_gp_put_r
      pbdma_gp_get_r
      pbdma_gp_shadow_0_r
      pbdma_gp_shadow_1_r

    Bug 2003671

    Change-Id: Iaa75d936e00470a2b8d1151f60dbeb741b3f9bce
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1572182
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Abstract IO aperture accessors  (Terje Bergstrom, 2017-10-13)

    Add an abstraction of IO aperture accessors. Add new functions
    gk20a_io_exists() and gk20a_io_valid_reg() to remove dependencies on
    aperture fields from common code.

    Implement the Linux version of the abstraction by moving gk20a_readl()
    and gk20a_writel() to a new Linux-specific io.c. Move the fields
    defining the IO aperture to nvgpu_os_linux. Add t19x specific IO
    aperture initialization functions and add a t19x specific section to
    nvgpu_os_linux.

    JIRA NVGPU-259

    Change-Id: I09e79cda60d11a20d1099a9aaa6d2375236e94ce
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1569698
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
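A Linux-side sketch; the container-of helper nvgpu_os_linux_from_gk20a()
and the regs field name are assumptions, and the real accessor may add
barriers and tracing:

    void gk20a_writel(struct gk20a *g, u32 r, u32 v)
    {
            struct nvgpu_os_linux *l = nvgpu_os_linux_from_gk20a(g);

            /* The aperture now lives in the Linux container, not in g. */
            if (likely(l->regs))
                    writel(v, l->regs + r);
    }

    bool gk20a_io_exists(struct gk20a *g)
    {
            return nvgpu_os_linux_from_gk20a(g)->regs != NULL;
    }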
* gpu: nvgpu: Reduce usage of nvgpu_vidmem_get_page_alloc  (Alex Waterman, 2017-10-13)

    Reduce the usage of nvgpu_vidmem_get_page_alloc() and friends as much
    as possible. This reduces the dependency of nvgpu on Linux SGLs. SGLs
    still need to be used, however, since sharing buffers in userspace is
    done by dma_buf FD. The best way to pass the vidmem buf through the
    dma_buf is by SGL pointer.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: Ide0e9e5a557f00aa63b063be085042101a5b34ee
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1540709
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Separate vidmem alloc from Linux code  (Alex Waterman, 2017-10-13)

    Split the core vidmem allocation from the Linux component of vidmem
    allocation. The core vidmem allocation allocates the nvgpu_mem struct
    that defines the vidmem buffer in the core MM code. The Linux code now
    allocates some Linux-specific stuff (dma_buf, etc.) and also allocates
    the core vidmem buf.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: I88e87e0abd5ec714610eacc6eac17e148bcee3ce
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1540708
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Rename vidmem APIs  (Alex Waterman, 2017-10-13)

    Rename the VIDMEM APIs to be prefixed by nvgpu_ to ensure consistency
    and that all the non-static vidmem functions are properly namespaced.

    JIRA NVGPU-30
    JIRA NVGPU-138

    Change-Id: I9986ee8f2c8f95a4b7c5e2b9607bc1e77933ccfc
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1540707
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add syncpoint read map  (seshendra Gadagottu, 2017-10-13)

    For the sync-point read map:

    1. Added an nvgpu_mem memory allocator in the gk20a struct; allocated
       memory for this in gk20a_finalize_poweron() and freed this memory
       in gk20a_remove().
    2. Added "u64 syncpt_ro_map_gpu_va" in the vm_gk20a struct for the
       read map in the vm.

    Added nvgpu_quiesce() in nvgpu_remove() before freeing the syncpoint
    read map to ensure that nvgpu is idle.

    JIRA GPUT19X-2

    Change-Id: I7cbfec57f0992682dd833a1b0d92d694bcaf1eb3
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1514338
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add API to create nvgpu_mem from phys  (seshendra Gadagottu, 2017-10-13)

    Added a new memory API, _nvgpu_mem_create_from_phys, for creating an
    nvgpu_mem from a physical memory aperture. With this new API, avoided
    the usage of the Linux-specific "struct page" in general code and
    moved this code to common Linux code. This API internally uses
    __nvgpu_mem_create_from_pages for creating the nvgpu_mem from physical
    pages.

    JIRA GPUT19X-2

    Change-Id: Iaf0193a7c33e71422e4ddabde01edf46f5a81794
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1571073
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
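A sketch of how the new API plausibly composes with the pages-based
constructor it names; the signature and page-translation details are
assumptions:

    int _nvgpu_mem_create_from_phys(struct gk20a *g, struct nvgpu_mem *dest,
                                    u64 src_phys, int nr_pages)
    {
            struct page **pages = nvgpu_kmalloc(g,
                                                sizeof(*pages) * nr_pages);
            int i, ret;

            if (!pages)
                    return -ENOMEM;

            /* Translate the physical aperture into a page array... */
            for (i = 0; i < nr_pages; i++)
                    pages[i] = phys_to_page(src_phys + PAGE_SIZE * (u64)i);

            /* ...and reuse the existing pages-based constructor. */
            ret = __nvgpu_mem_create_from_pages(g, dest, pages, nr_pages);
            nvgpu_kfree(g, pages);

            return ret;
    }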