path: root/drivers/gpu/nvgpu/common/mm/mm.c
* gpu: nvgpu: add cg and pg function (Debarshi Dutta, 2019-05-09)

  Add new power/clock gating functions that can be called by other units.
  New clock gating functions will reside in cg.c under the
  common/power_features/cg unit. New power gating functions will reside in
  pg.c under the common/power_features/pg unit.

  Use nvgpu_pg_elpg_disable and nvgpu_pg_elpg_enable to disable/enable
  elpg, including in the gr_gk20a_elpg_protected macro used to access gr
  registers. Add cg_pg_lock to make elpg_enabled, elcg_enabled,
  blcg_enabled and slcg_enabled thread safe.

  JIRA NVGPU-2014

  Change-Id: I00d124c2ee16242c9a3ef82e7620fbb7f1297aff
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/2025493
  Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
  (cherry-picked from c90585856567a547173a8b207365b3a4a3ccdd57 in dev-kernel)
  Reviewed-on: https://git-master.nvidia.com/r/2108406
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Bibek Basu <bbasu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
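  A minimal sketch of the elpg-protected register access this commit
  describes, assuming the nvgpu_pg_elpg_disable()/nvgpu_pg_elpg_enable()
  API named in the message; the wrapper function and the gk20a_readl()
  accessor are illustrative, not the actual gr_gk20a_elpg_protected code:

    static int read_gr_reg_elpg_protected(struct gk20a *g, u32 offset, u32 *val)
    {
            int err;

            /* Keep ELPG disabled while touching GR registers. */
            err = nvgpu_pg_elpg_disable(g);
            if (err != 0) {
                    return err;
            }

            *val = gk20a_readl(g, offset); /* assumed register accessor */

            /* Re-enable ELPG once the access is done. */
            return nvgpu_pg_elpg_enable(g);
    }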
* gpu: nvgpu: Move FB reset to MC unit (Terje Bergstrom, 2018-09-27)

  FB reset is done by accessing an MC register. Move the code to the MC
  unit.

  JIRA NVGPU-954

  Change-Id: I1636887af805f016da5490af65e808f9ac015cde
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1823385
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: move header location of gk20a.h (Debarshi Dutta, 2018-09-25)

  Update header path of gk20a.h in files present in common/ to
  <nvgpu/gk20a.h>.

  Jira NVGPU-597

  Change-Id: I3431dae93ada9bd561454c89a0b99c5292ab4a8d
  Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1832024
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add HALs to implement pdb cache WAR (Deepak Nibade, 2018-09-24)

  We have a h/w bug on some chips and need the following additional HALs
  to implement a s/w WAR:

    gops.fifo.init_pdb_cache_war()
    gops.fifo.deinit_pdb_cache_war()
    gops.fb.apply_pdb_cache_war()

  Add a new API nvgpu_init_mm_pdb_cache_war() to initialize the WAR
  sequence and call it from MM initialization, before setting up the rest
  of the memory management units. Deinitialize the WAR while cleaning up
  MM support.

  Add a pdb_cache_war_mem member to gk20a to hold all the memory needed
  for the WAR.

  Bug 200449545

  Change-Id: Id2ac0d940c7881c7b0cf396413273c0f329a1a1f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1834901
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
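  A hedged sketch of the WAR init ordering described above; the three HAL
  names come from the message, while the wrapper name, the NULL checks and
  the exact call site of apply_pdb_cache_war() are assumptions:

    static int example_init_pdb_cache_war(struct gk20a *g)
    {
            int err;

            /* Set up the state/memory the WAR needs (pdb_cache_war_mem). */
            if (g->ops.fifo.init_pdb_cache_war != NULL) {
                    err = g->ops.fifo.init_pdb_cache_war(g);
                    if (err != 0) {
                            return err;
                    }
            }

            /* Apply the FB side of the WAR before the rest of MM comes up. */
            if (g->ops.fb.apply_pdb_cache_war != NULL) {
                    return g->ops.fb.apply_pdb_cache_war(g);
            }

            return 0;
    }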
* gpu: nvgpu: ACR code refactor (Mahantesh Kumbar, 2018-09-24)

  - Created struct nvgpu_acr to hold ACR module related members within a
    single struct; these were previously spread across multiple structs
    like nvgpu_pmu, pmu_ops & gk20a.
  - Created struct hs_flcn_bl to hold ACR HS bootloader specific members.
  - Created struct hs_acr to hold ACR ucode specific members like
    bootloader data (using struct hs_flcn_bl), ACR type & info on the
    falcon the ACR ucode needs to run on.
  - Created ACR ops under struct nvgpu_acr to perform ACR specific
    operations; previously ACR ops were part of PMU, which forced a
    dependency on PMU even when ACR was not executing on the PMU.
  - Added an acr_remove_support op which is called as part of
    gk20a_remove_support(); earlier ACR cleanup was part of the PMU
    remove_support method.
  - Created defines for ACR types.
  - The acr_sw_init() op sets ACR properties statically for the chip
    currently in execution & assigns ops to the functions needed for that
    chip.
  - acr_sw_init() executes early, since nvgpu_init_mm_support calls an ACR
    function to alloc blob space.
  - Created ops to fill the bootloader descriptor & to patch WPR info into
    the ACR ucode, based on the interfaces used to bootstrap the ACR
    ucode.
  - Created gm20b_bootstrap_hs_acr(), which is now the common HAL for all
    chips to bootstrap ACR; earlier there were 3 different functions for
    gm20b/gp10b, gv11b & all dGPUs, based on the interface needed.
  - Removed duplicate code for the falcon engine wherever common falcon
    code can be used.
  - Removed ACR code dependent on PMU & made changes to use it from
    nvgpu_acr.

  JIRA NVGPU-1148

  Change-Id: I39951d2fc9a0bb7ee6057e0fa06da78045d47590
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1813231
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
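  A hedged sketch of how the structs described above could fit together;
  the struct and op names come from the message, but every field name,
  type and signature below is illustrative:

    struct gk20a;
    struct nvgpu_mem;

    struct hs_flcn_bl {
            void *hs_bl_bin;              /* HS bootloader image (illustrative) */
            u32 hs_bl_size;
    };

    struct hs_acr {
            u32 acr_type;                 /* one of the ACR type defines */
            struct hs_flcn_bl acr_hs_bl;  /* bootloader data */
            u32 acr_flcn_id;              /* falcon the ACR ucode runs on */
    };

    struct nvgpu_acr {
            struct hs_acr acr;

            /* ops assigned statically per chip by acr_sw_init() */
            int (*bootstrap_hs_acr)(struct gk20a *g, struct nvgpu_acr *acr);
            int (*alloc_blob_space)(struct gk20a *g, size_t size,
                                    struct nvgpu_mem *mem);
            void (*remove_support)(struct nvgpu_acr *acr);
    };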
* nvgpu: common: MISRA 10.1 boolean fixes (Amulya, 2018-09-19)

  Fix violations where a variable of type non-boolean is used as a
  boolean in gpu/nvgpu/common.

  JIRA NVGPU-646

  Change-Id: I9773d863b715f83ae1772b75d5373f77244bc8ca
  Signed-off-by: Amulya <Amurthyreddy@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1807132
  GVS: Gerrit_Virtual_Submit
  Tested-by: Amulya Murthyreddy <amurthyreddy@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
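  An illustrative example of this class of fix (written from the
  description, not taken from the patch): a non-boolean value gets an
  explicit comparison instead of being used directly as a condition.

    static bool unit_is_enabled(u32 flags)
    {
            /* Before: "if (flags)" used a non-boolean as a boolean. */
            if (flags != 0U) {
                    return true;
            }
            return false;
    }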
* gpu: nvgpu: Fix MISRA 21.2 violations (nvgpu_mem.c, mm.c) (Alex Waterman, 2018-09-12)

  MISRA 21.2 states that we may not use reserved identifiers; since all
  identifiers beginning with '_' are reserved by libc, the usage of '__'
  as a prefix is disallowed. Handle the 21.2 fixes for nvgpu_mem.c and
  mm.c; this deletes the '__' prefixes and slightly renames the
  __nvgpu_aperture_mask() function since there's a coherent version and a
  general version.

  Change-Id: Iee871ad90db3f2622f9099bd9992eb994e0fbf34
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1813623
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
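  An illustrative rename in the spirit of this fix; the new name and the
  body below are placeholders, not the names the patch actually chose:

    /* Before: static u32 __nvgpu_aperture_mask(...) -- '__' is reserved. */
    /* After: the prefix is dropped and the name marks which variant it is. */
    static u32 nvgpu_aperture_mask_example(bool is_vidmem, u32 sysmem_mask,
                                           u32 vidmem_mask)
    {
            return is_vidmem ? vidmem_mask : sysmem_mask;
    }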
* gpu: nvgpu: Move programming of debug page to FB (Terje Bergstrom, 2018-09-10)

  Debug page was allocated and programmed to HUB MMU in GR code. This
  introduces a dependency from GR to FB and is anyway the wrong place.
  Move the code to allocate memory to generic MM code, and the code to
  program the addresses to FB.

  Change-Id: Ib6d3c96efde6794cf5e8cd4c908525c85b57c233
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1801423
  Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Fix mutex MISRA 17.7 violations (Nicolas Benech, 2018-09-05)

  MISRA Rule-17.7 requires the return value of all functions to be used.
  The fix is either to use the return value or change the function to
  return void. This patch contains fixes for calls to nvgpu_mutex_init
  and improves related error handling.

  JIRA NVGPU-677

  Change-Id: I609fa138520cc7ccfdd5aa0e7fd28c8ca0b3a21c
  Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1805598
  Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
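  A hedged sketch of the pattern this fix enforces: the int returned by
  nvgpu_mutex_init() is checked and already-initialized locks are unwound
  on error. The struct and field names here are illustrative, not the
  actual mm.c locks:

    struct example_locks {
            struct nvgpu_mutex tlb_lock;
            struct nvgpu_mutex priv_lock;
    };

    static int example_init_locks(struct example_locks *l)
    {
            int err;

            err = nvgpu_mutex_init(&l->tlb_lock);
            if (err != 0) {
                    return err; /* propagate instead of ignoring the result */
            }

            err = nvgpu_mutex_init(&l->priv_lock);
            if (err != 0) {
                    nvgpu_mutex_destroy(&l->tlb_lock);
                    return err;
            }

            return 0;
    }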
* gpu: nvgpu: Changed enum gmmu_pgsz_gk20a into macros (Amulya, 2018-08-22)

  Changed the enum gmmu_pgsz_gk20a into macros and changed all the
  instances of it. The enum gmmu_pgsz_gk20a was being used in for loops,
  where it was compared with an integer. This violates MISRA rule 10.4,
  which only allows arithmetic operations on operands of the same
  essential type category. Changing this enum into macros fixes the
  violation.

  JIRA NVGPU-993

  Change-Id: I6f18b08bc7548093d99e8229378415bcdec749e3
  Signed-off-by: Amulya <Amurthyreddy@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1795593
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
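  An illustrative before/after for the loop pattern mentioned above; the
  macro names and values are written from the description, not copied
  from the patch:

    #define GMMU_PAGE_SIZE_SMALL  0U
    #define GMMU_PAGE_SIZE_BIG    1U
    #define GMMU_PAGE_SIZE_KERNEL 2U
    #define GMMU_NR_PAGE_SIZES    3U

    static void for_each_page_size(void (*visit)(u32 idx))
    {
            u32 i;

            /*
             * Both loop operands are now essentially unsigned, which is
             * what MISRA 10.4 wants; the old code compared an enum
             * against an integer.
             */
            for (i = GMMU_PAGE_SIZE_SMALL; i < GMMU_NR_PAGE_SIZES; i++) {
                    visit(i);
            }
    }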
* gpu: nvgpu: Force the PMU VM to use 128K large pages (gm20b) (Alex Waterman, 2018-08-21)

  Add a WAR for gm20b that allows us to force the PMU VM to use 128K
  large pages. For some reason setting the small page size to 64K breaks
  the PMU boot. Unclear why. Bug needs to be filed and fixed. Once fixed
  this patch can and should be reverted.

  Bug 200105199

  Change-Id: I2b4c9e214e2a6dff33bea18bd2359c33364ba03f
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1782769
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: common: mm: Fix MISRA 15.6 violations (Srirangan, 2018-08-17)

  MISRA Rule-15.6 requires that all if-else blocks be enclosed in braces,
  including single statement blocks. Fix errors due to single statement
  if blocks without braces, introducing the braces.

  JIRA NVGPU-671

  Change-Id: Ieeecf719dca9acc1a116d2893637bf770caf4f5b
  Signed-off-by: Srirangan <smadhavan@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1794241
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Adeel Raza <araza@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
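  An illustrative example of the rule (placeholder function, not code
  from the patch): the single-statement if body gains braces.

    static int clamp_to_limit(int value, int limit)
    {
            /* Before the fix: "if (value > limit) value = limit;" */
            if (value > limit) {
                    value = limit;
            }

            return value;
    }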
* gpu: nvgpu: Use NVLINK config instead of has_physical_mode (Alex Waterman, 2018-08-15)

  This flag - has_physical_mode - doesn't seem to do much other than
  force the PTE/PDE and inst block addresses to be physical instead of
  potentially IOMMUed. There is a reason to do this on Volta (nvlink not
  being IOMMU'able being the primary reason) but this flag seems too
  general. The flag was being enabled on all native platforms.

  The problem is that some page tables (the Maxwell small page
  directories) could be larger than 4KB, which meant that the allocation
  used for them could be potentially discontiguous. Discontiguous page
  directories are obviously incorrect.

  This patch deletes the has_physical_mode flag and instead replaces the
  places where it's checked with a check for nvlink being enabled. Since
  we _do_ want to program physical PDEs and PTEs for NVLINK devices
  (regardless of IOMMU status they always access memory by physical
  address) we need a check for NVLINK state.

  Bug 200414723

  Change-Id: I09ad86b12d8aabcf9648a22503f4747fd63514dd
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1792163
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
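  A hedged sketch of the replacement check described above;
  NVGPU_SUPPORT_NVLINK, nvgpu_is_enabled() and the address helpers are
  assumed names, and the wrapper function is illustrative:

    static u64 pd_gpu_addr(struct gk20a *g, struct nvgpu_mem *pd_mem)
    {
            /*
             * NVLINK devices access memory by physical address regardless
             * of IOMMU state, so bypass IOMMU translation only for them.
             */
            if (nvgpu_is_enabled(g, NVGPU_SUPPORT_NVLINK)) {
                    return nvgpu_mem_get_phys_addr(g, pd_mem);
            }

            return nvgpu_mem_get_addr(g, pd_mem);
    }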
* gpu: nvgpu: disable fb fault buffer in prepare poweroff (Aparna Das, 2018-07-20)

  FB fault buffer is enabled on finalize poweron. Disable the buffer in
  prepare poweroff. This also eliminates the need to disable the buffer
  in fault info mem destroy, which otherwise accesses GPU registers after
  these are locked in prepare poweroff.

  Bug 200427479

  Change-Id: I1ca3e6ed4417847731c09b887134f215a2ba331c
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1776387
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Simplify FB hub intr enable (Terje Bergstrom, 2018-07-11)

  Hard code flags for enabling and disabling FB hub interrupts.

  JIRA NVGPU-714

  Change-Id: I806ef443cb9e27e221d407d633ca91d8fb40d075
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1769853
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: fix fb flush issue (seshendra Gadagottu, 2018-06-25)

  membar.sys does synchronization with the whole system (GPU and CPU),
  while membar.gl does synchronization within the GPU. In gv11b, fb flush
  is generating membar.gl instead of membar.sys, which is an issue. To
  fix this issue, the following WAR is used:

  1. Use the bar1 engine id and bind it to a particular pdb.
  2. Then, instead of a fb_flush, issue a tlb invalidate of the bar1 pdb.

  Now allocation of the vm for the bar1 instance block and bar1 binding
  is done without a check for bar1 support. Only bar1 register mapping is
  done based on bar1 support being enabled.

  Bug 2112790

  Change-Id: I76f43f1178a68f10823d48bc9da55d2bd686dd52
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1750257
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Update __get_pte_size() to check IOMMU-ability (Alex Waterman, 2018-06-22)

  When generating the PTE size for a given mapping the code must consider
  whether the GPU is being IOMMU'ed. The presence and usage of an IOMMU
  implies the buffers will appear contiguous to the GPU. Without an IOMMU
  we cannot assume that and therefore must use small pages regardless of
  the size of the buffer to be mapped.

  Bug 2011640

  Change-Id: I6c64cbcd8844a7ed855116754b795d949a3003af
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1697891
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
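  A hedged sketch of the decision this commit describes; the helper name,
  the nvgpu_iommuable() query, the page-size constants and the alignment
  check are written from the description rather than copied from
  __get_pte_size():

    static u32 choose_pte_size(struct gk20a *g, u64 base, u64 size)
    {
            const u64 big_page = 64ULL * 1024ULL; /* illustrative size */

            /*
             * No IOMMU: the buffer may be physically discontiguous, so
             * small pages must be used no matter how large the mapping.
             */
            if (!nvgpu_iommuable(g)) {
                    return GMMU_PAGE_SIZE_SMALL;
            }

            /* IOMMU present: big pages work for suitably sized/aligned maps. */
            if (size >= big_page && (base & (big_page - 1ULL)) == 0ULL) {
                    return GMMU_PAGE_SIZE_BIG;
            }

            return GMMU_PAGE_SIZE_SMALL;
    }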
* gpu: nvgpu: set low_hole to 64K for bar1 vm (Sourab Gupta, 2018-01-04)

  The patch sets low_hole value to 64K for bar1 vm to align to potential
  64KB native page size.

  JIRA NVGPU-454

  Change-Id: I994dfd6824d3a2e8a09433798bb101af88ecb5ca
  Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1617173
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: remove cde support (Seema Khowala, 2017-12-26)

  Change-Id: I04df795b20413a2d07a252d77b3eba853890fcae
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1624087
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: deprecate TSG/CHANNEL_SET_PRIORITY IOCTLs (Deepak Nibade, 2017-11-15)

  TSG/CHANNEL_SET_PRIORITY IOCTLs are deprecated and user space should be
  using a combination of timeslice and interleave levels to decide the
  priority. Hence remove the IOCTLs and all corresponding APIs.

  Jira NVGPU-393

  Change-Id: I7cf0785689269536eca0c278c774b0e9e74f8c2f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1598581
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Introduce queries for big page sizes (Terje Bergstrom, 2017-11-08)

  Introduce query functions for default big page size and available big
  page sizes. Move initialization of GPU characteristics big page sizes
  to the GPU characteristics query function.

  JIRA NVGPU-259

  Change-Id: Ie66cc2fbfcd88205593056f8d5010ac2539c8bc2
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1593685
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
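  A hedged sketch of the kind of queries described above: one helper
  reports the default big page size and one reports the available sizes
  as a bitmask. The HAL names and signatures used here are assumptions:

    static u32 query_default_big_page_size(struct gk20a *g)
    {
            /* Delegate to a per-chip HAL; 0 means "no big pages". */
            return (g->ops.mm.get_default_big_page_size != NULL) ?
                    g->ops.mm.get_default_big_page_size() : 0U;
    }

    static u32 query_available_big_page_sizes(struct gk20a *g)
    {
            /* Bitmask of supported sizes, e.g. 64K | 128K. */
            return (g->ops.mm.get_big_page_sizes != NULL) ?
                    g->ops.mm.get_big_page_sizes() : 0U;
    }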
* gpu: nvgpu: drop user callback support in CE (Konsta Holtta, 2017-11-07)

  Simplify the copyengine code by deleting support for the
  ce_event_callback feature that has never been used. Similarly, create a
  channel without the finish callback to get rid of that Linux
  dependency, and delete the finish callback function as it now serves no
  purpose. Delete also the submitted_seq_number and completed_seq_number
  fields that are only written to.

  Jira NVGPU-259

  Change-Id: I02d15bdcb546f4dd8895a6bfb5130caf88a104e2
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1589320
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Silence extra mm debug messages (Terje Bergstrom, 2017-10-26)

  common/mm/mm.c uses nvgpu_info() to log debug events. Replace that with
  nvgpu_dbg_info() to silence the messages.

  Change-Id: Iaa5b8192287e8392a32ceff2216faf12fd6d09c3
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1585440
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Cleanup generic MM code in gk20a/mm_gk20a.c (Alex Waterman, 2017-10-24)

  Move much of the remaining generic MM code to a new common location:
  common/mm/mm.c. Also add a corresponding <nvgpu/mm.h> header. This
  mostly consists of init and cleanup code to handle the common MM data
  structures like the VIDMEM code, address spaces for various engines,
  etc.

  A few more in-depth changes were made as well.

  1. alloc_inst_block() has been added to the MM HAL. This used to be
     defined directly in the gk20a code but it used a register. As a
     result, if this register hypothetically changes in the future, it
     would need to become a HAL anyway. This patch preempts that and for
     now just defines all HALs to use the gk20a version.

  2. Rename as much as possible: global functions are, for the most part,
     prepended with nvgpu (there are a few exceptions which I have yet to
     decide what to do with). Functions that are static are renamed to be
     as consistent with their functionality as possible since in some
     cases function effect and function name have diverged.

  JIRA NVGPU-30

  Change-Id: Ic948f1ecc2f7976eba4bb7169a44b7226bb7c0b5
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1574499
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
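  A hedged sketch of the HAL wiring described in item 1; the init helper
  name is illustrative, and the assumption is that each chip's HAL setup
  simply points the op at the existing gk20a implementation:

    static void example_init_mm_hal(struct gpu_ops *gops)
    {
            /* Per the commit, every chip currently reuses the gk20a version. */
            gops->mm.alloc_inst_block = gk20a_alloc_inst_block;
    }

    /* Common MM code would then call it through the HAL, e.g.:
     *   err = g->ops.mm.alloc_inst_block(g, &inst_block);
     */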