path: root/drivers/gpu/nvgpu/gm20b/mm_gm20b.c
Commit log (message, author, date), newest first:
* gpu: nvgpu: Wait for BAR1 bind (Terje Bergstrom, 2016-04-15)

  Wait for BAR1 bind to complete before continuing. The register to wait on
  exists from Maxwell onwards.

  Change-Id: Ie3736033fdb748c5da8d7a6085ad6d63acaf41f5
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1123941
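As an illustration of the wait this commit describes, here is a minimal poll-with-timeout sketch. The status accessor, bit definition and retry budget are placeholders, not the actual gm20b BAR1 bind register interface.

    #include <linux/types.h>
    #include <linux/bitops.h>
    #include <linux/delay.h>
    #include <linux/errno.h>

    struct gk20a;

    /* Placeholders: not the real gm20b register accessors. */
    #define BAR1_BIND_PENDING_BIT  BIT(0)
    extern u32 read_bar1_bind_status(struct gk20a *g);

    static int wait_for_bar1_bind(struct gk20a *g)
    {
    	int retries = 1000;

    	/* Poll until the hardware reports the bind as no longer pending. */
    	while (read_bar1_bind_status(g) & BAR1_BIND_PENDING_BIT) {
    		if (--retries < 0)
    			return -ETIMEDOUT;
    		udelay(5);
    	}
    	return 0;
    }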
* gpu: nvgpu: Use device instead of platform_device (Terje Bergstrom, 2016-04-08)

  Use struct device instead of struct platform_device wherever possible.
  This allows adding other bus types later.

  Change-Id: I1657287a68d85a542cdbdd8a00d1902c3d6e00ed
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1120466
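A minimal sketch of the refactoring pattern (not the driver's actual code): functions that previously took a struct platform_device now take a plain struct device and recover their driver data with dev_get_drvdata(), so other bus types can be added later. The my_gpu struct and function names are hypothetical.

    #include <linux/device.h>
    #include <linux/errno.h>
    #include <linux/platform_device.h>

    struct my_gpu { int id; };

    /* Before: usable only with platform-bus devices. */
    static int query_gpu_old(struct platform_device *pdev)
    {
    	struct my_gpu *g = platform_get_drvdata(pdev);

    	return g ? g->id : -ENODEV;
    }

    /* After: works for any bus type whose device carries drvdata. */
    static int query_gpu(struct device *dev)
    {
    	struct my_gpu *g = dev_get_drvdata(dev);

    	return g ? g->id : -ENODEV;
    }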
* gpu: nvgpu: Replace CONFIG_PM_RUNTIME with CONFIG_PM (Sumit Singh, 2016-03-15)

  After commit b2b49ccbdd54 ("PM: Kconfig: Set PM_RUNTIME if PM_SLEEP is
  selected"), PM_RUNTIME is always set if PM is set, so #ifdef blocks
  depending on CONFIG_PM_RUNTIME may now be changed to depend on CONFIG_PM.
  Replace CONFIG_PM_RUNTIME with CONFIG_PM everywhere under drivers/gpu/nvgpu/.

  JIRA TPM-704

  Change-Id: I23965838ff6ec77829076cd834e87641fb68e268
  Signed-off-by: Sumit Singh <sumsingh@nvidia.com>
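The change itself is mechanical; a hedged before/after illustration follows (the callback names and bodies are placeholders, not driver code):

    #include <linux/device.h>

    /* Before: compiled out unless CONFIG_PM_RUNTIME was set. */
    #ifdef CONFIG_PM_RUNTIME
    static int my_runtime_suspend_old(struct device *dev)
    {
    	return 0;	/* placeholder body */
    }
    #endif

    /* After: CONFIG_PM is enough, since PM_RUNTIME is always set when PM is. */
    #ifdef CONFIG_PM
    static int my_runtime_suspend(struct device *dev)
    {
    	return 0;	/* placeholder body */
    }
    #endif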
* gpu: nvgpu: abstract set mmu debug mode (Richard Zhao, 2015-11-23)

  Add a new operation g->ops.mm.set_debug_mode and let other places that set
  debug mode call this callback. This prepares for adding a vgpu set mmu
  debug mode hook.

  JIRA VFND-1005
  Bug 1594604

  Change-Id: I1d227a0c0f96adb0035ae16ae1f4fbfa739bf0a7
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/833497
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
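This follows the driver's pattern of routing chip-specific behaviour through a per-chip ops table, so a virtualized build can simply install a different hook. A self-contained illustration of that pattern with made-up names (it is not the real gpu_ops layout):

    #include <stdbool.h>
    #include <stdio.h>

    struct my_gpu;

    struct my_mm_ops {
    	void (*set_debug_mode)(struct my_gpu *g, bool enable);
    };

    struct my_gpu {
    	struct my_mm_ops mm;
    };

    static void native_set_debug_mode(struct my_gpu *g, bool enable)
    {
    	(void)g;
    	printf("native: mmu debug mode %s\n", enable ? "on" : "off");
    }

    static void vgpu_set_debug_mode(struct my_gpu *g, bool enable)
    {
    	(void)g;
    	printf("vgpu: forward debug-mode request (%d)\n", enable);
    }

    int main(void)
    {
    	struct my_gpu g = { .mm = { .set_debug_mode = native_set_debug_mode } };

    	/* Callers never care which implementation sits behind the pointer. */
    	g.mm.set_debug_mode(&g, true);

    	/* A virtualized build installs a different hook. */
    	g.mm.set_debug_mode = vgpu_set_debug_mode;
    	g.mm.set_debug_mode(&g, false);
    	return 0;
    }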
* gpu: nvgpu: Do not use G_ELPG_FLUSH (Terje Bergstrom, 2015-11-10)

  G_ELPG_FLUSH is protected in some chips. Use L2 flush operations instead.

  Bug 1698618

  Change-Id: I984a8ace8bcd0ad2d4a4e2d63af75a342bdeb75a
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/828656
  (cherry picked from commit ba9075fa43975112a221d37d246f0b8f5af40fab)
  Reviewed-on: http://git-master/r/829415
* gpu: nvgpu: add platform specific get_iova_addr() (Deepak Nibade, 2015-04-04)

  Add a platform-specific API pointer (*get_iova_addr)() which can be used
  to get the IOVA/physical address from a given scatterlist and flags. Use
  this API via g->ops.mm.get_iova_addr() instead of calling
  gk20a_mm_iova_addr() directly, which makes the lookup platform specific.

  Bug 1605653

  Change-Id: I798763db1501bd0b16e84daab68f6093a83caac2
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/713089
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
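A hedged sketch of the idea: each platform installs its own hook that turns a scatterlist entry into the address the GPU should use, either the SMMU-mapped DMA address or the raw physical address. Only sg_dma_address() and sg_phys() below are real kernel APIs; the ops struct and function names are illustrative.

    #include <linux/scatterlist.h>
    #include <linux/types.h>

    struct my_mm_ops {
    	u64 (*get_iova_addr)(struct scatterlist *sgl, u32 flags);
    };

    /* Platform with an SMMU in front of the GPU: use the DMA (IOVA) address. */
    static u64 iova_addr_smmu(struct scatterlist *sgl, u32 flags)
    {
    	return (u64)sg_dma_address(sgl);
    }

    /* Platform without SMMU translation: hand the GPU the physical address. */
    static u64 iova_addr_physical(struct scatterlist *sgl, u32 flags)
    {
    	return (u64)sg_phys(sgl);
    }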
* gpu: nvgpu: setup chip specific mm hw init (Seshendra Gadagottu, 2015-04-04)

  Add support for setting up mm hw init per SoC.

  Bug 1587825

  Change-Id: Ie5c5e49a767cfb14e3dbbb6902349284cd3dca95
  Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: http://git-master/r/681784
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Refactor page mapping code (Terje Bergstrom, 2015-04-04)

  Always pass the page directory structure to mm functions instead of
  pointers to its members. Also split update_gmmu_ptes_locked() into smaller
  functions, and turn the hard-coded MMU levels (PDE, PTE) into run-time
  parameters.

  Change-Id: I315ef7aebbea1e61156705361f2e2a63b5fb7bf1
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/672485
  Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: Unify PDE & PTE structs (Terje Bergstrom, 2015-04-04)

  Introduce a new struct gk20a_mm_entry. Allocate and store PDE and PTE
  arrays using the same structure, and pass a pointer to this struct between
  functions in the memory code whenever possible.

  Change-Id: Ia4a2a6abdac9ab7ba522dafbf73fc3a3d5355c5f
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/696414
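A rough sketch of what such a unified descriptor can look like; the field names below are illustrative guesses, not the actual struct gk20a_mm_entry definition.

    #include <linux/scatterlist.h>
    #include <linux/types.h>

    /* One descriptor type for any level of the GMMU page-table tree. */
    struct my_mm_entry {
    	void		*cpu_va;	/* CPU mapping of the backing pages */
    	struct sg_table	*sgt;		/* DMA view of the same memory */
    	size_t		 size;		/* bytes allocated for this level */
    	int		 pgsz_idx;	/* page-size index this level serves */
    };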
* gpu: nvgpu: TLB invalidate after map/unmap (Terje Bergstrom, 2015-04-04)

  Always invalidate the TLB after mapping or unmapping, and remove the
  delayed TLB invalidate.

  Change-Id: I6df3c5c1fcca59f0f9e3f911168cb2f913c42815
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/696413
  Reviewed-by: Automatic_Commit_Validation_User
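Illustrative structure of the change, with all names as placeholders: the map path finishes with an immediate invalidate rather than marking the TLB dirty for a later submit to flush.

    struct my_vm;

    /* Placeholders standing in for the real PTE-update and invalidate helpers. */
    extern int  my_update_gmmu_ptes(struct my_vm *vm);
    extern void my_tlb_invalidate(struct my_vm *vm);

    static int my_map(struct my_vm *vm)
    {
    	int err = my_update_gmmu_ptes(vm);

    	if (err)
    		return err;

    	/* Previously: set a tlb_dirty flag and invalidate lazily at submit. */
    	my_tlb_invalidate(vm);
    	return 0;
    }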
* gpu: nvgpu: make larger address space work (Alex Waterman, 2015-04-04)

  Implement several fixes for allowing the GVA address space to grow larger
  than 32GB, and increase the address space to 128GB.

  o Implement dynamic allocation of PDE backing pages. The memory to store
    the PDE entries was hard coded to 1 page. Now the number of pages
    necessary is computed dynamically based on the size of the address space
    and the size of large pages.
  o Fix an arithmetic problem in the gm20b sparse texture code that caused
    large address spaces to be truncated when sparse PDEs/PTEs were being
    filled in. This caused a kernel panic when freeing the address space,
    since a lot of the backing PTE memory was not allocated.
  o Change the address space split for large and small pages. Small pages
    now occupy the bottom 16GB of the address space. Large pages are used
    for the rest of the address space. Now, with a 128GB address space,
    there are 112GB of large page GVA available.

  This patch exists to allow large (16GB) sparse textures to be allocated
  without running into lack-of-memory issues and kernel panics.

  Bug 1574267

  Change-Id: I7c59ee54bd573dfc53b58c346156df37a85dfc22
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/671204
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
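To make the first bullet concrete, a small self-contained calculation under assumed parameters (8-byte PDE entries, 4 KiB backing pages, one PDE covering 128 MiB of GVA; the real per-PDE coverage depends on the chip's large page size). The point is only that the backing allocation is derived from the address-space size rather than being fixed at one page.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
    	const uint64_t gva_size     = 128ULL << 30;  /* 128 GiB address space */
    	const uint64_t pde_coverage = 128ULL << 20;  /* assumed GVA per PDE   */
    	const uint64_t entry_size   = 8;             /* assumed bytes per PDE */
    	const uint64_t page_size    = 4096;

    	uint64_t num_pdes = gva_size / pde_coverage;
    	uint64_t backing  = (num_pdes * entry_size + page_size - 1) / page_size;

    	printf("%llu PDEs -> %llu backing pages (was hard coded to 1)\n",
    	       (unsigned long long)num_pdes, (unsigned long long)backing);
    	return 0;
    }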
* gpu: nvgpu: Remove gk20a sparse texture & PTE freeing (Terje Bergstrom, 2015-03-18)

  Remove support for gk20a sparse textures. We're using the implementation
  from user space, so the gk20a code is never invoked. Also remove the
  ref_cnt for PTEs, so we never free PTEs when unmapping pages, but only at
  VM delete time.

  Change-Id: I04d7d43d9bff23ee46fd0570ad189faece35dd14
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/663294
* gpu: nvgpu: fix sparse warnings (Deepak Nibade, 2015-03-18)

  Fix the sparse warnings below:

  kernel/drivers/gpu/nvgpu/gm20b/mm_gm20b.c:283:5: warning: symbol
  'gm20b_mm_get_big_page_sizes' was not declared. Should it be static?
  kernel/drivers/gpu/nvgpu/gm20b/clk_gm20b.c:1055:12: warning: symbol
  'gm20b_clk_get' was not declared. Should it be static?

  Bug 200032218

  Change-Id: Id199b4b1853b3c933c91509fd550c7b5538cff29
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/660133
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>
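The usual fix for this class of sparse warning, sketched with an illustrative name and body (the real gm20b_mm_get_big_page_sizes lives in mm_gm20b.c):

    /*
     * Before: the function had no prototype in any header, so sparse warned
     * that the symbol should be static. After: internal linkage makes the
     * file-local intent explicit.
     */
    static unsigned int my_get_big_page_sizes(void)
    {
    	return (1u << 16) | (1u << 17);	/* e.g. 64 KiB and 128 KiB */
    }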
* gpu: nvgpu: Physical page bits to be per chip (Terje Bergstrom, 2015-03-18)

  Retrieve the number of physical page bits based on the chip.

  Bug 1567274

  Change-Id: I5a0f6a66be37f2cf720d66b5bdb2b704cd992234
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/601700
* gpu: nvgpu: GPU characteristics additions (Sami Kiminki, 2015-03-18)

  Add the following info to the GPU characteristics: available big page
  sizes, support indicators for sync fence fds and cycle stats, gpc mask,
  SM version, SM SPA version and warp count, and IOCTL interface levels.
  Also, add a new IOCTL to fetch TPC masks.

  Bug 1551769
  Bug 1558186

  Change-Id: I8a47d882645f29c7bf0c8f74334ebf47240e41de
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: http://git-master/r/562904
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: fix sparse warnings (Deepak Nibade, 2015-03-18)

  Fix the sparse warnings below:

  warning: Using plain integer as NULL pointer
  warning: symbol <variable/function> was not declared. Should it be static?
  warning: Initializer entry defined twice

  Also, remove dead functions.

  Bug 1573254

  Change-Id: I29d71ecc01c841233cf6b26c9088ca8874773469
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/593363
  Reviewed-by: Amit Sharma (SW-TEGRA) <amisharma@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>
* gpu: nvgpu: Implement 64k large page support (Terje Bergstrom, 2015-03-18)

  Implement support for a 64kB large page size. Add an API to create an
  address space via IOCTL so that we can accept flags, and assign one flag
  for enabling the 64kB large page size. Also add APIs to set a per-context
  large page size. This is possible only on Maxwell, so return an error if
  the caller tries to set the large page size on Kepler. The default large
  page size is still 128kB.

  Change-Id: I20b51c8f6d4a984acae8411ace3de9000c78e82f
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Common VM initializer (Terje Bergstrom, 2015-03-18)

  Merge the initialization code from gk20a_init_system_vm(),
  gk20a_init_bar1_vm() and gk20a_vm_alloc_share() into gk20a_init_vm().
  Remove redundant page size data, and move the page size fields to be VM
  specific.

  Bug 1558739
  Bug 1560370

  Change-Id: I4557d9e04d65ccb48fe1f2b116dd1bfa74cae98e
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: gm20b: Fix build warnings (Terje Bergstrom, 2015-03-18)

  Fix build warnings by removing the unused variables, functions and
  duplicated code. Enable -Werror to prevent new build warnings.

  Change-Id: Ifd73344a6e12497e6dca595ac7a6edd7ca698f88
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/497374
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Juha Tukkinen <jtukkinen@nvidia.com>
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Use pgsz_idx instead of page_size (Terje Bergstrom, 2015-03-18)

  Alloc space writes the page size to a field that requires pgsz_idx, which
  can cause corruption in internal kernel structures. Clear_sparse treated a
  parameter as a page size instead of an index.

  Bug 1549451

  Change-Id: I73ce17b99aae6865056facce72d2ab9ca8b3f81d
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/495692
* gpu: nvgpu: Clear PTE ref after freeing (Terje Bergstrom, 2015-03-18)

  When clearing sparse buffers, pte->ref must be cleared once the PTE is
  freed.

  Bug 1549451

  Change-Id: Ie7d3e438ef2c43cbcf893709ae50a67823bf0c9c
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/494670
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Shridhar Rasal <srasal@nvidia.com>
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
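Generic illustration of the fix (names are placeholders): once the backing allocation behind a sparse range is freed, the reference held in the PTE bookkeeping must be cleared so a later unmap or VM teardown does not free it again.

    struct my_pte {
    	void *ref;	/* backing memory for this PTE range, if any */
    };

    extern void my_free_pte_backing(void *ref);	/* placeholder helper */

    static void my_clear_sparse(struct my_pte *pte)
    {
    	if (pte->ref) {
    		my_free_pte_backing(pte->ref);
    		pte->ref = NULL;	/* conceptually, the line the commit adds */
    	}
    }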
* gpu: nvgpu: gm20b: Fix pm refs in VPR info fetch (Arto Merilainen, 2015-03-18)

  The code took a reference to gk20a by using gk20a_busy_noresume(). This
  function takes a pm runtime reference only to the GPU; however, the code
  dropped the reference by calling gk20a_idle(), which also drops the
  reference to the platform dependencies (host1x). This patch modifies
  gm20b_mm_mmu_vpr_info_fetch_wait() to drop only the GPU reference.

  Change-Id: Ied59381fa302452356768ed59e8ad9af18284e3d
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/482721
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
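A sketch of the balancing rule in plain runtime-PM terms; the real driver goes through gk20a_busy_noresume()/gk20a_idle(), which may also manage platform dependencies such as host1x, so this only shows the concept.

    #include <linux/device.h>
    #include <linux/pm_runtime.h>

    static void my_vpr_info_fetch(struct device *gpu_dev)
    {
    	/* Take a runtime-PM reference on the GPU only, without resuming it. */
    	pm_runtime_get_noresume(gpu_dev);

    	/* ... read VPR info if the GPU happens to be powered ... */

    	/*
    	 * Drop exactly what was taken. Releasing it through a helper that
    	 * also drops host1x would unbalance the platform-dependency count.
    	 */
    	pm_runtime_put_noidle(gpu_dev);
    }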
* gpu: nvgpu: support gk20a virtualization (Aingara Paramakuru, 2015-03-18)

  The nvgpu driver now supports using the Tegra graphics virtualization
  interfaces to support gk20a in a virtualized environment.

  Bug 1509608

  Change-Id: I6ede15ee7bf0b0ad8a13e8eb5f557c3516ead676
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/440122
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: gm20b: use gpc_mmu to check debug mode (Kevin Huang, 2015-03-18)

  Bug 1534793

  Change-Id: I8a4c35914b58dd13a7c10c668de9d4662d947d8c
  Signed-off-by: Kevin Huang <kevinh@nvidia.com>
  Reviewed-on: http://git-master/r/441377
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: clear sparse in space free (Kevin Huang, 2015-03-18)

  Gk20a unmaps the addresses bound to the dummy page to clear sparse. On
  gm20b, we need to free the allocated page table entries for sparse memory.

  Bug 1538384

  Change-Id: Ie2409ab016c29f42c5f7d97dd7287b093b47f9df
  Signed-off-by: Kevin Huang <kevinh@nvidia.com>
  Reviewed-on: http://git-master/r/448645
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* nvgpu: Host side changes to support HS mode (Supriya, 2015-03-18)

  GM20B changes in PMU boot sequence to support booting in HS mode and LS
  mode.

  Bug 1509680

  Change-Id: I2832eda0efe17dd5e3a8f11dd06e7d4da267be70
  Signed-off-by: Supriya <ssharatkumar@nvidia.com>
  Reviewed-on: http://git-master/r/423140
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Shridhar Rasal <srasal@nvidia.com>
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
* gpu: nvgpu: add support to Maxwell sparse texture (Kevin Huang, 2015-03-18)

  Bug 1442531

  Change-Id: Ie927cca905b2ea9811417e7a1fdfdf9d48f015e2
  Signed-off-by: Kevin Huang <kevinh@nvidia.com>