path: root/drivers/gpu/nvgpu/gk20a/mm_gk20a.c
Commit message (Author, Age)
* gpu: nvgpu: Implement compbits padding for mapping (Sami Kiminki, 2015-05-11)
  Implement NVGPU_AS_MAP_BUFFER_FLAGS_MAPPABLE_COMPBITS, which adds extra alignment to compbits allocation for safe compbits mapping.
  Bug 200077571
  Change-Id: I3a74ebb81412e4e1e69501debeb9ef4e2056ef1a Signed-off-by: Sami Kiminki <skiminki@nvidia.com> Reviewed-on: http://git-master/r/730763 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/740693 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: SMMU bypass (Terje Bergstrom, 2015-05-05)
  Improve GMMU mapping code to cope with discontiguous buffers. Add debugfs entry that allows bypassing SMMU and disabling big pages.
  Bug 1605769
  Change-Id: I14d32c62293a16ff8c7195377c75a85fa8061083 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/717503 Reviewed-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: http://git-master/r/737533 Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com> Tested-by: Alexander Van Brunt <avanbrunt@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: use 4K hole for pmu VM (Vijayakumar, 2015-05-05)
  Bug N/A. With a 128MB hole we run into PDE errors when a 64K big page size is used instead of 128K.
  Signed-off-by: Vijayakumar <vsubbu@nvidia.com> Change-Id: Id887b32484e2114a8707e7d534e6ebf5e108b83f Signed-off-by: Vijayakumar <vsubbu@nvidia.com> Reviewed-on: http://git-master/r/733497 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/737532 Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com> Tested-by: Alexander Van Brunt <avanbrunt@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: Record size of page table level (Terje Bergstrom, 2015-05-05)
  Record size of each page table level. The size of level 0 depends on size of the address space, and we generally do not support the whole address space.
  Change-Id: Iab47505af1a641e193d9e98a2246e522813f221a Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/729730 Reviewed-by: Automatic_Commit_Validation_User Reviewed-on: http://git-master/r/737531 Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com> Tested-by: Alexander Van Brunt <avanbrunt@nvidia.com>
* gpu: nvgpu: Use common allocator for patch (Terje Bergstrom, 2015-05-05)
  Reduce amount of duplicate code around memory allocation by using common helpers, and common data structure for storing results of allocations.
  Bug 1605769
  Change-Id: Idf51831e8be9cabe1ab9122b18317137fde6339f Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/721030 Reviewed-on: http://git-master/r/737530 Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com> Tested-by: Alexander Van Brunt <avanbrunt@nvidia.com>
* gpu: nvgpu: Align VA of compressible buffer (Terje Bergstrom, 2015-05-05)
  Ensure that the GPU VA for a buffer is aligned correctly if compression is enabled.
  Bug 1605769
  Change-Id: I12566ddd554da7cc9fb41dd553576c534ac96ba8 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/725767 Reviewed-on: http://git-master/r/737529 Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com> Tested-by: Alexander Van Brunt <avanbrunt@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: Free all page table levels (Terje Bergstrom, 2015-05-05)
  Convert the loop to free page tables into a recursive loop that goes through all levels.
  Change-Id: I3ab8f021bd8263f2f6dad29b5fbd0e6212c55a86 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/711393 Reviewed-on: http://git-master/r/737528 Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com> Tested-by: Alexander Van Brunt <avanbrunt@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User
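For illustration only, a minimal sketch of the recursive free pattern that commit describes; the struct layout and helper names below are assumptions, not the driver's actual definitions.

    #include <linux/slab.h>

    /* Hypothetical page-table entry: each level may point at an array of
     * child entries for the next level down, plus its own backing memory. */
    struct pt_entry {
        struct pt_entry *entries;   /* children, NULL at the last level */
        int num_entries;
        void *backing;              /* memory holding this table itself */
    };

    static void free_pt_level(struct pt_entry *e, int level, int max_level)
    {
        int i;

        if (level < max_level && e->entries) {
            for (i = 0; i < e->num_entries; i++)
                free_pt_level(&e->entries[i], level + 1, max_level);
            kfree(e->entries);
        }
        kfree(e->backing);
    }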
* gpu: nvgpu: Check alignment of fixed allocs (Terje Bergstrom, 2015-04-04)
  When mapping a buffer at a fixed address, ensure that the alignment of the buffer and the address are compatible. When freeing, retrieve the page size from the VA instead of choosing it again.
  Bug 1605769
  Change-Id: I4f73453996cd53a912b6a414caa41563cde28da7 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/725764
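A tiny sketch of the kind of compatibility check the commit describes, assuming the chosen page size must evenly divide both the fixed GPU VA and the mapping size; the helper name is hypothetical.

    #include <linux/errno.h>
    #include <linux/kernel.h>   /* IS_ALIGNED() */

    /* Reject a fixed-offset mapping whose VA or size does not line up
     * with the page size picked for the buffer. */
    static int check_fixed_alignment(u64 gpu_va, u64 size, u32 page_size)
    {
        if (!IS_ALIGNED(gpu_va, page_size) || !IS_ALIGNED(size, page_size))
            return -EINVAL;
        return 0;
    }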
* gpu: nvgpu: Per-SoC compressible page size (Terje Bergstrom, 2015-04-04)
  Define smallest compressible page size per SoC, and use that for determining if a compressible kind should be downgraded to uncompressed.
  Bug 1605769
  Change-Id: I7c9991ba0ae82fe533641f045e506c0b01a10d8b Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/724492
* gpu: nvgpu: Use common VM deinit (Terje Bergstrom, 2015-04-04)
  Change-Id: Ia9aa9cfaaad9e43820fc47f6620bf01c435dad23 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/718726
* gpu: nvgpu: Allow sparse buffers on small pages (Terje Bergstrom, 2015-04-04)
  Sparse buffers were allowed only with big pages. That restriction is not necessary, so remove it.
  Bug 1605769
  Change-Id: I92efc0efe80edccead47b47d33fd9a75c921ca9a Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/725763
* gpu: nvgpu: Use common allocator for context (Terje Bergstrom, 2015-04-04)
  Reduce amount of duplicate code around memory allocation by using common helpers, and common data structure for storing results of allocations.
  Bug 1605769
  Change-Id: I10c226e2377aa867a5cf11be61d08a9d67206b1d Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/720507
* gpu: nvgpu: Use sg_alloc_table_from_pages (Terje Bergstrom, 2015-04-04)
  Use sg_alloc_table_from_pages to create the sg_table for buffers allocated with NO_KERNEL_MAPPING. The old code always returned an sg_table with one chunk. sg_alloc_table_from_pages returns an sg_table which describes the actual physical chunks of the buffer.
  Bug 1605769
  Change-Id: I412c0151d830fa0f53dbbb08ba8cc9ebce6699e3 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/723696
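For reference, a hedged sketch of how the pages from a NO_KERNEL_MAPPING allocation can be turned into an sg_table that reflects the real physical chunks. The wrapper name is made up; sg_alloc_table_from_pages() itself is the stock kernel API.

    #include <linux/gfp.h>
    #include <linux/scatterlist.h>

    static int buf_pages_to_sgt(struct sg_table *sgt, struct page **pages,
                                unsigned int npages, unsigned long size)
    {
        /* Coalesces physically contiguous pages into as few sg entries
         * as possible instead of always producing a single chunk. */
        return sg_alloc_table_from_pages(sgt, pages, npages, 0, size,
                                         GFP_KERNEL);
    }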
* gpu: nvgpu: Fix error paths in init vm (Terje Bergstrom, 2015-04-04)
  Error paths called the wrong cleanup sections.
  Change-Id: I603af77bf8e3981c029bcf6d582882e51847f137 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/722949
* gpu: nvgpu: add platform specific get_iova_addr() (Deepak Nibade, 2015-04-04)
  Add a platform-specific API pointer (*get_iova_addr)() which can be used to get the iova/physical address from a given scatterlist and flags. Use this API with g->ops.mm.get_iova_addr() instead of calling the API gk20a_mm_iova_addr(), which makes it platform specific.
  Bug 1605653
  Change-Id: I798763db1501bd0b16e84daab68f6093a83caac2 Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: http://git-master/r/713089 GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
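A minimal sketch of the HAL-style hook this commit describes; the ops layout shown here is illustrative, not the driver's real structure.

    #include <linux/scatterlist.h>
    #include <linux/types.h>

    struct mm_ops_sketch {
        /* resolved per platform at probe time */
        u64 (*get_iova_addr)(struct scatterlist *sgl, u32 flags);
    };

    /* Callers then go through the hook, e.g.
     *   addr = g->ops.mm.get_iova_addr(sgt->sgl, flags);
     * rather than calling a chip-specific helper directly. */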
* gpu: nvgpu: Fix comptag index in trace (Terje Bergstrom, 2015-04-04)
  Instead of the comptag index we were dumping an offset in the buffer.
  Change-Id: Iaa07919c8d87009227556eacbcb6dcbd83954c7d Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/718597 Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: Use common allocator for compbit store (Terje Bergstrom, 2015-04-04)
  Reduce amount of duplicate code around memory allocation by using common helpers, and common data structure for storing results of allocations.
  Bug 1605769
  Change-Id: I7c1662b669ed8c86465254f6001e536141051ee5 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/720435
* gpu: nvgpu: Helper for no kernel mapping alloc (Terje Bergstrom, 2015-04-04)
  Reduce amount of duplicate code around memory allocation by introducing a variant of allocation helper that does not map the allocated buffer to kernel address space. NO_KERNEL_MAPPING allocations return a struct page **, so store the results of allocation in a new field of mem_desc.
  Bug 1605769
  Change-Id: Ib760b9e6d34b229b04d1fb4f3abf10648670fc69 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/721029
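For illustration, a hedged sketch using the 3.18-era DMA attrs API that such a helper would likely wrap; with DMA_ATTR_NO_KERNEL_MAPPING the returned cookie is not a CPU pointer (on that kernel it is effectively the struct page ** mentioned above). Names other than the DMA API calls are hypothetical.

    #include <linux/device.h>
    #include <linux/dma-attrs.h>
    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    static void *alloc_pages_no_kmap(struct device *dev, size_t size,
                                     dma_addr_t *iova)
    {
        DEFINE_DMA_ATTRS(attrs);

        /* Skip creating a kernel VA for the buffer; only the pages and
         * the IOVA are needed, which keeps vmalloc space untouched. */
        dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);
        return dma_alloc_attrs(dev, size, iova, GFP_KERNEL, &attrs);
    }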
* gpu: nvgpu: Implement common allocator and mem_desc (Terje Bergstrom, 2015-04-04)
  Introduce mem_desc, which holds all information needed for a buffer. Implement helper functions for allocation and freeing that use this data type.
  Change-Id: I82c88595d058d4fb8c5c5fbf19d13269e48e422f Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/712699
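A rough sketch of what such a container might carry; the field set below is a guess for illustration, not the driver's actual mem_desc definition.

    #include <linux/scatterlist.h>
    #include <linux/types.h>

    struct mem_desc_sketch {
        struct sg_table *sgt;   /* physical layout of the allocation   */
        void *cpu_va;           /* kernel mapping, NULL if not mapped  */
        struct page **pages;    /* backing pages for NO_KERNEL_MAPPING */
        u64 gpu_va;             /* GMMU address once mapped into a VM  */
        size_t size;
    };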
* gpu: nvgpu: add support for unmapped ptes (Seshendra Gadagottu, 2015-04-04)
  Add support for unmapped ptes during gmmu map.
  Bug 1587825
  Change-Id: I6e42ef58bae70ce29e5b82852f77057855ca9971 Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com> Reviewed-on: http://git-master/r/696507 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: support for dumping vpr/wpr info (Seshendra Gadagottu, 2015-04-04)
  Added support for dumping vpr/wpr info for gm20b. This info is dumped whenever gk20a_mm_fb_flush times out.
  Bug 200082817
  Change-Id: I21b0372d0e3f976a189c9c428c015165b715bf88 Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com> Reviewed-on: http://git-master/r/711439 (cherry picked from commit b69897d71c8f6119b49ceb8d3273cdb354178cc5) Reviewed-on: http://git-master/r/712675 GVS: Gerrit_Virtual_Submit Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
* Revert "gpu: nvgpu: cache cde compbits buf mappings"Konsta Holtta2015-04-04
| | | | | | | | | | This reverts commit 9968badd26490a9d399f526fc57a9defd161dd6c. The commit accidentally introduced some memory leaks. Change-Id: I00d8d4452a152a8a2fe2d90fb949cdfee0de4c69 Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: http://git-master/r/714288 Reviewed-by: Juha Tukkinen <jtukkinen@nvidia.com>
* gpu: nvgpu: fix sparse warnings of static declaration (Deepak Nibade, 2015-04-04)
  Fix the sparse warnings below by making these APIs static:
  kernel/drivers/gpu/nvgpu/gk20a/mm_gk20a.c:1795:5: warning: symbol 'update_gmmu_pde_locked' was not declared. Should it be static?
  kernel/drivers/gpu/nvgpu/gk20a/mm_gk20a.c:1841:5: warning: symbol 'update_gmmu_pte_locked' was not declared. Should it be static?
  Bug 200067946
  Change-Id: I8158aaf503378b176cfd5cc129db9557803003c1 Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: http://git-master/r/713024 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: setup chip specific mm hw init (Seshendra Gadagottu, 2015-04-04)
  Add support for setting up mm hw init per SoC.
  Bug 1587825
  Change-Id: Ie5c5e49a767cfb14e3dbbb6902349284cd3dca95 Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com> Reviewed-on: http://git-master/r/681784 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: cache cde compbits buf mappings (Konsta Holtta, 2015-04-04)
  Don't unmap compbits_buf from the system vm early; instead, store it in the dmabuf's private data and unmap it later, when all user mappings to that buffer have disappeared.
  Bug 1546619
  Change-Id: I333235a0ea74c48503608afac31f5e9f1eb4b99b Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: http://git-master/r/661949 (cherry picked from commit ed2177e25d9e5facfb38786b818330798a14b9bb) Reviewed-on: http://git-master/r/661835 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Refactor page mapping code (Terje Bergstrom, 2015-04-04)
  Always pass the directory structure to mm functions instead of pointers to its members. Also split update_gmmu_ptes_locked() into smaller functions, and turn the hard-coded MMU levels (PDE, PTE) into run-time parameters.
  Change-Id: I315ef7aebbea1e61156705361f2e2a63b5fb7bf1 Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/672485 Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: add hw perfmon buffer mapping ioctls (Konsta Holtta, 2015-04-04)
  Map/unmap buffers for HWPM and deal with its instance block, the minimum work required to run the HWPM via regops from userspace.
  Bug 1517458
  Bug 1573150
  Change-Id: If14086a88b54bf434843d7c2fee8a9113023a3b0 Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: http://git-master/r/673689 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Do not return timedout in emulation (Terje Bergstrom, 2015-04-04)
  We have infinite timeouts for loops in emulation. Some functions with the loops still return error if we exceed the original retry count.
  Change-Id: I1f9ddbfc0acd9f30f6bd49d9e748d8d8fbefa154 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/709491
* gpu: nvgpu: Unify PDE & PTE structs (Terje Bergstrom, 2015-04-04)
  Introduce a new struct gk20a_mm_entry. Allocate and store PDE and PTE arrays using the same structure. Always pass pointer to this struct when possible between functions in memory code.
  Change-Id: Ia4a2a6abdac9ab7ba522dafbf73fc3a3d5355c5f Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/696414
* gpu: nvgpu: TLB invalidate after map/unmap (Terje Bergstrom, 2015-04-04)
  Always invalidate TLB after mapping or unmapping, and remove the delayed TLB invalidate.
  Change-Id: I6df3c5c1fcca59f0f9e3f911168cb2f913c42815 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: http://git-master/r/696413 Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: Fix NULL instead of integer (Amit Sharma (SW-TEGRA), 2015-04-04)
  Fixed the following sparse warning by using NULL instead of '0':
  - mm_gk20a.c:1301: warning: Using plain integer as NULL pointer
  Bug 200067946
  Change-Id: Idd84f541711682bf097bb474049d523a5bb01ae2 Signed-off-by: Amit Sharma (SW-TEGRA) <amisharma@nvidia.com> Reviewed-on: http://git-master/r/682242 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Sachin Nikam <snikam@nvidia.com>
* gpu: nvgpu: Use busy looping on memory ops (Terje Bergstrom, 2015-04-04)
  Use busy looping on L2 and TLB maintenance operations. This speeds them up by an order of magnitude. Also add trace points to measure performance for memory ops and interrupt processing.
  Change-Id: Ic4a8525d3d946b2b8f57b4b8ddcfc61605619399 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/681640
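For illustration, a hedged sketch of the busy-wait pattern; the register, "done" bit and retry bound are placeholders, not real gk20a definitions.

    #include <linux/errno.h>
    #include <linux/io.h>
    #include <asm/processor.h>   /* cpu_relax() */

    static int spin_until_done(void __iomem *status_reg, u32 done_bit,
                               unsigned int max_spins)
    {
        unsigned int i;

        for (i = 0; i < max_spins; i++) {
            if (readl(status_reg) & done_bit)
                return 0;
            cpu_relax();   /* poll without sleeping */
        }
        return -ETIMEDOUT;
    }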
* gpu: nvgpu: use vm param to vm_map/unmap_buffer (Konsta Holtta, 2015-04-04)
  Pass vm instead of as share to the userspace buffer mapping functions, since they need to be called also from other places than just the AS device ioctls, and as share is specific to them.
  Bug 1573150
  Change-Id: I994872f23ea7b1582361f3f4fabbd64b4786419c Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: http://git-master/r/674020 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: use gk20a_free_inst_block in remove_vm (Konsta Holtta, 2015-04-04)
  Use the common instance block freeing function when removing vm.
  Change-Id: I1dfaaceb57e01d0a1359ce5742ed55d81dff10ed Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: http://git-master/r/672033 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Set compression page per SoC (Terje Bergstrom, 2015-04-04)
  Compression page size varies depending on architecture. Make it 128kB on gk20a and gm20b. Also export some common functions from gm20b.
  Bug 1592495
  Change-Id: Ifb1c5b15d25fa961dab097021080055fc385fecd Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/673790
* gpu: nvgpu: add bar2 aperture support (Seshendra Gadagottu, 2015-04-04)
  Bug 1587825
  Change-Id: I884c6b268aabb04b4990713395ebedf92410e02a Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com> Reviewed-on: http://git-master/r/659239 Tested-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: unify instance block initialization (Konsta Holtta, 2015-04-04)
  Create gk20a_init_inst_block() to reduce reg write clutter when initializing instance blocks, which is done in several places.
  Change-Id: Idcb8b604851a849e0bb6abce5743c9f4cbf98033 Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: http://git-master/r/672434 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Change va_node free behavior (Alex Waterman, 2015-04-04)
  Decrement the ref count on all mapped_buffers belonging to a va_node when a va_node is freed. This prevents userspace from leaking some mapped_buffers in some cases. This does prevent userspace from keeping a buffer around after freeing a space allocation if the buffer in question is not otherwise ref counted. Not sure if this is a bad thing for userspace or not.
  Bug 1600686
  Change-Id: I659ccbda5935d44086fd367bd2110f7d0f066194 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: http://git-master/r/676629 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add SMMU bit only if SMMU enabled (Terje Bergstrom, 2015-04-04)
  If SMMU is disabled, we should not add the SMMU bit to addresses.
  Change-Id: I6dd82e18b63474fb487d21f421ef06467551595b Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/673250 Reviewed-by: Adeel Raza <araza@nvidia.com> Tested-by: Adeel Raza <araza@nvidia.com>
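A one-line sketch of the guard that commit implies; the bit value and helper name are illustrative only.

    #include <linux/types.h>

    /* Only tag the address with the SMMU aperture bit when the GPU is
     * actually behind the SMMU; otherwise return the raw physical address. */
    static u64 maybe_add_smmu_bit(u64 phys, bool smmu_enabled, u64 smmu_bit)
    {
        return smmu_enabled ? (phys | smmu_bit) : phys;
    }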
* gpu: nvgpu: make larger address space work (Alex Waterman, 2015-04-04)
  Implement several fixes for allowing the GVA address space to grow to larger than 32GB and increase the address space to 128GB.
  o Implement dynamic allocation of PDE backing pages. The memory to store the PDE entries was hard coded to 1 page. Now the number of pages necessary is computed dynamically based on the size of the address space and the size of large pages.
  o Fix an arithmetic problem in the gm20b sparse texture code that caused large address spaces to be truncated when sparse PDEs/PTEs were being filled in. This caused a kernel panic when freeing the address space since a lot of the backing PTE memory was not allocated.
  o Change the address space split for large and small pages. Small pages now occupy the bottom 16GB of the address space. Large pages are used for the rest of the address space. Now, with a 128GB address space, there are 112GB of large page GVA available.
  This patch exists to allow large (16GB) sparse textures to be allocated without running into lack of memory issues and kernel panics.
  Bug 1574267
  Change-Id: I7c59ee54bd573dfc53b58c346156df37a85dfc22 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: http://git-master/r/671204 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
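To make the first item in the entry above concrete, a hedged sketch of the sizing arithmetic: the PDE table's backing is computed from the VA size instead of being fixed at one page. The parameter names and the use of shifts are illustrative, not the chip's actual numbers.

    #include <asm/page.h>       /* PAGE_SIZE, PAGE_SHIFT */
    #include <linux/types.h>

    static u32 pde_backing_pages(u64 va_size, u32 pde_shift, u32 pde_entry_size)
    {
        u64 num_pdes = va_size >> pde_shift;     /* entries needed */
        u64 bytes = num_pdes * pde_entry_size;   /* raw table size */

        return (u32)((bytes + PAGE_SIZE - 1) >> PAGE_SHIFT);
    }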
* gpu: nvgpu: unify instance block creation (Konsta Holtta, 2015-04-04)
  Reduce copypaste code in instance block allocation and deletion with functions purposed for that.
  Change-Id: I2c8ae6a317ac89e2c857dde4296cb4316b8aaafe Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: http://git-master/r/668698 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: fix list_add_tail in dmabuf state (Konsta Holtta, 2015-04-04)
  Fix a memory leak: add the newly created state to the dmabuf priv's state list, instead of the other way around.
  Bug 1594784
  Bug 200064154
  Change-Id: I939746a254bb8bf4d06de7fcecba06c191da665f Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: http://git-master/r/668758 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com> Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
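For reference, a small sketch of the correct call: list_add_tail(new, head) takes the new node first, and the bug class described above is passing the two list heads the other way around. The struct names below are made up.

    #include <linux/list.h>

    struct dmabuf_state { struct list_head list_entry; };
    struct dmabuf_priv  { struct list_head states;     };

    static void track_state(struct dmabuf_priv *priv, struct dmabuf_state *state)
    {
        /* append the new state to priv's list, not vice versa */
        list_add_tail(&state->list_entry, &priv->states);
    }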
* gpu: nvgpu: Fix/HACK for v3.18 (Dan Willemsen, 2015-03-18)
  Signed-off-by: Dan Willemsen <dwillemsen@nvidia.com>
* gpu: nvgpu: Remove gk20a sparse texture & PTE freeing (Terje Bergstrom, 2015-03-18)
  Remove support for gk20a sparse textures. We're using implementation from user space, so gk20a code is never invoked. Also removes ref_cnt for PTEs, so we never free PTEs when unmapping pages, but only at VM delete time.
  Change-Id: I04d7d43d9bff23ee46fd0570ad189faece35dd14 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/663294
* gpu: nvgpu: Generic mem_desc & allocation (Terje Bergstrom, 2015-03-18)
  Make mem_desc a generic container for buffers. Add functions for allocating and mapping buffers to an address space which store their data in mem_desc.
  Change-Id: I031643442c6fd41f5e7222fe9b7bfcaf9b784db5 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/660908 GVS: Gerrit_Virtual_Submit Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
* gpu: nvgpu: detect iommu'ability dynamically (Hiroshi Doyu, 2015-03-18)
  A device can become iommu'able whenever it is registered, so detect its iommu'ability dynamically.
  Bug 1577389
  Change-Id: I8ea20e5dd997fc1a399f517c17783323f238ecc3 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/606019 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Physical page bits to be per chip (Terje Bergstrom, 2015-03-18)
  Retrieve number of physical page bits based on chip.
  Bug 1567274
  Change-Id: I5a0f6a66be37f2cf720d66b5bdb2b704cd992234 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/601700
* gpu: nvgpu: l2 invalidate/flush for off devices (Konsta Holtta, 2015-03-18)
  When doing l2 invalidate or l2 flush, first check if the hw is powered on. If it is not, nothing is done, as there are no hardware registers available. As a side-effect, this may race so that the hardware stays unrailgated.
  Change-Id: I8bdbfcee3545355435d4ae01476188eb1b8b8817 Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: http://git-master/r/594441 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
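A minimal sketch of the early-out that commit describes; the power_on flag and the struct are placeholders, not driver definitions.

    #include <linux/types.h>

    struct gpu_sketch { bool power_on; };

    static void l2_maintenance_sketch(struct gpu_sketch *g)
    {
        if (!g->power_on)
            return;   /* hw registers are unavailable while railgated */
        /* ... write the flush/invalidate registers and poll for done ... */
    }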
* gpu: nvgpu: Per-alloc alignment (Terje Bergstrom, 2015-03-18)
  Change-Id: I8b7e86afb68adf6dd33b05995d0978f42d57e7b7 Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-on: http://git-master/r/554185 GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: sanitize gk20a_vm_alloc_share() (Sami Kiminki, 2015-03-18)
  Add sanity check for big_page_size parameter to avoid invoking gk20a_init_vm() with a bogus big page size, potentially hitting a BUG_ON there. Also, reorganize the code a bit to avoid memory leak in case of bogus big page size, and properly forward the return value from gk20a_init_vm().
  Change-Id: I4eeada75415d2e9539b5e8859099cce35cd86db3 Signed-off-by: Sami Kiminki <skiminki@nvidia.com> Reviewed-on: http://git-master/r/594469 Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
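A hedged sketch of the kind of validation described above, run before gk20a_init_vm() is ever called; the supported-size mask check is an assumption for illustration.

    #include <linux/log2.h>     /* is_power_of_2() */
    #include <linux/types.h>

    static bool big_page_size_ok(u32 big_page_size, u32 supported_sizes_mask)
    {
        if (!big_page_size || !is_power_of_2(big_page_size))
            return false;
        /* only accept sizes the chip advertises */
        return (big_page_size & supported_sizes_mask) != 0;
    }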