path: root/drivers/gpu/nvgpu/common/linux/vm.c
* gpu: nvgpu: SGL passthrough implementation (Sunny He, 2017-09-22)

  The basic nvgpu_mem_sgl implementation provides support for OS-specific
  scatter-gather list implementations by simply copying them node by
  node. This is inefficient, taking extra time and memory.

  This patch implements an nvgpu_mem_sgt struct to act as a header which
  is inserted at the front of any scatter-gather list implementation.
  This labels every struct with a set of ops which can be used to
  interact with the attached scatter-gather list.

  Since nvgpu common code only has to interact with these function
  pointers, any sgl implementation can be used. Initialization only
  requires the allocation of a single struct, removing the need to copy
  or iterate through the sgl being converted.

  Jira NVGPU-186

  Change-Id: I2994f804a4a4cc141b702e987e9081d8560ba2e8
  Signed-off-by: Sunny He <suhe@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1541426
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

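  A minimal sketch of that design; only nvgpu_mem_sgt and the idea of an
  ops table come from the message above, the field, op, and helper names
  are guesses for illustration:

      #include <linux/types.h>

      /* Hypothetical ops table; the real nvgpu header may differ. */
      struct nvgpu_mem_sgt_ops {
              void *(*sgl_next)(void *sgl);   /* advance to next node */
              u64 (*sgl_phys)(void *sgl);     /* physical address     */
              u64 (*sgl_length)(void *sgl);   /* byte length of node  */
      };

      /* Header inserted in front of any OS-specific SGL. */
      struct nvgpu_mem_sgt {
              const struct nvgpu_mem_sgt_ops *ops;
              void *sgl;              /* opaque OS-specific list head */
      };

      /* Common code never touches the sgl directly, only the ops. */
      static u64 nvgpu_mem_sgt_total_bytes(struct nvgpu_mem_sgt *sgt)
      {
              u64 total = 0;
              void *sgl;

              for (sgl = sgt->sgl; sgl; sgl = sgt->ops->sgl_next(sgl))
                      total += sgt->ops->sgl_length(sgl);

              return total;
      }
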
* gpu: nvgpu: nvgpu SGL implementation (Alex Waterman, 2017-09-22)

  The last major item preventing the core MM code in the nvgpu driver
  from being platform agnostic is the usage of Linux scatter-gather
  tables and scatter-gather lists. These data structures are used
  throughout the mapping code to handle discontiguous DMA allocations
  and are also overloaded to represent VIDMEM allocs.

  The notion of a scatter-gather table is crucial to a HW device that
  can handle discontiguous DMA. The GPU has an MMU which allows the GPU
  to do page gathering and present a virtually contiguous buffer to the
  GPU HW. As a result it makes sense for the GPU driver to use some sort
  of scatter-gather concept to maximize memory usage efficiency.

  To that end this patch keeps the notion of a scatter-gather list but
  implements it in the nvgpu common code. It is based heavily on the
  Linux SGL concept. It is a singly linked list of blocks - each
  representing a chunk of memory. To map or use a DMA allocation SW must
  iterate over each block in the SGL.

  This patch implements the most basic level of support for this data
  structure. There are certainly easy optimizations that could be done
  to speed up the current implementation. However, this patch's goal is
  simply to divest the core MM code of any last Linuxisms. Speed and
  efficiency come next.

  Change-Id: Icf44641db22d87fa1d003debbd9f71b605258e42
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1530867
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

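  A hedged sketch of the block list described here; the node layout and
  both helpers below are illustrative assumptions, not the actual nvgpu
  definitions:

      #include <linux/types.h>

      struct vm_gk20a;        /* opaque VM handle, assumed */

      /* One chunk of a discontiguous allocation (hypothetical layout). */
      struct nvgpu_mem_sgl {
              struct nvgpu_mem_sgl *next;     /* NULL ends the list  */
              u64 phys;                       /* physical address    */
              u64 dma;                        /* DMA addr, if mapped */
              u64 length;                     /* chunk size in bytes */
      };

      /* Hypothetical per-chunk mapping primitive. */
      void map_chunk(struct vm_gk20a *vm, u64 gpu_va, u64 phys, u64 len);

      /* SW walks the chunks one by one; the GMMU stitches them into a
       * virtually contiguous range. */
      static void map_sgl(struct vm_gk20a *vm, struct nvgpu_mem_sgl *sgl,
                          u64 gpu_va)
      {
              while (sgl != NULL) {
                      map_chunk(vm, gpu_va, sgl->phys, sgl->length);
                      gpu_va += sgl->length;
                      sgl = sgl->next;
              }
      }
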
* gpu: nvgpu: Direct GMMU PTE kind control (Sami Kiminki, 2017-09-15)

  Allow userspace to control the PTE kind for mappings directly by
  supplying NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL for
  MAP_BUFFER_EX. In particular, in this mode, userspace tells the kernel
  whether the kind is compressible, and if so, what the incompressible
  fallback kind is. By supplying only the compressible kind, userspace
  can require that the map kind not be demoted to the incompressible
  fallback kind in case of comptag allocation failure.

  Also add a GPU characteristics flag
  NVGPU_GPU_FLAGS_SUPPORT_MAP_DIRECT_KIND_CTRL to signal whether direct
  kind control is supported.

  Fix indentation of the nvgpu_as_map_buffer_ex_args header comment.

  Bug 1705731

  Change-Id: I317ab474ae53b78eb8fdd31bd6bca0541fcba9a4
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1543462
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

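  A hedged userspace sketch: the flag and the args struct name come from
  the message, while the kind field names (compr_kind/incompr_kind), the
  -1 "no fallback" sentinel, and the header path are assumptions:

      #include <sys/ioctl.h>
      #include <linux/nvgpu.h>    /* nvgpu uapi header; path may vary */

      /* Map with a compressible kind and no incompressible fallback,
       * so the map fails rather than demotes on comptag exhaustion. */
      static int map_with_kind(int as_fd, short compressible_kind)
      {
              struct nvgpu_as_map_buffer_ex_args args = {0};

              args.flags = NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL;
              args.compr_kind = compressible_kind;
              args.incompr_kind = -1;         /* assumed sentinel */

              return ioctl(as_fd, NVGPU_AS_IOCTL_MAP_BUFFER_EX, &args);
      }
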
* gpu: nvgpu: Validate buffer_offset argument (Alex Waterman, 2017-09-11)

  Validate the mapping_size argument in the VM mapping IOCTL before
  attempting to use the argument for anything.

  Bug 1954931

  Change-Id: I81b22dc566c6c6f89e5e62604ce996376b33a343
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1547046
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

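  The general shape of such a check, as a sketch with invented names;
  the subtraction form of the second comparison avoids unsigned overflow
  in offset + size:

      #include <linux/types.h>
      #include <linux/errno.h>

      /* Reject untrusted ioctl arguments that fall outside the backing
       * buffer before using them for anything. */
      static int validate_map_args(u64 buf_size, u64 buffer_offset,
                                   u64 mapping_size)
      {
              if (buffer_offset > buf_size ||
                  mapping_size > buf_size - buffer_offset)
                      return -EINVAL;
              return 0;
      }
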
* gpu: nvgpu: Cast int to u8 explicitly (Alex Waterman, 2017-09-07)

  In the Linux VM code the kind attribute is assigned to a u8 from an
  int. The code is safe due to the checks in place; however, Coverity
  does not like this, so explicitly cast to u8 before assigning kind to
  kind_v.

  Coverity ID: 2567919
  Bug 200291879

  Change-Id: I3860faa99da8ac1a24bd0c3d77c7fbc72207614a
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1548712
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

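  In fragment form (the surrounding context is invented, kind_v is the
  field named above):

      /* kind has already been range-checked, so the cast cannot
       * truncate; it exists purely to satisfy Coverity. */
      bfr.kind_v = (u8)kind;
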
* gpu: nvgpu: Remove mapping of comptags to user VA (Terje Bergstrom, 2017-09-07)

  Remove the kernel feature to map compbit backing store areas to GPU
  VA. The feature is not used, and the relevant user space code is
  getting removed, too.

  Change-Id: I94f8bb9145da872694fdc5d3eb3c1365b2f47945
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1547898
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User

* gpu: nvgpu: Replace kref for refcounting in nvgpu (Debarshi Dutta, 2017-08-24)

  - added wrapper struct nvgpu_ref over nvgpu_atomic_t
  - added nvgpu_ref_* APIs to access the above struct

  JIRA NVGPU-140

  Change-Id: Id47f897995dd4721751f7610b6d4d4fbfe4d6b9a
  Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1540899
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>

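  A hedged sketch of the wrapper, modeled on the kernel's kref API; the
  real nvgpu_ref_* signatures and the header path are assumptions:

      #include <nvgpu/atomic.h>   /* nvgpu_atomic_t wrappers, assumed */

      struct nvgpu_ref {
              nvgpu_atomic_t refcount;
      };

      static inline void nvgpu_ref_init(struct nvgpu_ref *ref)
      {
              nvgpu_atomic_set(&ref->refcount, 1);
      }

      static inline void nvgpu_ref_get(struct nvgpu_ref *ref)
      {
              nvgpu_atomic_inc(&ref->refcount);
      }

      /* Drop a reference; call release() when the count hits zero. */
      static inline void nvgpu_ref_put(struct nvgpu_ref *ref,
                              void (*release)(struct nvgpu_ref *ref))
      {
              if (nvgpu_atomic_dec_and_test(&ref->refcount))
                      release(ref);
      }
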
* gpu: nvgpu: Fix potential map failing for vGPU (Alex Waterman, 2017-08-17)

  Ensure that the mapping size passed to the vGPU mapping code is page
  aligned. The vGPU mapping code returns -EINVAL otherwise.

  Change-Id: I87a90085882fa0ff538b181a55240468392c4135
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1540423
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

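  One plausible shape for this kind of fix, as a fragment (the variable
  name is illustrative; ALIGN and PAGE_SIZE are standard kernel macros):

      /* Round the size up to a whole page before calling into the
       * vGPU mapping path, which rejects unaligned sizes. */
      size = ALIGN(size, PAGE_SIZE);
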
* gpu: nvgpu: fix debugfs to disable big pages (Thomas Fleury, 2017-08-07)

  After setting 'Y' in disable_bigpage, in the native SMMU case, we
  could still see 64K GMMU pages being used. Fixed the following:
  - enforce disable_bigpage in nvgpu_vm_map
  - update GPU characteristics so that new clients know whether or not
    big pages are enabled; for instance, this may affect how CUDA
    requests memory mapping.

  JIRA EVLR-1694

  Change-Id: I62841096add3bd798c5c11090054f82c8a2be832
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1532429
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

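  A rough sketch of what enforcing the knob in the map path could look
  like (the field and constant names are guesses):

      /* Fall back to the small page size whenever big pages have been
       * disabled via debugfs, regardless of what was requested. */
      if (vm->mm->disable_bigpage)
              pgsz_idx = gmmu_page_size_small;
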
* gpu: nvgpu: Move dev field from gk20a to nvgpu_os_linux (Terje Bergstrom, 2017-07-07)

  Move field "struct device *dev" from struct gk20a to struct
  nvgpu_os_linux. The field is valid only for Linux.

  JIRA NVGPU-38

  Change-Id: I09286aa3a9c5a2406e5a27c1fbf21b2c515b4dd4
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master/r/1514162
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

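  The containment pattern this implies, sketched with an assumed layout
  (struct gk20a is defined elsewhere in the driver):

      #include <linux/device.h>

      /* Linux-only state wraps the OS-agnostic core. */
      struct nvgpu_os_linux {
              struct gk20a g;         /* OS-agnostic driver state */
              struct device *dev;     /* valid only on Linux      */
      };
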
* gpu: nvgpu: Use accessor for finding struct device (Terje Bergstrom, 2017-06-30)

  Use the dev_from_gk20a() accessor whenever accessing struct device *
  from struct gk20a.

  JIRA NVGPU-38

  Change-Id: Ide9fca3a56436c8f62e7872580a766c4c1e2353e
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master/r/1507930
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit

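  Given the layout sketched for the previous commit, the accessor
  presumably recovers the wrapper with container_of(); a hedged guess,
  not the actual header:

      #include <linux/kernel.h>   /* container_of() */

      /* Recover the Linux wrapper from the embedded core struct and
       * return its device pointer. */
      static inline struct device *dev_from_gk20a(struct gk20a *g)
      {
              struct nvgpu_os_linux *l =
                      container_of(g, struct nvgpu_os_linux, g);

              return l->dev;
      }
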
* gpu: nvgpu: fix use of untrusted scalar value (Deepak Nibade, 2017-06-06)

  A kind value can be passed to the API nvgpu_vm_map() from user space
  (through IOCTL NVGPU_AS_IOCTL_MAP_BUFFER_EX), but the kind value is
  not checked for sane values before being stored in bfr.kind_v. We then
  use this kind value as an array index in gk20a_kind_is_supported(),
  which is incorrect.

  Fix this by ensuring in nvgpu_vm_map() that the kind value is well
  within range.

  Bug 200291879
  Coverity id: 2567923
  Coverity id: 2567924

  Change-Id: Ic57395018727cbd2260c929581db256e427316c6
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1496597
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

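  In sketch form, with a made-up bound (the real limit lives wherever
  the kind table is defined):

      /* Reject out-of-range kinds before they can be stored and later
       * used as an array index; NUM_KIND_ATTRS is a hypothetical name
       * for the kind table's size. */
      if (kind < 0 || kind >= NUM_KIND_ATTRS)
              return -EINVAL;
      bfr.kind_v = (u8)kind;
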
* gpu: nvgpu: remove logically dead code (Deepak Nibade, 2017-06-06)

  In nvgpu_vm_map(), the variable "inserted" can never be true in the
  error path, hence remove the logically dead code.

  Bug 200291879
  Coverity id: 2567920

  Change-Id: I5428f7d52f1d4f815c04d90ea651668dd72ba985
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1495906
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>

* gpu: nvgpu: remove duplicate \n from log messages (Stephen Warren, 2017-05-26)

  nvgpu_log/info/warn/err() internally add a \n to the end of the
  message. Hence, callers should not include a \n at the end of the
  message. Doing so results in duplicate \n being printed, which ends
  up creating empty log messages. Remove the duplicate \n from all
  err/warn messages.

  Bug 1928311

  Change-Id: I99362c5327f36146f28ba63d4e68181589735c39
  Signed-off-by: Stephen Warren <swarren@nvidia.com>
  Reviewed-on: http://git-master/r/1487232
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

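  For illustration (the message text is invented):

      /* Before: the trailing \n doubles the newline nvgpu_err() adds,
       * producing an empty log line after this one. */
      nvgpu_err(g, "failed to map buffer\n");

      /* After: let the logging helper supply the newline. */
      nvgpu_err(g, "failed to map buffer");
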
* gpu: nvgpu: Refactor gk20a_vm_alloc_va() (Alex Waterman, 2017-05-24)

  This function is an internal function to the VM manager that allocates
  virtual memory space in the GVA allocator. It is unfortunately used in
  the vGPU code, though. In any event, this patch cleans up and moves
  the implementation of these functions into the VM common code.

  JIRA NVGPU-12
  JIRA NVGPU-30

  Change-Id: I24a3d29b5fcb12615df27d2ac82891d1bacfe541
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1477745
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Split vm_area management into vm code (Alex Waterman, 2017-05-19)

  The vm_reserve_va_node struct is essentially a special VM area that
  can be used for sparse mappings and fixed mappings. The name of this
  struct is somewhat confusing, as "node" is typically used for list
  items. Though this struct is a part of a list, it doesn't really make
  sense to call this a list item since it's much more. Based on that,
  the struct has been renamed to nvgpu_vm_area to capture the actual use
  of the struct more accurately.

  This also moves all of the management code of vm areas to a new file
  devoted solely to vm_area management.

  Also add a brief overview of the VM architecture. This should help
  other people follow along the hierarchy of ownership and lifetimes in
  the rather complex MM code.

  JIRA NVGPU-12
  JIRA NVGPU-30

  Change-Id: If85e1cf868031d0dc265e7bed50b58a2aed2602e
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1477744
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Split VM implementation out (Alex Waterman, 2017-05-19)

  This patch begins splitting out the VM implementation from mm_gk20a.c
  and moves it to common/linux/vm.c and common/mm/vm.c. This split is
  necessary because the VM code has two portions: first, an interface
  for the OS-specific code to use (i.e. userspace mappings), and second,
  a set of APIs for the driver to use (init, cleanup, etc.) which are
  not OS specific.

  This is only the beginning of the split - there are still a lot of
  things that need to be carefully moved around.

  JIRA NVGPU-12
  JIRA NVGPU-30

  Change-Id: I3b57cba245d7daf9e4326a143b9c6217e0f28c96
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1477743
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>