path: root/drivers/gpu/nvgpu/gk20a/mm_gk20a.h
Commit message (Author, Date)
* gpu: nvgpu: track pending bytes for vidmem clears (Konsta Holtta, 2016-10-14)

    Change clears_pending to bytes_pending, tracking the number of bytes still
    to be freed instead of the number of buffers. This, combined atomically
    with the amount of free space in the allocator, gives the total amount of
    free memory available.

    Bug 200233138

    Change-Id: Ibbb4e80a32728781ba19a74307d8a8ac1a4d7431
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1231422
    (cherry picked from commit 025e765f312c253b201ecf2dbbe0f4972fe1d4bc)
    Reviewed-on: http://git-master/r/1235957
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

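A minimal sketch of the byte-based accounting described above, with hypothetical names (a standalone atomic rather than the real mm_gk20a member, and the allocator's free space passed in by the caller):

    #include <linux/atomic.h>
    #include <linux/types.h>

    static atomic64_t bytes_pending = ATOMIC64_INIT(0);

    /* a buffer of 'size' bytes was queued for deferred clear + free */
    static void pending_bytes_add(u64 size)
    {
            atomic64_add(size, &bytes_pending);
    }

    /* the clearing worker returned 'size' bytes to the allocator */
    static void pending_bytes_done(u64 size)
    {
            atomic64_sub(size, &bytes_pending);
    }

    /* total memory that can eventually be handed out again */
    static u64 total_potentially_free(u64 allocator_free_bytes)
    {
            return allocator_free_bytes + atomic64_read(&bytes_pending);
    }
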
* gpu: nvgpu: mark vidmem.cleared volatile (Konsta Holtta, 2016-10-14)

    The boolean flag mm_gk20a.vidmem.cleared is shared across threads, so mark
    it volatile to prevent the compiler from wrongly optimizing accesses to it.

    Jira DNVGPU-84

    Change-Id: I1fe66b26966685d3f74ed95ba53b198f810231b9
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1233016
    (cherry picked from commit dc6c9db56ea8a5f55f28f97fdfc3c1ac60d8b195)
    Reviewed-on: http://git-master/r/1235317
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: compact pte buffers (Konsta Holtta, 2016-10-13)

    The lowest page table level may hold very few entries for mappings of
    large pages, but a new page is allocated for each list of entries at the
    lowest level, wasting memory and performance. Compact these so that the
    new "allocation" of ptes is appended at the end of the previous
    allocation, if there is space.

    A 4 KB page is still the smallest size requested from the allocator; any
    possible overhead in the allocator (e.g., internally allocating big pages
    only) is not taken into account.

    Bug 1736604

    Change-Id: I03fb795cbc06c869fcf5f1b92def89a04583ee83
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1221841
    (cherry picked from commit fa92017ed48e1d5f48c1a12c512641c6ce9924af)
    Reviewed-on: http://git-master/r/1234996
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: make sure vidmem is cleared only once (Konsta Holtta, 2016-10-12)

    Protect the initial vidmem zeroing performed during the first userspace
    alloc with a mutex, so that it blocks subsequent concurrent users and is
    run only once. Otherwise, multiple clears could end up running in
    parallel, so that later ones corrupt memory allocated by the thread that
    finished earlier and has already advanced to allocate and use memory.

    Jira DNVGPU-84

    Change-Id: If497749abf481b230835250191d011c4a9d1483b
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1232461
    (cherry picked from commit 79435a68e6d2713b78acdb0ec6f77cfd78651d7f)
    Reviewed-on: http://git-master/r/1234990
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

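The "clear only once" logic above is essentially a mutex-protected run-once flag. A sketch under assumed names (do_clear_all_vidmem() is a stand-in for the real clearing routine, not the driver's function):

    #include <linux/mutex.h>

    int do_clear_all_vidmem(void);  /* hypothetical helper */

    static DEFINE_MUTEX(first_clear_lock);
    static bool vidmem_cleared;

    static int vidmem_clear_once(void)
    {
            int err = 0;

            mutex_lock(&first_clear_lock);
            if (!vidmem_cleared) {
                    err = do_clear_all_vidmem();
                    if (!err)
                            vidmem_cleared = true;
            }
            mutex_unlock(&first_clear_lock);

            return err;
    }
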
* gpu: nvgpu: test free user vidmem atomically (Konsta Holtta, 2016-09-14)

    An empty list of soon-to-be-freed userspace vidmem buffers is not enough
    to safely decide whether an allocation might succeed if tried again,
    because removal from the list and actually marking the memory freed is
    not atomic. Fix this by using an atomic counter for the number of pending
    frees (so that it is still safe to first remove from the job list and
    then perform the free), and by making allocation attempts combined with a
    test of pending frees atomic. This still does not guarantee that memory
    is available (the actual amount of pending memory in bytes plus the
    current free amount is not computed), but it removes the race that
    produces false negatives when a single program expects repeated frees and
    allocs to succeed.

    Bug 1809939

    Change-Id: I6a92da2e21cbf3f886b727000c924d56f35ce55b
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1217078
    (cherry picked from commit 83c1f1e70dccd92fdd4481132cf5b6717760d432)
    Reviewed-on: http://git-master/r/1220545
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

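The race described above comes from treating "did the allocation fail" and "is anything still pending" as two separate checks. A sketch of combining them under one lock, with made-up names (backing_alloc() stands in for the real allocator call):

    #include <linux/atomic.h>
    #include <linux/err.h>
    #include <linux/mutex.h>
    #include <linux/types.h>

    void *backing_alloc(size_t size);  /* hypothetical allocator call */

    static atomic_t clears_pending = ATOMIC_INIT(0);
    static DEFINE_MUTEX(alloc_lock);

    static void *try_vidmem_alloc(size_t size)
    {
            void *buf;

            mutex_lock(&alloc_lock);
            buf = backing_alloc(size);
            if (!buf && atomic_read(&clears_pending) == 0) {
                    /* nothing pending: this really is out of memory */
                    mutex_unlock(&alloc_lock);
                    return ERR_PTR(-ENOMEM);
            }
            mutex_unlock(&alloc_lock);

            /* either success, or a retry is worthwhile once frees land */
            return buf ? buf : ERR_PTR(-EAGAIN);
    }
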
* gpu: nvgpu: add new API to get base address for sysmem/vidmem buffers (Deepak Nibade, 2016-09-01)

    Add a new API gk20a_mem_get_base_addr(), which returns the vidmem base
    address in the vidmem case and the IOVA address in the sysmem case.

    Even though vidmem allocations are non-contiguous, this API is useful
    (and should only be used) for allocations with one chunk (e.g. page
    tables).

    Also, since page tables can reside in either sysmem or vidmem, use this
    API to get the address of page tables.

    Jira DNVGPU-20

    Change-Id: Ie04af9ca7bfccfec1a8a8e4be2c507cef5cef8e1
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1206403
    (cherry picked from commit a8c74dc188878f2948fa1e0e47bf1837fba6c5e0)
    Reviewed-on: http://git-master/r/1210957
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: track allocator and user for each mem (Deepak Nibade, 2016-09-01)

    Store an allocator pointer in each mem_desc. This pointer should be used
    while freeing the mem, instead of assuming a common allocator.

    Add a user_mem flag to mem_desc which is set only for user vidmem
    allocations. We delay freeing the mem in the worker only if this flag is
    set on the mem; otherwise we free it immediately. This is needed so that
    all kernel allocations can work with both sysmem and vidmem.

    Jira DNVGPU-84

    Change-Id: Ib9a9209b164bc56b7880448f86bd6d42b324cc86
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1203099
    (cherry picked from commit 8f0b0122f36a0b6f1932fa9a98d7eb03b1f623d1)
    Reviewed-on: http://git-master/r/1210953
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: clear vidmem buffers in worker (Deepak Nibade, 2016-09-01)

    We clear buffers allocated in vidmem in the buffer free path. But to
    clear buffers we need to submit CE jobs, and this could cause issues or
    races if free is called from a critical path. Hence solve this by moving
    buffer clear/free to a worker: gk20a_gmmu_free_attr_vid() now just puts
    the mem_desc on a list and schedules a worker, and the worker thread
    traverses the list and clears/frees the allocations.

    In struct gk20a_vidmem_buf, the mem variable was statically allocated.
    Since we now delay freeing the mem, convert this variable into a pointer
    and allocate it dynamically.

    Since we delay freeing vidmem memory, it is now possible to hit OOM
    conditions during allocations. Hence, while allocating, block until
    sufficient memory is available, with an upper limit of 1 s.

    Jira DNVGPU-84

    Change-Id: I7925590644afae50b6fc04c6e1e43bbaa1c220fd
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1201346
    (cherry picked from commit b4dec4a30de2431369d677acca00e420f8e581a5)
    Reviewed-on: http://git-master/r/1210950
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

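The deferred-free mechanism above is the classic "queue on a list, let a worker drain it" pattern. A simplified sketch with illustrative structure names (the real code tracks mem_descs and submits CE clear jobs):

    #include <linux/list.h>
    #include <linux/spinlock.h>
    #include <linux/workqueue.h>

    struct deferred_buf {
            struct list_head entry;
            /* ... description of the buffer to clear and free ... */
    };

    static LIST_HEAD(clear_list);
    static DEFINE_SPINLOCK(clear_list_lock);
    static struct work_struct clear_work;  /* INIT_WORK(&clear_work, clear_worker) at init */

    static void clear_worker(struct work_struct *work)
    {
            struct deferred_buf *buf, *tmp;
            LIST_HEAD(local);

            spin_lock(&clear_list_lock);
            list_splice_init(&clear_list, &local);
            spin_unlock(&clear_list_lock);

            list_for_each_entry_safe(buf, tmp, &local, entry) {
                    list_del(&buf->entry);
                    /* submit a CE clear job, wait, then free the memory */
            }
    }

    /* called from the latency-critical free path: just queue and return */
    static void queue_deferred_free(struct deferred_buf *buf)
    {
            spin_lock(&clear_list_lock);
            list_add_tail(&buf->entry, &clear_list);
            spin_unlock(&clear_list_lock);
            schedule_work(&clear_work);
    }
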
* gpu: nvgpu: clear whole vidmem on first allocation (Deepak Nibade, 2016-09-01)

    We currently clear vidmem pages in gk20a_gmmu_alloc_attr_vid_at(), i.e.
    in the allocation path for each buffer. But since the buffer allocation
    path could be latency critical, instead clear the whole of vidmem once,
    before the first user allocation, in gk20a_vidmem_buf_alloc(), and then
    clear buffer pages while releasing the buffer. In this way we can ensure
    that vidmem pages are already cleared during the buffer allocation path.
    At a later stage, clearing of pages can be removed from the free path and
    moved to a separate worker as well.

    At this point, the first allocation has the overhead of clearing the
    whole of vidmem, which takes about 380 ms; this should improve once
    clocks are raised. Also, this is a one-time latency, and subsequent
    allocations have no clearing overhead at all.

    Add API gk20a_vidmem_clear_all() to clear the whole of vidmem.

    We have WPR buffers allocated during boot-up at fixed addresses in
    vidmem. To prevent overwriting these buffers in gk20a_vidmem_clear_all(),
    clear the whole of vidmem except for the bootstrap allocator carveout.

    Add a new API gk20a_gmmu_clear_vidmem_mem() to clear one mem_desc.

    Jira DNVGPU-84

    Change-Id: I5661700585c6241a6a1ddeb5b7c068d3d2aed4b3
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1194301
    (cherry picked from commit 950ab61a04290ea405968d8b0d03e3bd044ce83d)
    Reviewed-on: http://git-master/r/1193158
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Add a bootstrap vidmem allocator (Alex Waterman, 2016-09-01)

    Add an allocator for allocating vidmem before the CE has had a chance to
    be initialized (and clear the rest of vidmem).

    Jira DNVGPU-84

    Change-Id: I5166607a712b3a6eb4c2906b8c7d002c68a6567b
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1197204
    (cherry picked from commit b4e68e84eedd952637b2332d8dc73a9090d6d62e)
    Reviewed-on: http://git-master/r/1210949
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add aperture and size to map logging (Konsta Holtta, 2016-07-22)

    Include the buffer aperture flag (sysmem/vidmem/invalid) and the size of
    the buffer and of the mapping in logging strings along the gmmu map path.

    Change-Id: Ie4c46bf9cb5db79b738571029d46ce8cbfc63f99
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1189492
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>

* gpu: nvgpu: add vidmem allocation ioctl (Konsta Holtta, 2016-07-21)

    Add NVGPU_GPU_IOCTL_ALLOC_VIDMEM to the ctrl fd for letting userspace
    allocate on-board GPU memory (aka vidmem). The allocations are returned
    as dmabuf fds.

    Also, report the amount of local video memory in the gpu characteristics.

    Jira DNVGPU-19
    Jira DNVGPU-38

    Change-Id: I28e361d31bb630b96d06bb1c86d022d91c7592bc
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1181152
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>

* gpu: nvgpu: add vidmem manager (Konsta Holtta, 2016-07-21)

    Use the nvgpu-internal buddy allocator for video memory allocations,
    instead of nvmap. This allows better integration for copyengine, BAR1
    mapping to userspace, etc.

    Jira DNVGPU-38

    Change-Id: I9fd67b76cd39721e4cd8e525ad0ed76f497e8b99
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1181151
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>

* gpu: nvgpu: Add nvgpu infra to allow kernel to create privileged CE channels (Lakshmanan M, 2016-07-20)

    Added an interface to allow the kernel to create privileged CE channels
    for page migration and clearing support between sysmem and vidmem.

    JIRA DNVGPU-53

    Change-Id: I3e18d18403809c9e64fa45d40b6c4e3844992506
    Signed-off-by: Lakshmanan M <lm@nvidia.com>
    Reviewed-on: http://git-master/r/1173085
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>

* gpu: nvgpu: Support multiple types of allocators (Alex Waterman, 2016-07-19)

    Support multiple types of allocation backends. Currently there is only
    one allocator implementation available: a buddy allocator. Buddy
    allocators have certain limitations, though. For one, the allocator
    requires metadata to be allocated from the kernel's system memory. This
    causes a given buddy allocation to potentially sleep on a kmalloc() call.

    This patch has been created so that a new backend can be added which
    avoids calling any dynamic system memory management routines.

    Bug 1781897

    Change-Id: I98d6c8402c049942f13fee69c6901a166f177f65
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1172115
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>

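Supporting several backends usually means dispatching through a small ops table. A sketch of the shape such an interface can take (names invented for illustration, not the driver's actual allocator API):

    #include <linux/types.h>

    struct alloc_backend_ops {
            u64  (*alloc)(void *priv, u64 len);
            void (*free)(void *priv, u64 addr);
            void (*fini)(void *priv);
    };

    struct gpu_allocator {
            const struct alloc_backend_ops *ops;
            void *priv;     /* buddy / bitmap / ... backend state */
    };

    static inline u64 gpu_alloc(struct gpu_allocator *a, u64 len)
    {
            return a->ops->alloc(a->priv, len);
    }
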
* gpu: nvgpu: handle map/unmap for vidmem gmmu pages (Konsta Holtta, 2016-07-14)

    If page tables are allocated from vidmem, cpu cache flushing doesn't make
    sense, so skip it. Also unify the map/unmap actions when the pages are
    not mapped.

    Jira DNVGPU-20

    Change-Id: I36b22749aab99a7bae26c869075f8073eab0f860
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1178830
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>

* gpu: nvgpu: use vidmem by default in gmmu_alloc variants (Konsta Holtta, 2016-07-08)

    For devices that have vidmem available, use the vidmem allocator in
    gk20a_gmmu_alloc{,_attr,_map,_map_attr}(). For others, use sysmem.

    Because not all of the buffers have been tested to work in vidmem yet,
    rename the existing calls to these functions to end in _sys, to declare
    explicitly that sysmem is used. Enabling vidmem for each is now a matter
    of removing "_sys" from the function call.

    Jira DNVGPU-18

    Change-Id: Ibe42f67eff2c2b68c36582e978ace419dc815dc5
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1176805
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: support in-kernel vidmem mappings (Konsta Holtta, 2016-07-06)

    Propagate the buffer aperture flag in gk20a_locked_gmmu_map up so that
    buffers represented as a mem_desc and present in vidmem can be mapped to
    the gpu.

    JIRA DNVGPU-18
    JIRA DNVGPU-76

    Change-Id: I46cf87e27229123016727339b9349d5e2c835b3e
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1169308
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: add support for vidmem in page tables (Konsta Holtta, 2016-07-05)

    Modify page table updates to take an aperture flag (up until
    gk20a_locked_gmmu_map()) so that sysmem is no longer hard-assumed, and
    propagate the flag to hardware.

    Jira DNVGPU-76

    Change-Id: Ifcb22900c96db993068edd110e09368f72b06f69
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1169307
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: initial support for vidmem apertures (Konsta Holtta, 2016-07-05)

    Add gk20a_aperture_mask() for memory target selection now that buffers
    can actually be allocated from vidmem, and use it in all cases that have
    a mem_desc available.

    Jira DNVGPU-76

    Change-Id: I4353cdc6e1e79488f0875581cfaf2a5cfb8c976a
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1169306
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

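The aperture mask helper mentioned above boils down to picking one of two register field values based on where the buffer lives. An illustrative sketch (the enum values and WARN policy are assumptions, not the exact gk20a_aperture_mask() implementation):

    #include <linux/bug.h>
    #include <linux/types.h>

    enum mem_aperture { APERTURE_INVALID, APERTURE_SYSMEM, APERTURE_VIDMEM };

    static u32 aperture_mask(enum mem_aperture aperture,
                             u32 sysmem_mask, u32 vidmem_mask)
    {
            switch (aperture) {
            case APERTURE_SYSMEM:
                    return sysmem_mask;
            case APERTURE_VIDMEM:
                    return vidmem_mask;
            default:
                    WARN_ON(1);     /* invalid/unallocated buffer */
                    return 0;
            }
    }
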
* gpu: nvgpu: Revamp semaphore support (Alex Waterman, 2016-06-28)

    Revamp the support the nvgpu driver has for semaphores.

    The original problem with nvgpu's semaphore support is that it required a
    SW-based wait for every semaphore release. This was because for every
    fence that gk20a_channel_semaphore_wait_fd() waited on, a new semaphore
    was created. This semaphore would then get released by SW when the fence
    signaled. This meant that for every release there was necessarily a
    sync_fence_wait_async() call which could block. The latency of this SW
    wait was enough to cause massive degradation in performance.

    To fix this, a fast path was implemented. When a fence passed to
    gk20a_channel_semaphore_wait_fd() is backed by a GPU semaphore, a
    semaphore acquire is directly used to block the GPU. No longer is a
    sync_fence_wait_async() performed, nor is an extra semaphore created.

    To implement this fast path, the semaphore memory had to be shared
    between channels. Previously, since a new semaphore was created every
    time through gk20a_channel_semaphore_wait_fd(), the address space a
    semaphore was mapped into was irrelevant. However, when using the fast
    path a semaphore may be released in one address space but acquired in
    another.

    Sharing the semaphore memory was done by making a fixed GPU mapping in
    all channels. This mapping points to the semaphore memory (the so-called
    semaphore sea). This global fixed mapping is read-only to make sure no
    semaphores can be incremented (i.e. released) by a malicious channel.
    Each channel then gets a RW mapping of its own semaphore. This way a
    channel may only acquire other channels' semaphores but may both acquire
    and release its own semaphore.

    The gk20a fence code was updated to allow introspection of the GPU-backed
    fences. This allows detection of when the fast path can be taken. If the
    fast path cannot be used (for example when a fence is sync-pt backed),
    the original slow path is still present. This gets used when the GPU
    needs to wait on an event from something which only understands how to
    use sync-pts.

    Bug 1732449
    JIRA DNVGPU-12

    Change-Id: Ic0fea74994da5819a771deac726bb0d47a33c2de
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1133792
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: use gpfifo_mem via gk20a_mem_{rd,wr} (Konsta Holtta, 2016-06-20)

    Use the gk20a_mem_*() accessors for gpfifo memory in work submission
    instead of direct cpu accesses, in order to support apertures other than
    sysmem. The gpfifo memory is still allocated from sysmem for dgpus too.

    Split the copying of priv_cmds and the main gpfifo to be submitted in
    gk20a_submit_channel_gpfifo() into separate functions.

    JIRA DNVGPU-21

    Change-Id: If271ca8e7e34235f00d31855dbccf77c0008e10b
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1145923
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: add vidmem allocation API (Konsta Holtta, 2016-06-15)

    Add in-nvgpu APIs for allocating and freeing mem_descs in video memory.
    Changes for gmmu tables etc. will be added in upcoming changes.

    Video memory is allocated via nvmap by initially registering the aperture
    size to it and binding it to a struct device, and then going via the
    usual dma alloc. This API also allows fixed-address allocations, meant
    for reserving special memory areas at boot. The aperture registration is
    skipped completely if vidmem isn't found for the particular device.

    gk20a_gmmu_alloc_attr() still uses sysmem, and the unmap/free paths
    internally select the correct path based on the mem_desc's aperture.

    Video memory allocation is off by default, and can be turned on with
    CONFIG_GK20A_VIDMEM.

    JIRA DNVGPU-16

    Change-Id: I77eae5ea90cbed6f4b5db0da86c5f70ddf2a34f9
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1157216
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: optimize mem_desc accessor loops (Konsta Holtta, 2016-06-13)

    Instead of going via gk20a_mem_{wr,rd}32() on each iteration, do direct
    memcpy/memset with sysmem, and minimize the enter/exit overhead with
    vidmem.

    JIRA DNVGPU-23

    Change-Id: I5437e35f8393a746777a40636c1e9b5d93ced1f6
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1159524
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

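The optimization above separates the two cases: sysmem that is already CPU-mapped can be copied with one memcpy(), while vidmem keeps per-word accesses but pays the window enter/exit cost only once around the loop. A sketch with made-up types and helpers:

    #include <linux/string.h>
    #include <linux/types.h>

    struct mem_buf {
            void *cpu_va;   /* non-NULL for kernel-mapped sysmem */
            /* ... */
    };

    void mem_access_begin(struct mem_buf *mem);                 /* hypothetical */
    void mem_access_end(struct mem_buf *mem);                   /* hypothetical */
    void mem_word_write(struct mem_buf *mem, u32 w, u32 data);  /* hypothetical */

    static void mem_write_words(struct mem_buf *mem, u32 start_word,
                                const u32 *src, u32 n_words)
    {
            if (mem->cpu_va) {
                    /* sysmem, already mapped: one memcpy */
                    memcpy((u32 *)mem->cpu_va + start_word, src,
                           n_words * sizeof(u32));
            } else {
                    u32 i;

                    /* vidmem via an indirect window: enter/exit once */
                    mem_access_begin(mem);
                    for (i = 0; i < n_words; i++)
                            mem_word_write(mem, start_word + i, src[i]);
                    mem_access_end(mem);
            }
    }
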
* gpu: nvgpu: Remove dead priv_cmdbuf code (Alex Waterman, 2016-06-13)

    Remove the gp_get and gp_put pointers from the priv_cmdbuf code. These
    pointers appear to track the position of the priv_cmdbuf in the gp_fifo.
    However, these pointers are not used for anything, nor are they needed
    for anything in the future. This code appears to be a relic left over
    from the past.

    Change-Id: Ibed1a6d51fa0cac12c5e0429760e8e2f611fc899
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1161859
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: detect vidmem configuration from HW (Konsta Holtta, 2016-06-08)

    Read video memory size from hardware during initialization for devices
    that support it.

    JIRA DNVGPU-14

    Change-Id: If190f2d89f7148520ee274ca674f972987c8056d
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1157215
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: cache whole bar0_window for mem accesses (Konsta Holtta, 2016-06-07)

    Save the whole bar0 window register, which also encodes the target
    aperture (vidmem/sysmem), instead of only the base address, which could
    overlap between the two.

    JIRA DNVGPU-23

    Change-Id: I2ccbea0e1f7c7310c1ca6b158afafe8fd974a615
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1159523
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: add PRAMIN support for mem accessors (Konsta Holtta, 2016-05-24)

    To support vidmem, implement a way to access buffers via the PRAMIN
    window instead of only the kernel-mapped sysmem buffers used for iGPU so
    far. Depending on the buffer aperture, choose between the two access
    types in the buffer memory accessor functions. vmap()/vunmap() pairs are
    no-ops for buffers that can't be cpu-mapped.

    Two uses of DMA_ATTR_READ_ONLY are removed in the ucode loading path to
    support writing to those buffers via the indirection as well as via the
    cpu.

    JIRA DNVGPU-23

    Change-Id: I282dba6741c6b8224bc12e69c1fb3936bde7e6ed
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1141314
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: use mem_desc in priv_cmd_entry (Konsta Holtta, 2016-05-18)

    Replace the plain cpu pointer accesses with gk20a_mem_wr32(), and use a
    reference to the underlying mem_desc (within priv_cmd_queue) paired with
    an offset, for buffer aperture flexibility.

    JIRA DNVGPU-21
    JIRA DNVGPU-23

    Change-Id: I317672c94bb682bb895f9ed3e8116729c8bb7f4b
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1145922
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: refactor gk20a_mem_{wr,rd} for vidmem (Konsta Holtta, 2016-05-13)

    To support vidmem, pass g and mem_desc to the buffer memory accessor
    functions. This allows the functions to select the memory access method
    based on the buffer aperture instead of using the cpu pointer directly
    (like until now). The selection and aperture support will be in another
    patch; this patch only refactors these accessors, but keeps the
    underlying functionality as-is.

    gk20a_mem_{rd,wr}32() work as previously; add also gk20a_mem_{rd,wr}()
    for byte-indexed accesses, gk20a_mem_{rd,wr}_n() for memcpy()-like
    functionality, and gk20a_memset() for filling buffers with a constant.
    The 8 and 16 bit accessor functions are removed.

    vmap()/vunmap() pairs are abstracted to gk20a_mem_{begin,end}() to
    support other types of mappings or conditions where mapping the buffer is
    unnecessary or different.

    Several function arguments that would access these buffers are also
    changed to take a mem_desc instead of a plain cpu pointer. Some relevant
    occasions are changed to use the accessor functions instead of cpu
    pointers without them (e.g., memcpying to and from), but the majority of
    direct accesses will be adjusted later, when the buffers are moved to
    support vidmem.

    JIRA DNVGPU-23

    Change-Id: I3dd22e14290c4ab742d42e2dd327ebeb5cd3f25a
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1121143
    Reviewed-by: Ken Adams <kadams@nvidia.com>
    Tested-by: Ken Adams <kadams@nvidia.com>

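A usage sketch for the accessors named above. The parameter meanings (word index for the 32-bit variants; byte offset, value, and size for gk20a_memset()) and the int return of gk20a_mem_begin() are assumptions based on this description, not verified against the header:

    #include "gk20a.h"  /* struct gk20a, struct mem_desc, accessor declarations */

    static void example_fill(struct gk20a *g, struct mem_desc *mem)
    {
            u32 word;

            if (gk20a_mem_begin(g, mem))    /* map, or no-op if not needed */
                    return;

            gk20a_mem_wr32(g, mem, 0, 0xdeadbeef);  /* word index 0 */
            word = gk20a_mem_rd32(g, mem, 0);

            gk20a_memset(g, mem, 64, 0, 256);       /* offset, value, size */

            gk20a_mem_end(g, mem);
            (void)word;
    }
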
* gpu: nvgpu: separate IOCTL to set preemption mode (Deepak Nibade, 2016-05-09)

    Add a separate IOCTL, NVGPU_IOCTL_CHANNEL_SET_PREEMPTION_MODE, to allow
    setting preemption modes from the UMD.

    Define preemption modes in nvgpu.h and use them everywhere. Remove the
    mode definitions from mm_gk20a.h.

    Also, we currently support setting only one preemption mode per channel,
    but it is possible to have multiple preemption modes (one for graphics
    and one for compute) set simultaneously. Hence, update struct gr_ctx_desc
    to include two separate preemption modes (graphics_preempt_mode and
    compute_preempt_mode). The NVGPU_IOCTL_CHANNEL_SET_PREEMPTION_MODE API
    also supports setting two separate preemption modes, i.e. one for
    graphics and one for compute. Make the necessary changes in the code to
    support two preemption modes.

    Bug 1646259

    Change-Id: Ia1dea19e609ba8cc0de2f39ab6c0c4cd6b0a752c
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1131805
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Add trace and debugfs for sched params (Thomas Fleury, 2016-05-05)

    JIRA EVLR-244
    JIRA EVLR-318

    Change-Id: Ie95f42212dadcf2d0c1737eeb28812afb03b712f
    Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
    Reviewed-on: http://git-master/r/1120603
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: Ken Adams <kadams@nvidia.com>

* gpu: nvgpu: adapt gk20a_mm_entry for mem_desc (Konsta Holtta, 2016-05-03)

    For the upcoming vidmem refactor, replace struct gk20a_mm_entry's
    contents, which were identical to struct mem_desc, with a struct mem_desc
    member. This makes it possible to use the page table buffers like the
    others too.

    JIRA DNVGPU-23
    JIRA DNVGPU-20

    Change-Id: I714ee5dcb33c27aaee932e8e3ac367e84610b102
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/1139694
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Add gk20a_gmmu_fixed_map() function (Alex Waterman, 2016-04-29)

    Add a function to allow the kernel to do fixed mappings. Necessary for
    the semaphore functionality, since there needs to be a common address in
    each VM for the semaphores.

    Bug 1732449
    JIRA DNVGPU-12

    Change-Id: I2b451db2d3cb3c003d951f7b0ffc87f6c91db7dc
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1133789
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Program NISO sysmem flush addr (Terje Bergstrom, 2016-04-25)

    Program the sysmem flush address to prevent random accesses to address 0.

    Change-Id: I886170395f036805f02e0bce7ecd3c8c46b921df
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1129216
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>

* gpu: nvgpu: Use sysmem aperture for SoC memory (Terje Bergstrom, 2016-04-15)

    In Tegra GPU, SoC memory has to be accessed as vidmem. In discrete GPU,
    it has to be accessed as sysmem.

    Change-Id: I4efe71b54a9a32f0bf1f02ec4016ed74405a14c5
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1120468

* debugfs: Pass bool pointer to debugfs_create_bool() (Alex Van Brunt, 2016-04-14)

    Port the change 621a5f7ad9cd1ce7933f1d302067cbd58354173c from kernel.org
    to the nvgpu driver.

    Bug 200187033

    Change-Id: I7d742f614161d9d4ed59c4216d7c730d57ef4116
    Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/1118397
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>

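After this port, the variable backing a debugfs boolean node is declared as bool rather than u32. A small sketch (the node name and flag here are made up, not ones from the driver):

    #include <linux/debugfs.h>

    static bool timeouts_enabled = true;

    static void create_debug_nodes(struct dentry *parent)
    {
            /* fourth argument is now a bool *, previously a u32 * */
            debugfs_create_bool("timeouts_enabled", 0644, parent,
                                &timeouts_enabled);
    }
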
* gpu: nvgpu: Support GPUs with no physical mode (Terje Bergstrom, 2016-04-13)

    Support GPUs which cannot choose between SMMU and physical addressing.

    Change-Id: If3256fa1bc795a84d039ad3aa63ebdccf5cc0afb
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1120469
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>

* gpu: nvgpu: vgpu: virtualized SMPC/HWPM ctx switch (Peter Daifuku, 2016-04-08)

    Add support for SMPC and HWPM context switching when virtualized.

    Bug 1648200
    JIRASW EVLR-219
    JIRASW EVLR-253

    Change-Id: I80a1613eaad87d8510f00d9aef001400d642ecdf
    Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
    Reviewed-on: http://git-master/r/1122034
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Use device instead of platform_device (Terje Bergstrom, 2016-04-08)

    Use struct device instead of struct platform_device wherever possible.
    This allows adding other bus types later.

    Change-Id: I1657287a68d85a542cdbdd8a00d1902c3d6e00ed
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/1120466

* gpu: nvgpu: support for hwpm context switching (Peter Daifuku, 2016-04-07)

    Add support for hwpm context switching.

    Bug 1648200

    Change-Id: I482899bf165cd2ef24bb8617be16df01218e462f
    Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
    Reviewed-on: http://git-master/r/1120450
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Fix compilation with CONFIG_DEBUG_FS disabled (Vishal Annapurve, 2016-03-29)

    This change fixes issues with kernel compilation when CONFIG_DEBUG_FS is
    disabled.

    Bug 1737085

    Change-Id: I74719674d07ae071e3df99b0dda249b54173f40b
    Signed-off-by: Vishal Annapurve <vannapurve@nvidia.com>
    Reviewed-on: http://git-master/r/1024167
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Sandeep Trasi <strasi@nvidia.com>

* gpu: nvgpu: split address space for fixed allocs (Alex Waterman, 2016-03-25)

    Allow a special address space node to be split out from the user address
    space for fixed allocations. A debugfs node,

      /d/<gpu>/separate_fixed_allocs

    controls this feature. To enable it:

      # echo <SPLIT_ADDR> > /d/<gpu>/separate_fixed_allocs

    where <SPLIT_ADDR> is the address in the GVA address range to split on.
    This causes the split to be made in all subsequently created address
    space ranges until it is turned off. To turn it off, just echo 0x0 into
    the same debugfs node.

    Change-Id: I21a3f051c635a90a6bfa8deae53a54db400876f9
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/1030303
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: bitmap allocator for comptags (Konsta Holtta, 2016-01-19)

    Restore comptags to be bitmap-allocated, like they were before we had the
    buddy allocator.

    The new buddy allocator introduced by
    e99aa2485f8992eabe3556f3ebcb57bdc8ad91ff (originally
    6ab2e0c49cb79ca68d2f83f1d4610783d2eaa79b) is fine for the big VAs, but
    unsuitable for the small compbit store. This commit partially reverts the
    combination of the above commit and a later one,
    86fc7ec9a05999bea8de320840b962db3ee11410, that fixed a bug which is not
    present when using a bitmap.

    With a bitmap allocator, pruning the extra allocation necessary for
    user-mapped mode is possible, so that is also restored.

    The original generic bitmap allocator is not restored; instead, a
    comptag-only allocator is introduced.

    Bug 200145635

    Change-Id: I87f3a911826a801124cfd21e44857dfab1c3f378
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/837180
    (cherry picked from commit 5a504aeb54f3e89e6561932971158a397157b3f2)
    Reviewed-on: http://git-master/r/839742
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

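A bitmap allocator for a small fixed range such as comptag lines can be built directly on the kernel bitmap helpers. A minimal sketch (the real comptag allocator adds its own locking and bookkeeping; these names are illustrative):

    #include <linux/bitmap.h>
    #include <linux/errno.h>

    struct comptag_allocator {
            unsigned long *bitmap;  /* one bit per comptag line */
            unsigned long size;     /* number of comptag lines */
    };

    static int comptags_alloc(struct comptag_allocator *a,
                              unsigned int len, unsigned long *first)
    {
            unsigned long off;

            off = bitmap_find_next_zero_area(a->bitmap, a->size, 0, len, 0);
            if (off >= a->size)
                    return -ENOMEM;

            bitmap_set(a->bitmap, off, len);
            *first = off;
            return 0;
    }

    static void comptags_free(struct comptag_allocator *a,
                              unsigned long first, unsigned int len)
    {
            bitmap_clear(a->bitmap, first, len);
    }
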
* gpu: nvgpu: Control comptagline assignment from kernel (Terje Bergstrom, 2016-01-05)

    On Maxwell, comptaglines are assigned per 128k, but the preferred big
    page size for graphics is 64k. Bit 16 of the GPU VA is used to determine
    which half of the comptagline is used.

    This creates problems if user space wants to map a page multiple times
    and to arbitrary GPU VAs. In one mapping the page might be mapped to the
    lower half of a 128k comptagline, and in another mapping the page might
    be mapped to the upper half.

    Turn on the mode where the MSB of the comptagline in the PTE is used
    instead of bit 16 for the comptagline lower/upper half selection.

    Bug 1704834

    Change-Id: If87e8f6ac0fc9c5624e80fa1ba2ceeb02781355b
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/924322
    Reviewed-by: Alex Waterman <alexw@nvidia.com>

* gpu: nvgpu: rework private command buffer free path (Deepak Nibade, 2015-11-23)

    We currently allocate private command buffers (wait_cmd and incr_cmd)
    before submitting the job but we never free them explicitly. When the
    private command queue of the channel is full, we then try to
    recycle/remove free command buffers. But this recycling happens during
    the submit path, and hence that particular submit path takes much longer.

    Rework this as below:
    - add references to the command buffers to the job structure
    - when the job completes, free the command buffers explicitly
    - remove the code to recycle buffers, since it should no longer be needed

    Note that command buffers need to be freed in the order of their
    allocation. Ensure this with an error print before freeing the command
    buffer entry.

    Bug 200141116
    Bug 1698667

    Change-Id: Id4b69429d7ad966307e0d122a71ad55076684307
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/827638
    (cherry picked from commit c6cefd69b71c9b70d6df5343b13dfcfb3fa99598)
    Reviewed-on: http://git-master/r/835802
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

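The reworked flow above ties each wait/incr command buffer to the job that used it and releases them when the job completes. A sketch of that ownership, with invented structure names (priv_cmd_entry_free() is a hypothetical stand-in):

    #include <linux/list.h>

    struct priv_cmd_entry;  /* a slice of the channel's priv cmd queue */

    void priv_cmd_entry_free(struct priv_cmd_entry *e);  /* hypothetical */

    struct channel_job {
            struct list_head list;
            struct priv_cmd_entry *wait_cmd;
            struct priv_cmd_entry *incr_cmd;
    };

    static void job_complete(struct channel_job *job)
    {
            /* entries are freed in allocation order so the circular
             * queue's get pointer only ever moves forward */
            priv_cmd_entry_free(job->wait_cmd);
            priv_cmd_entry_free(job->incr_cmd);
            list_del(&job->list);
    }
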
* gpu: nvgpu: User-space managed address space support (Sami Kiminki, 2015-11-18)

    Implement NVGPU_GPU_IOCTL_ALLOC_AS_FLAGS_USERSPACE_MANAGED, which enables
    creating userspace-managed GPU address spaces.

    When an address space is marked as userspace-managed, the following
    changes are in effect:
    - Only fixed-address mappings are allowed.
    - VA space allocation for fixed-address mappings is not required, except
      to mark space as sparse.
    - Maps and unmaps are always immediate. In particular, the mapping ref
      increments at kickoffs and decrements at job completion are skipped.

    Bug 1614735
    Bug 1623949
    Bug 1660392

    Change-Id: I834fe19b3f65e9b02c268952383eddee0e465759
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: http://git-master/r/738558
    Reviewed-on: http://git-master/r/833253
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Implement NVGPU_GPU_IOCTL_GET_BUFFER_INFO (Sami Kiminki, 2015-11-17)

    Implement NVGPU_GPU_IOCTL_GET_BUFFER_INFO. The new IOCTL can be used to
    identify buffers and retrieve their sizes. This allows the userspace to
    be agnostic to the dmabuf implementation, as the generic dmabuf fd
    interface does not have a reliable way for buffer identification.

    Bug 1614735
    Bug 1623949
    Bug 1660392

    Change-Id: Ic3dd0a9385c9852778110ccb80636dd6f4f36208
    Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
    Reviewed-on: http://git-master/r/822845
    Reviewed-on: http://git-master/r/833252
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: use a separate big vm for cde (Konsta Holtta, 2015-11-11)

    Allocate a separate VM for CDE channels instead of using the system (PMU)
    vm, and make it much bigger than the PMU's to fit the maximum number of
    CDE channels there.

    Bug 1566740

    Change-Id: I4f487c40c9ec79cc9ffb880b0ecd3f47eb450336
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: http://git-master/r/815149
    Reviewed-by: Automatic_Commit_Validation_User

* gpu: nvgpu: Do not use G_ELPG_FLUSH (Terje Bergstrom, 2015-11-10)

    G_ELPG_FLUSH is protected in some chips. Use L2 flush operations instead.

    Bug 1698618

    Change-Id: I984a8ace8bcd0ad2d4a4e2d63af75a342bdeb75a
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/828656
    (cherry picked from commit ba9075fa43975112a221d37d246f0b8f5af40fab)
    Reviewed-on: http://git-master/r/829415