path: root/drivers/gpu/nvgpu/gk20a/semaphore_gk20a.c
* Revert "gpu: nvgpu: New allocator for VA space" (Terje Bergstrom, 2015-05-12)

    This reverts commit 2e235ac150fa4af8632c9abf0f109a10973a0bf5.

    Change-Id: I3aa745152124c2bc09c6c6dc5aeb1084ae7e08a4
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/741469
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Hiroshi Doyu <hdoyu@nvidia.com>
    Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
* gpu: nvgpu: New allocator for VA space (Alex Waterman, 2015-05-11)

    Implement a new buddy allocation scheme for the GPU's VA space. The bitmap
    allocator was using too much memory and is not a scalable solution as the
    GPU's address space keeps getting bigger. The buddy allocation scheme is
    much more memory efficient when the majority of the address space is not
    allocated.

    The buddy allocator is not constrained by the notion of a split address
    space. The bitmap allocator could only manage either small pages or large
    pages, but not both at the same time; thus the bottom of the address space
    was for small pages and the top for large pages. Although that split is
    not removed quite yet, the new allocator enables that removal.

    The buddy allocator is also very scalable. It can manage anything from the
    relatively small comptag space to the enormous GPU VA space, and everything
    in between. This is important since the GPU has lots of different sized
    spaces that need managing.

    Currently there are certain limitations. For one, the allocator does not
    handle the fixed allocations from CUDA very well; it can do so, but with
    certain caveats. The PTE page size is always set to small, which means the
    buddy allocator may place other small-page allocations in the buddies
    around the fixed allocation. It does this to avoid having large- and
    small-page allocations in the same PDE.

    Change-Id: I501cd15af03611536490137331d43761c402c7f9
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/740694
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
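[Editor's note] The commit above describes buddy allocation only in prose. Below is a minimal, self-contained sketch of the split/merge idea it relies on: a power-of-two range is split recursively into "buddies" until a block of the requested order is found, and a freed block is merged back with its buddy whenever both halves are free. This is a generic textbook-style illustration, not the nvgpu buddy allocator; all names (buddy_alloc, buddy_free, MAX_ORDER, ...) are hypothetical.

    /* Generic buddy-allocation sketch over 2^MAX_ORDER pages. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_ORDER 10u                   /* manage 2^MAX_ORDER pages */
    #define NPAGES    (1u << MAX_ORDER)

    /* free_map[o][i] marks whether block i of order o is free. */
    static uint8_t free_map[MAX_ORDER + 1][NPAGES];

    static void buddy_init(void)
    {
        memset(free_map, 0, sizeof(free_map));
        free_map[MAX_ORDER][0] = 1;         /* one big free block at the top */
    }

    /* Allocate a block of 2^order pages; returns first page index or -1. */
    static int buddy_alloc(unsigned int order)
    {
        unsigned int o, i;

        for (o = order; o <= MAX_ORDER; o++) {
            for (i = 0; i < NPAGES >> o; i++) {
                if (!free_map[o][i])
                    continue;
                free_map[o][i] = 0;
                /* Split down to the requested order, leaving each upper buddy free. */
                while (o > order) {
                    o--;
                    i <<= 1;
                    free_map[o][i + 1] = 1;
                }
                return (int)(i << order);
            }
        }
        return -1;                          /* no block large enough */
    }

    /* Free a block of 2^order pages starting at page index 'page'. */
    static void buddy_free(int page, unsigned int order)
    {
        unsigned int i = (unsigned int)page >> order;

        /* Merge upward as long as the buddy block is also free. */
        while (order < MAX_ORDER && free_map[order][i ^ 1]) {
            free_map[order][i ^ 1] = 0;
            i >>= 1;
            order++;
        }
        free_map[order][i] = 1;
    }

    int main(void)
    {
        buddy_init();
        int a = buddy_alloc(0);             /* one page    */
        int b = buddy_alloc(3);             /* eight pages */
        printf("a=%d b=%d\n", a, b);
        buddy_free(a, 0);
        buddy_free(b, 3);                   /* merges back into one big block */
        return 0;
    }

Because free space is tracked per power-of-two block rather than per page, a mostly empty address space costs very little metadata, which is the memory-efficiency argument the commit makes against the bitmap allocator.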
* gpu: nvgpu: Per-alloc alignment (Terje Bergstrom, 2015-03-18)

    Change-Id: I8b7e86afb68adf6dd33b05995d0978f42d57e7b7
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/554185
    GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Improve locking in semaphore_gk20a.c (Lauri Peltonen, 2015-03-18)

    Fix some possible race conditions when manipulating the mapping list of
    semaphore pools. Acquire a reference to the vm in gk20a_semaphore_pool_map,
    and release that reference in gk20a_semaphore_pool_unmap.

    Bug 1450122

    Change-Id: I204e9c3dffd5162538b93e628d016dc06b3a5fb6
    Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
    Reviewed-on: http://git-master/r/422160
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
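[Editor's note] The fix above is essentially a lifetime rule: the map path pins the vm, and the unmap path unpins it, so the vm cannot disappear while a pool mapping still points into it. The following is a standalone userspace sketch of that pattern only; the names (vm_get, vm_put, pool_map, pool_unmap) are illustrative and are not the nvgpu symbols.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct vm {
        int refcount;
        pthread_mutex_t lock;
    };

    struct pool_mapping {
        struct vm *vm;                  /* the vm this pool is mapped into */
        unsigned long long gpu_va;      /* placeholder GPU virtual address */
    };

    static void vm_get(struct vm *vm)
    {
        pthread_mutex_lock(&vm->lock);
        vm->refcount++;
        pthread_mutex_unlock(&vm->lock);
    }

    static void vm_put(struct vm *vm)
    {
        pthread_mutex_lock(&vm->lock);
        int last = (--vm->refcount == 0);
        pthread_mutex_unlock(&vm->lock);

        if (last) {                     /* last user tears the vm down */
            pthread_mutex_destroy(&vm->lock);
            printf("vm torn down\n");
            free(vm);
        }
    }

    /* Map: pin the vm first, then record the mapping. */
    static struct pool_mapping *pool_map(struct vm *vm)
    {
        struct pool_mapping *m = malloc(sizeof(*m));

        if (!m)
            return NULL;
        vm_get(vm);
        m->vm = vm;
        m->gpu_va = 0x100000ull;        /* pretend mapping address */
        return m;
    }

    /* Unmap: undo the mapping, then drop the reference taken in pool_map. */
    static void pool_unmap(struct pool_mapping *m)
    {
        vm_put(m->vm);
        free(m);
    }

    int main(void)
    {
        struct vm *vm = malloc(sizeof(*vm));

        vm->refcount = 1;               /* creator's own reference */
        pthread_mutex_init(&vm->lock, NULL);

        struct pool_mapping *m = pool_map(vm);
        vm_put(vm);                     /* creator lets go...              */
        pool_unmap(m);                  /* ...vm survives until last unmap */
        return 0;
    }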
* gpu: nvgpu: Add gk20a semaphore APIs (Lauri Peltonen, 2015-03-18)

    Add semaphore_gk20a.c/h that implement a new semaphore management API for
    the gk20a driver. The API introduces two entities, 'semaphore pools' and
    'semaphores'.

    Semaphore pools are memory areas dedicated to hosting one or more
    semaphores. Typically, one pool equals one 4K page. A semaphore pool is
    always mapped to kernel memory, and it can be mapped and unmapped to GPU
    address spaces using gk20a_semaphore_pool_map/unmap.

    Semaphores are backed by 16 bytes of memory allocated from a semaphore
    pool. The value of a semaphore can be 0=acquired or 1=released. When
    allocated, semaphores are initialized to the acquired state. They can be
    released, or their release can be waited for, by the CPU or GPU.

    Semaphores are intended to be used only once; after a semaphore is
    released, it should be freed so that the slot within the semaphore pool
    can be reused. However, GPU jobs must take references to the semaphores
    they use (just as they take references on the memory buffers they use) so
    that the semaphore backing memory is not reused too soon.

    Bug 1450122
    Bug 1445450

    Change-Id: I3fd35f34ca55035decc3e06a9c0ede20c1d48db9
    Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
    Reviewed-on: http://git-master/r/374842
    Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
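[Editor's note] As a reading aid for the commit above, here is a standalone model of the described semantics only: a 4K pool carved into 16-byte slots, each semaphore allocated in the acquired (0) state and released (1) exactly once, then freed so its slot can be reused. It is not the gk20a driver's API; every name here (sema_pool, sema_alloc, sema_release, ...) is illustrative.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define POOL_SIZE  4096u
    #define SEMA_SIZE  16u
    #define SEMA_SLOTS (POOL_SIZE / SEMA_SIZE)      /* 256 semaphores per pool */

    struct sema_pool {
        uint32_t mem[POOL_SIZE / sizeof(uint32_t)]; /* backing 4K page (CPU view) */
        bool used[SEMA_SLOTS];                      /* which 16-byte slots are taken */
    };

    struct sema {
        struct sema_pool *pool;
        unsigned int slot;
    };

    /* The semaphore value lives in the first word of its 16-byte slot. */
    static uint32_t *sema_value(struct sema *s)
    {
        return &s->pool->mem[s->slot * (SEMA_SIZE / sizeof(uint32_t))];
    }

    /* Allocate a semaphore; it starts in the acquired (0) state. */
    static bool sema_alloc(struct sema_pool *p, struct sema *s)
    {
        for (unsigned int i = 0; i < SEMA_SLOTS; i++) {
            if (p->used[i])
                continue;
            p->used[i] = true;
            s->pool = p;
            s->slot = i;
            *sema_value(s) = 0;                     /* 0 = acquired */
            return true;
        }
        return false;                               /* pool exhausted */
    }

    /* Release: the single state transition a semaphore ever makes. */
    static void sema_release(struct sema *s)
    {
        *sema_value(s) = 1;                         /* 1 = released */
    }

    static bool sema_is_released(struct sema *s)
    {
        return *sema_value(s) == 1;
    }

    /* Free the slot so the pool can hand it out again. */
    static void sema_free(struct sema *s)
    {
        s->pool->used[s->slot] = false;
    }

    int main(void)
    {
        struct sema_pool pool;
        struct sema s;

        memset(&pool, 0, sizeof(pool));
        sema_alloc(&pool, &s);
        printf("released? %d\n", sema_is_released(&s));  /* 0: still acquired */
        sema_release(&s);                                /* e.g. signalled by a GPU job */
        printf("released? %d\n", sema_is_released(&s));  /* 1 */
        sema_free(&s);                                   /* single use: free after release */
        return 0;
    }

In the real driver, the commit notes, the pool is additionally mapped into GPU address spaces (gk20a_semaphore_pool_map/unmap) and jobs hold references to in-flight semaphores so a released slot is not recycled too soon.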