path: root/drivers/gpu/nvgpu/gk20a/semaphore_gk20a.c
* gpu: nvgpu: Improve locking in semaphore_gk20a.c (Lauri Peltonen, 2015-03-18)
  Fix some possible race conditions when manipulating the mapping list
  of semaphore pools. Acquire a reference to the vm in
  gk20a_semaphore_pool_map, and release that reference in
  gk20a_semaphore_pool_unmap.

  Bug 1450122

  Change-Id: I204e9c3dffd5162538b93e628d016dc06b3a5fb6
  Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
  Reviewed-on: http://git-master/r/422160
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
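  The pattern this commit describes can be illustrated with a minimal
  userspace sketch. The struct layout, field names, and the helpers
  my_vm_get()/my_vm_put() below are assumptions for illustration, not the
  actual nvgpu code; the point is that the pool's mapping list is only
  touched under a lock, and the vm is pinned with a reference for the
  lifetime of each mapping.

    /*
     * Illustrative sketch only; names are hypothetical, not the nvgpu
     * driver's. Models: lock around the maps list, vm refcount held
     * from map until unmap.
     */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdlib.h>

    struct my_vm {
    	atomic_int refcount;            /* hypothetical vm refcount */
    };

    struct my_pool_map {
    	struct my_vm *vm;
    	struct my_pool_map *next;
    };

    struct my_semaphore_pool {
    	pthread_mutex_t maps_lock;      /* guards the maps list */
    	struct my_pool_map *maps;
    };

    static void my_vm_get(struct my_vm *vm)
    {
    	atomic_fetch_add(&vm->refcount, 1);
    }

    static void my_vm_put(struct my_vm *vm)
    {
    	if (atomic_fetch_sub(&vm->refcount, 1) == 1)
    		free(vm);               /* last reference: tear down */
    }

    int my_semaphore_pool_map(struct my_semaphore_pool *p, struct my_vm *vm)
    {
    	struct my_pool_map *map = calloc(1, sizeof(*map));

    	if (!map)
    		return -1;

    	my_vm_get(vm);                  /* pin vm while the map exists */
    	map->vm = vm;

    	pthread_mutex_lock(&p->maps_lock);
    	map->next = p->maps;            /* insert under the lock */
    	p->maps = map;
    	pthread_mutex_unlock(&p->maps_lock);
    	return 0;
    }

    void my_semaphore_pool_unmap(struct my_semaphore_pool *p, struct my_vm *vm)
    {
    	struct my_pool_map **pp, *map = NULL;

    	pthread_mutex_lock(&p->maps_lock);
    	for (pp = &p->maps; *pp; pp = &(*pp)->next) {
    		if ((*pp)->vm == vm) {
    			map = *pp;
    			*pp = map->next; /* unlink under the lock */
    			break;
    		}
    	}
    	pthread_mutex_unlock(&p->maps_lock);

    	if (map) {
    		my_vm_put(map->vm);     /* drop the map's reference */
    		free(map);
    	}
    }

  Holding the reference from map to unmap is what closes the race: the vm
  cannot be torn down while any pool mapping into it still exists.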
* gpu: nvgpu: Add gk20a semaphore APIs (Lauri Peltonen, 2015-03-18)
  Add semaphore_gk20a.c/h, which implement a new semaphore management
  API for the gk20a driver. The API introduces two entities, 'semaphore
  pools' and 'semaphores'.

  Semaphore pools are memory areas dedicated to hosting one or more
  semaphores. Typically, one pool equals one 4K page. A semaphore pool
  is always mapped into kernel memory, and it can be mapped into and
  unmapped from GPU address spaces using gk20a_semaphore_pool_map/unmap.

  Semaphores are backed by 16 bytes of memory allocated from a semaphore
  pool. The value of a semaphore can be 0=acquired or 1=released. When
  allocated, semaphores are initialized to the acquired state. They can
  be released, and their release can be waited for by the CPU or GPU.

  Semaphores are intended to be used only once; after a semaphore is
  released it should be freed so that its slot within the semaphore pool
  can be reused. However, GPU jobs must take references on the
  semaphores they use (just as they take references on the memory
  buffers they use) so that the semaphore backing memory is not reused
  too soon.

  Bug 1450122
  Bug 1445450

  Change-Id: I3fd35f34ca55035decc3e06a9c0ede20c1d48db9
  Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
  Reviewed-on: http://git-master/r/374842
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
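  The semantics described in this commit message can be modeled with a
  short self-contained C sketch. All names below (sema_pool, sema_slot,
  sema_alloc, and so on) are hypothetical, not the gk20a API: each
  semaphore is one 16-byte slot in a 4K pool page, starts in the
  acquired state (0), and is flipped to released (1) once by whoever
  signals it, while waiters poll the value.

    /*
     * Minimal userspace model of the pool/semaphore semantics above;
     * names are assumptions, not the driver's API.
     */
    #include <stdatomic.h>
    #include <stdint.h>

    #define POOL_SIZE   4096
    #define SEMA_SIZE   16
    #define SEMA_COUNT  (POOL_SIZE / SEMA_SIZE)   /* 256 slots per page */

    struct sema_slot {
    	atomic_uint value;            /* 0 = acquired, 1 = released */
    	uint8_t pad[SEMA_SIZE - sizeof(atomic_uint)];
    };

    struct sema_pool {
    	struct sema_slot slot[SEMA_COUNT];
    	uint32_t bitmap[SEMA_COUNT / 32];     /* allocation bitmap */
    };

    /* Allocate one slot; semaphores start out acquired (value 0). */
    static struct sema_slot *sema_alloc(struct sema_pool *p)
    {
    	for (int i = 0; i < SEMA_COUNT; i++) {
    		if (!(p->bitmap[i / 32] & (1u << (i % 32)))) {
    			p->bitmap[i / 32] |= 1u << (i % 32);
    			atomic_store(&p->slot[i].value, 0);
    			return &p->slot[i];
    		}
    	}
    	return NULL;                  /* pool exhausted */
    }

    /* Releasing is a single store; CPU or GPU waiters poll for it. */
    static void sema_release(struct sema_slot *s)
    {
    	atomic_store(&s->value, 1);
    }

    static void sema_wait(struct sema_slot *s)
    {
    	while (atomic_load(&s->value) == 0)
    		;                     /* busy-wait for the release */
    }

    /* After release, free the slot so a new semaphore can reuse it. */
    static void sema_free(struct sema_pool *p, struct sema_slot *s)
    {
    	int i = (int)(s - p->slot);

    	p->bitmap[i / 32] &= ~(1u << (i % 32));
    }

  The real driver adds reference counting on top of this: a slot is only
  returned to the bitmap once every GPU job holding the semaphore has
  dropped its reference, which is what prevents premature reuse.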