| Commit message | Author | Age |
| |
A use-after-free is possible when one thread in
gk20a_free_error_notifiers() frees the error notifier while
another thread in gk20a_set_error_notifier() is still using it.
Fix this by introducing a mutex, error_notifier_mutex, to guard
all error notifier accesses.
Take the mutex in gk20a_free_error_notifiers() and in
gk20a_set_error_notifier() before touching the notifier.
In gk20a_init_error_notifier(), set the ch->error_notifier_ref
pointer under the mutex, and only after the notifier is
completely initialized.
Bug 1824788
Change-Id: I47e1ab57d54f391799f5a0999840b663fd34585f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1233988
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
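Below is a minimal sketch of the locking pattern this commit describes; the
structure layout and field names are simplified placeholders, not the actual
nvgpu definitions.

    #include <linux/mutex.h>
    #include <linux/types.h>

    struct error_notifier {                 /* placeholder payload */
            u32 error;
    };

    struct channel {                        /* simplified stand-in for the gk20a channel */
            struct error_notifier *error_notifier_ref;
            struct mutex error_notifier_mutex;
    };

    static void set_error_notifier(struct channel *ch, u32 error)
    {
            mutex_lock(&ch->error_notifier_mutex);
            if (ch->error_notifier_ref)     /* still published? */
                    ch->error_notifier_ref->error = error;
            mutex_unlock(&ch->error_notifier_mutex);
    }

    static void free_error_notifiers(struct channel *ch)
    {
            mutex_lock(&ch->error_notifier_mutex);
            ch->error_notifier_ref = NULL;  /* unpublish under the same lock */
            mutex_unlock(&ch->error_notifier_mutex);
            /* the backing memory can now be released safely */
    }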
| |
Fix the sparse warning below by including nvgpu_common.h
from nvgpu_common.c:
nvgpu/drivers/gpu/nvgpu/nvgpu_common.c:105:5: warning: symbol
'nvgpu_probe' was not declared. Should it be static?
Bug 200088648
Change-Id: I81f20a5be1c16ba33d6c17a6c72836107878d1df
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1233960
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
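The fix follows the standard pattern for this sparse warning: the file that
defines a non-static symbol should include the header that declares it. A
sketch (the prototype's argument list is illustrative, not the real nvgpu
signature):

    /* nvgpu_common.h */
    struct gk20a;
    int nvgpu_probe(struct gk20a *g);       /* illustrative prototype */

    /* nvgpu_common.c */
    #include "nvgpu_common.h"               /* provides a prior declaration */

    int nvgpu_probe(struct gk20a *g)
    {
            /* ... */
            return 0;
    }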
| |
Suppress the error message printed when nvgpu tries to load a
VBIOS overlay but none is found; this situation is normal. Do this
by moving gk20a_request_firmware() to the generic nvgpu function
nvgpu_request_firmware() and adding a NO_WARN flag to it.
Also introduce a NO_SOC flag to suppress the attempt to load
firmware from the SoC-specific directory in addition to the
chip-specific directory. Use it for dGPU firmware files.
Bug 200236777
Change-Id: I0294d3308f029a6a6d3c2effa579d5f69a91e418
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1223840
(cherry picked from commit cca44c3f010f15918cdd2259c15170ba1917828a)
Reviewed-on: http://git-master/r/1233353
GVS: Gerrit_Virtual_Submit
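A sketch of the intended call pattern; the flag macros, their values and the
firmware path are placeholder assumptions, not the literal nvgpu definitions:

    #include <linux/firmware.h>

    /* placeholder flag values -- the real defines live in the nvgpu headers */
    #define NVGPU_REQUEST_NO_WARN   (1U << 0)
    #define NVGPU_REQUEST_NO_SOC    (1U << 1)

    static int load_vbios_overlay(struct gk20a *g)
    {
            const struct firmware *fw;

            fw = nvgpu_request_firmware(g, "vbios_overlay.rom",
                            NVGPU_REQUEST_NO_WARN | /* a missing overlay is normal */
                            NVGPU_REQUEST_NO_SOC);  /* dGPU: skip the SoC firmware dir */
            if (!fw)
                    return 0;               /* fall back to the on-board VBIOS */

            /* ... apply the overlay ... */
            release_firmware(fw);
            return 0;
    }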
| |
JIRA DNVGPU-72
JIRA DNVGPU-73
Change-Id: I5932779f6913b55692f69fac692a1a66a9912fc4
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1216562
(cherry picked from commit d44be7714afa1f4257a81799c326b453da3d2d5a)
Reviewed-on: http://git-master/r/1233350
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
| |
JIRA DNVGPU-118
Move the vidmem allocation for pmuboardobj to the cmd-specific
functions and copy the data from the PMU in case of getstatus.
Fix the getstatus boardobjgrp implementation and add a #define
for the rail id to make getstatus of the VF table more
meaningful.
Change-Id: I366a022c13e51e823116ce2354794babc48981a2
Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
Reviewed-on: http://git-master/r/1209841
(cherry picked from commit 8c12599f801decc77bbc1acfd1937dfefb21f35e)
Reviewed-on: http://git-master/r/1231839
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
| |
Add the known dGPU SKUs to the PCIe device id table, and remove the
ANY_GPU_ID wildcard. With this, nvgpu no longer tries to probe
unknown GPUs.
JIRA DNVGPU-72
Change-Id: Ie32c3137e9fa89a9e6dcf1e578c0b9d7339d7e75
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1219129
(cherry picked from commit 5c56088fbf8cb815d8be3355ecbb597fb7bfc795)
Reviewed-on: http://git-master/r/1231042
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
| |
bug 1809509
The latest PMU now returns information about 3 queues only. The
nvgpu PMU driver still supports 5 queues to stay compatible with
older firmware. Handle this properly.
Change-Id: I4bc166712465f4b52537c97e6d254760c59e0d16
Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
Reviewed-on: http://git-master/r/1215533
(cherry picked from commit c7428c031a095b2d42512b7a8a0a9d818290e376)
Reviewed-on: http://git-master/r/1231040
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
| |
Support loading VBIOS from file system instead of EEPROM.
JIRA DNVGPU-134
Change-Id: I4c68dc4ab7c1138e8cf2fa9146de5473274491b4
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1211614
(cherry picked from commit d4e35e60ba513e471fe5a85ed570e7ec06c88f06)
Reviewed-on: http://git-master/r/1229492
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
| |
bug 200067946
Change-Id: I50de1cda004c08b5a4af3fb06a3970c35197f419
Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
Reviewed-on: http://git-master/r/1230622
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
| |
Move the wmb() before the loop in pramin-accessed batch writes and
use writel_relaxed() directly, instead of calling gk20a_writel(),
which would issue a wmb() on every iteration.
Jira DNVGPU-24
Change-Id: I4c1375a819266727f97e2f109d3132b5b0974ac6
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1213600
(cherry picked from commit 79e3e38e0c5384ababfd55b8e6cd9723eb8f7b66)
Reviewed-on: http://git-master/r/1184343
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
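A sketch of the resulting batch-write pattern; the function and its arguments
are simplified placeholders for the nvgpu pramin write path:

    #include <linux/io.h>

    static void pramin_write_batch(void __iomem *base, const u32 *src, int n)
    {
            int i;

            wmb();  /* order all prior writes once, before the whole batch */
            for (i = 0; i < n; i++)
                    writel_relaxed(src[i], base + i * 4);   /* no per-write barrier */
    }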
| |
- Clocks params update as per r370
JIRA DNVGPU-116
Change-Id: Id709d6b1e49d717a00cec72c9607f28a27f86c1e
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: http://git-master/r/1212840
(cherry picked from commit f3a47a7daf9b3803565958436d6c7475510e641a)
Reviewed-on: http://git-master/r/1229672
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
| |
- Update PMU version to support r370
- flcn_bl_dmem_desc_v1 params update to
support PMU bootloader
- PMU_UNIT_CLK value update
JIRA DNVGPU-116
Change-Id: Ic4096e4a5ea55ca6b7c72670061e55b4719e0895
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: http://git-master/r/1212834
(cherry picked from commit 32257231733303b0859230719f3857ad2d9d8820)
Reviewed-on: http://git-master/r/1227289
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
| |
Cancel the timeout handler before cleaning up the list of jobs.
This prevents a race where the timeout handler accesses
already-freed jobs.
Bug 1814108
Change-Id: I37cfc408cb1f96b8b0e62db1ca8067a2ae43dd0e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1221698
(cherry picked from commit be0d146cba8dc2b1bdb7c53ae39188a4bf0ca019)
Reviewed-on: http://git-master/r/1223843
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
| |
Both gk20a_request_firmware() and its callers logged an error when
a firmware file could not be found. Remove the error message from
gk20a_request_firmware().
JIRA DNVGPU-143
Change-Id: I74cb6a6774762732d7702f1eadbeef19dcb9a85e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1211612
(cherry picked from commit 818364189036c6732b19682debb63a033c6a6c2a)
Reviewed-on: http://git-master/r/1229491
GVS: Gerrit_Virtual_Submit
| |
When the priv_cmd buffer is full, return EAGAIN to userspace so
that it may retry the submit ioctl.
Bug 1795076
Change-Id: I0752d52b677aaf915e8e472bec6140e14c885589
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1213586
(cherry picked from commit fc6b23559a839620accd5bbd2957e69310b87a5b)
Reviewed-on: http://git-master/r/1229488
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
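On the userspace side the contract is simply to retry on EAGAIN. A sketch,
with a placeholder ioctl number and argument struct:

    #include <errno.h>
    #include <sys/ioctl.h>

    #define NVGPU_SUBMIT_GPFIFO 0           /* placeholder for the real ioctl number */
    struct submit_args;                     /* placeholder for the real submit args */

    static int submit_with_retry(int channel_fd, struct submit_args *args)
    {
            int ret;

            do {
                    ret = ioctl(channel_fd, NVGPU_SUBMIT_GPFIFO, args);
            } while (ret < 0 && errno == EAGAIN);   /* buffer full: just retry */

            return ret;
    }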
| |
As the golden_ctx_image is large, the allocation may intermittently
fail when using kzalloc. Since we don't need physically contiguous
memory, use vzalloc instead.
Bug 200231436
Change-Id: Ic2fb31dea94c8721832dc257334608e1fc283943
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1207172
(cherry picked from commit 994a7b162ec74518ae0f50dfb5ac197e44019992)
Reviewed-on: http://git-master/r/1229472
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
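A minimal sketch of the change; the surrounding structure is an assumed
stand-in for the real graphics context:

    #include <linux/errno.h>
    #include <linux/vmalloc.h>

    struct gr_ctx {                         /* simplified placeholder */
            void *golden_image;
            size_t golden_image_size;
    };

    static int alloc_golden_image(struct gr_ctx *gr, size_t size)
    {
            gr->golden_image = vzalloc(size);       /* was kzalloc(size, GFP_KERNEL) */
            if (!gr->golden_image)
                    return -ENOMEM;
            gr->golden_image_size = size;
            return 0;
    }

    static void free_golden_image(struct gr_ctx *gr)
    {
            vfree(gr->golden_image);        /* vzalloc pairs with vfree, not kfree */
            gr->golden_image = NULL;
    }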
| |
Attach the necessary namemap structures to the clock struct so the
clock domains can be enumerated from the debugfs code in nvgpu-t18x,
and add an accessor for the fields.
JIRA DNVGPU-98
Change-Id: I6e5c6e763b2b88daa1995f4136a9a7b33ea25b17
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: http://git-master/r/1199083
Reviewed-on: http://git-master/r/1204016
(cherry picked from commit b9d95a45791b93ddc010d1aeddbe798d2a9705d4)
Reviewed-on: http://git-master/r/1227910
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
Do not call load prod callbacks that are set to NULL.
Bug 1799537
Change-Id: Ie951fb71fa8eacd10623abcd058f32db59004c2e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1208467
(cherry picked from commit c020e16adfa2b2bc2e3e8d0c63527a6089c59906)
Reviewed-on: http://git-master/r/1227268
GVS: Gerrit_Virtual_Submit
| |
JIRA DNVGPU-45
Change-Id: I237ce81e31b036c05c82d46eea8694ffe1c2e3df
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Signed-off-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-on: http://git-master/r/1205849
(cherry picked from commit 9a4006f76b75a8ad525e7aa5ad1f609aaae49126)
Reviewed-on: http://git-master/r/1227256
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
| |
Switch from buddy to bitmap allocator for PMU DMEM. PMU DMEM is small
and we cannot allocate it sparsely.
JIRA DNVGPU-85
Change-Id: Ia23d25abab593fb0d92a2329d9878da7a72bc6ca
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1203974
(cherry picked from commit 78216c9d5f0974f94ce0f818db854ef08211d4e4)
Reviewed-on: http://git-master/r/1222682
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
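For a small, densely used region like PMU DMEM, a bitmap over fixed-size
blocks is a natural fit. A sketch using the generic kernel bitmap helpers;
this is an illustration only, not the nvgpu allocator, and the sizes are
arbitrary:

    #include <linux/bitmap.h>
    #include <linux/spinlock.h>

    #define DMEM_BLOCKS     1024    /* DMEM size / allocation granularity */

    static DECLARE_BITMAP(dmem_map, DMEM_BLOCKS);
    static DEFINE_SPINLOCK(dmem_lock);

    /* Allocate nr contiguous blocks; returns the first block index or -1. */
    static long dmem_alloc(unsigned int nr)
    {
            unsigned long start;

            spin_lock(&dmem_lock);
            start = bitmap_find_next_zero_area(dmem_map, DMEM_BLOCKS, 0, nr, 0);
            if (start >= DMEM_BLOCKS) {
                    spin_unlock(&dmem_lock);
                    return -1;              /* no contiguous run left */
            }
            bitmap_set(dmem_map, start, nr);
            spin_unlock(&dmem_lock);
            return start;
    }

    static void dmem_free(unsigned long start, unsigned int nr)
    {
            spin_lock(&dmem_lock);
            bitmap_clear(dmem_map, start, nr);
            spin_unlock(&dmem_lock);
    }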
| |
It is possible to allocate a larger size than the user requested,
e.g. if we allocate at 64k granularity and the user asks for a
32k buffer, we end up allocating a 64k chunk.
The user still asks to map the buffer with size 32k, and hence we
reserve mapping addresses only for 32k.
But due to a bug in update_gmmu_ptes_locked() we end up creating
mappings for the full 64k size and corrupt some other mappings.
Fix this by using min(chunk->length, map_size) while mapping the
address range for a chunk.
Also, map_size becomes zero once we have mapped the whole requested
address range, so bail out of the loop when map_size is zero.
Bug 1805064
Change-Id: I125d3ce261684dce7e679f9cb39198664f8937c4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1217755
(cherry picked from commit 3ee1c6bc0718fb8dd9a28a37eff43a2872bdd5c0)
Reviewed-on: http://git-master/r/1221775
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
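A sketch of the clamped mapping loop described above; the structures and the
per-range helper are simplified stand-ins for update_gmmu_ptes_locked():

    #include <linux/kernel.h>       /* min() */
    #include <linux/types.h>

    struct chunk {                  /* placeholder for a page allocator chunk */
            u64 base;
            u64 length;
    };

    void map_one_range(u64 gpu_va, u64 phys, u64 len);      /* placeholder helper */

    static void map_chunks(struct chunk *chunks, int nr, u64 gpu_va, u64 map_size)
    {
            int i;

            for (i = 0; i < nr && map_size; i++) {  /* stop once the request is covered */
                    u64 len = min(chunks[i].length, map_size);      /* never map past it */

                    map_one_range(gpu_va, chunks[i].base, len);
                    gpu_va += len;
                    map_size -= len;
            }
    }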
| |
Post the GR_SEMAPHORE_WRITE_AWAKEN event on the semaphore write
awaken interrupt for the channel.
BUG 200223530
Change-Id: I19eb61578d1c562be84e20ecaff9fb3bc9ace516
Signed-off-by: Nikhil Mahale <nmahale@nvidia.com>
Reviewed-on: http://git-master/r/1193726
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
Add error checks to prevent loading a random image as VBIOS.
JIRA DNVGPU-134
Change-Id: Ia3efd0ed743b6a7661707612828a795802e96cd9
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1211613
(cherry picked from commit ffa2b6df3f11d6c63b1e4337bd7d989932bdfce8)
Reviewed-on: http://git-master/r/1223844
GVS: Gerrit_Virtual_Submit
| |
To handle the chip-specific runlist entry size and structure, add
and implement the relevant function pointers.
Bug 1735760
Change-Id: I01f3ea78fb21d9fe30c82ba51ef24d7d95ebf90a
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: http://git-master/r/1214473
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
| |
Expose PCI device id info for PCI devices.
Bug 1643487
Change-Id: Ib0e3295b33c2343d99553a5c48e3f67d419d207b
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: http://git-master/r/1214946
(cherry picked from commit a6e23a315a094f1df1f7db8e4307a10d06f28411)
Reviewed-on: http://git-master/r/1216336
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
| |
Fix the location of the rmb() in the buddy and bitmap allocators.
The previous fix was not quite right: the rmb() needs to come after
the init value is read so that any subsequent reads are ordered
after it. Otherwise subsequent reads could be loaded before the
init value is checked and could observe invalid data.
Bug 1811382
Change-Id: I6d1fa25cc16c5e19fd2769d489878afa2f8e3e35
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1221061
(cherry picked from commit f2ddb6c56e554c39733c8fc9ae870dfc12e47b44)
Reviewed-on: http://git-master/r/1223458
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
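The read-side ordering point, sketched with placeholder names (rmb() comes
from the arch barrier headers):

    #include <linux/types.h>

    struct my_allocator {           /* placeholder; only the init flag matters here */
            int inited;
            /* ... allocator state ... */
    };

    static bool allocator_ready(struct my_allocator *a)
    {
            int inited = a->inited; /* read the init value first */

            rmb();                  /* order all later reads after the init check */
            return inited != 0;
    }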
| |
Putting the wmb() before the write only ensures that any previous
writes are done. But this doesn't really do anything for the
writel_relaxed(). The point of the wmb() here is to ensure that
the write performed by the writel_relaxed() is actually done
before proceeding.
Bug 1811382
Change-Id: I7250ea074b8548c899acfd34d816de466cf53b6f
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1216434
(cherry picked from commit c9aa02dc61138615d971902fe58dc6a113cdf00a)
Reviewed-on: http://git-master/r/1223457
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
Make sure that all writes have been committed before allowing the
variable storing the init status to be seen as non-zero. Pair this
with a read memory barrier where the check for the status is done.
Bug 1799159
Change-Id: I938dffdfc2f39187b0dad11b7e283381560961b4
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1211523
(cherry picked from commit 6dd673d24a93c05834c9d96d2022b359ced5b73b)
Reviewed-on: http://git-master/r/1223456
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
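The writer-side counterpart of the read-side sketch shown under the rmb() fix
above: publish the init flag only after a wmb() has ordered every
initialization write.

    static void allocator_mark_ready(struct my_allocator *a)
    {
            /* ... fill in all allocator state ... */
            wmb();                  /* commit every prior write first */
            a->inited = 1;          /* publish last; readers pair this with rmb() */
    }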
| |
Use a carveout for the WPR region in the VIDMEM.
Jira DNVGPU-84
Change-Id: I191ecc3bb317ae3af6b56f5970194e646c513964
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1208527
(cherry picked from commit 7edf74d7468dcff1f01cbd901d83aa0e32602f0e)
Reviewed-on: http://git-master/r/1223455
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
Implement carveout support by just calling through to the buddy
allocator's carveout support.
Jira DNVGPU-84
Change-Id: I1940873394a4cbff0152f1b6c9c4fd659e0076e1
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1203392
(cherry picked from commit 499ee0407bf525e161a14cfb8bbbc101ac934329)
Reviewed-on: http://git-master/r/1223454
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
Implement carveout support in the buddy allocator so that the WPR
space in the VIDMEM can be carved out. This is needed since the
buddy allocator is used internally by the page allocator, which
manages the VIDMEM space.
Jira DNVGPU-84
Change-Id: I864faa7e20fca5547cc3a8f85f1bc4c36af53ee0
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1203391
(cherry picked from commit a8a5fd265a8ae33093d144cd6ec5222e93280a0f)
Reviewed-on: http://git-master/r/1223453
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
Allow allocators to have regions of memory (carveouts) reserved
from allocation.
Bug 1799159
Jira DNVGPU-84
Change-Id: Id103e60ed1a6e63c433d1cf610c9f15227595750
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1200060
(cherry picked from commit 95f7c16b6fb49a570139a3a51828a9bca1c0abc8)
Reviewed-on: http://git-master/r/1223452
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
| |
Implement a lockless allocator for fixed-size data
structures.
Bug 1795076
Change-Id: I70a5f52cbdb4452cc0fd9a8edf26735be29ede57
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1213211
(cherry picked from commit e4bff7da0f39c8f4b5691169c02e482bc9d4166e)
Reviewed-on: http://git-master/r/1223246
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
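One simple way to make a fixed-size allocator lockless is an atomic bitmap of
slots, sketched below. This is only an illustration of the idea; the commit's
actual implementation may use a different scheme (e.g. an atomic free-list
head).

    #include <linux/bitops.h>
    #include <linux/types.h>

    #define NR_SLOTS 256                    /* illustrative pool size */

    struct pool_entry {                     /* placeholder fixed-size structure */
            u64 data;
    };

    struct fixed_pool {
            DECLARE_BITMAP(used, NR_SLOTS);
            struct pool_entry slots[NR_SLOTS];
    };

    static struct pool_entry *pool_alloc(struct fixed_pool *p)
    {
            unsigned int i;

            for (i = 0; i < NR_SLOTS; i++)
                    if (!test_and_set_bit(i, p->used))      /* atomic claim, no lock */
                            return &p->slots[i];
            return NULL;                    /* pool exhausted */
    }

    static void pool_free(struct fixed_pool *p, struct pool_entry *e)
    {
            clear_bit(e - p->slots, p->used);       /* atomic release */
    }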
| |
We used to wait on the last_submit fence before disabling a
channel. Since this part of the code is no longer exercised, we can
remove the tracking.
Bug 1795076
Change-Id: I54ba2ebaf48772aa775654c0fb4ab614a7167969
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1206585
Reviewed-by: Automatic_Commit_Validation_User
(cherry picked from commit e4e236f2b487b8cfa31f7afd29fad3c97de5f844)
Reviewed-on: http://git-master/r/1209166
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
Free the hw_sema before releasing a channel's address space
binding when freeing a channel. Since the semaphore pool
can be freed after releasing the address space, we need
to do this earlier on.
Bug 1795076
Change-Id: Ic8ae7510af7be862feb6694130c6ce8fc0b8e411
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1208071
(cherry picked from commit 82a52fb6789b1c9361c1567f082ca36135287294)
Reviewed-on: http://git-master/r/1209165
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
This change improves the aggressive sync creation
& destruction logic to avoid lock contention in
the submit path. It does the following:
1) Removes the global sync destruction (channel)
threshold, and adds a per-platform parameter.
2) Avoids lock contention in the clean-up/submit
path when aggressive sync destruction is disabled.
3) Creates sync object at gpfifo
allocation time (as long as we are not in aggressive
sync destroy mode), to enable faster first submits
Bug 1795076
Change-Id: Ifdb680100b08d00f37338063355bb2123ceb1b9f
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1202425
(cherry picked from commit ac0978711943a59c6f28c98c76b10759e0bff610)
Reviewed-on: http://git-master/r/1202427
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
Submit job-tracking is necessary for any of the following
conditions:
- pre- or post-fence functionality
- channel wdt
- GPU rail-gating
- buffer refcounting
If none of the conditions is met, job tracking is not required and
a fast submit can be done (i.e. only the userspace GPFIFO entries
need to be written out and GP_PUT updated).
Bug 1795076
Change-Id: If94d195e3a18a6b623e167829d291ec98a7a43a1
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/1203511
(cherry picked from commit 13d7cfe94559dc52cb0bba7f9e48848e0858be81)
Reviewed-on: http://git-master/r/1223066
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
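A sketch of that decision; the field names are illustrative assumptions, not
the exact nvgpu flags:

    #include <linux/types.h>

    struct submit_req {             /* placeholder submit parameters */
            bool wants_pre_fence;
            bool wants_post_fence;
            bool needs_buffer_refcounting;
    };

    struct ch_state {               /* placeholder channel/platform state */
            bool wdt_enabled;
            bool can_railgate;
    };

    static bool submit_needs_job_tracking(const struct submit_req *req,
                                          const struct ch_state *ch)
    {
            return req->wants_pre_fence || req->wants_post_fence ||
                   ch->wdt_enabled || ch->can_railgate ||
                   req->needs_buffer_refcounting;
            /* false: fast path -- write GPFIFO entries and bump GP_PUT only */
    }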
| |
Irrespective of the generic MSG pointer, pick up the data that the
PMU sends as a response to commands.
JIRA DNVGPU-85
Change-Id: I97dd2abcd9e2a7ad7bfe1270f9905a5b69e196f3
Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
Reviewed-on: http://git-master/r/1205119
Reviewed-on: http://git-master/r/1205447
(cherry picked from commit b1130124157acb2cfb4d04a0dd6ee8c4c0c830e5)
Reviewed-on: http://git-master/r/1222684
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
| |
The offset needs to be calculated for each individual queue from
the init ack contents.
Bug 200229814
Signed-off-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Change-Id: I93276b9cbab48e7fc42fb6c2a8edf382afb82f71
Reviewed-on: http://git-master/r/1202291
(cherry picked from commit 0e0abd478a13a5163e2b83d07307ed7136c4920e)
Reviewed-on: http://git-master/r/1205442
(cherry picked from commit f402cc2a9d0be05b5b95d5d0acbfc66f3b78b309)
Reviewed-on: http://git-master/r/1222683
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
| |
- Update payload interface to support mclk
- Call mclk after gr init complete
JIRA DNVGPU-85
Change-Id: I14c5c6cb438f1a7d56d96daa0fafc09d6abef46b
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: http://git-master/r/1205461
(cherry picked from commit f1bf1ec946aaacae40ecb405341eb2e169cf5754)
Reviewed-on: http://git-master/r/1217989
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
| |
Add vidmem support for PMU. Introduces pmu_surface, which abstracts
the memory used, and allocator helpers for both sysmem and vidmem.
JIRA DNVGPU-85
Change-Id: I61ce137c7007d82010e900759bf8acaf31fba286
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: http://git-master/r/1196518
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: http://git-master/r/1203125
(cherry picked from commit 665f5748108c50fe0c9b4c1486b9d74869477668)
Reviewed-on: http://git-master/r/1217628
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
| |
Don't attempt to use get_iova_addr() on vidmem, where it does not
make sense.
Jira DNVGPU-20
Change-Id: Ibfe1516b88ed8b60b8134c330e6b0569d52cbb5b
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1217077
(cherry picked from commit c912f0349d24fde033dbcd9874948ff14ad89a43)
Reviewed-on: http://git-master/r/1221264
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
| |
Fix a check that was backwards for signaled sync_fences. The
backwards check caused the code to skip waiting on some sync_fences
that had not yet signaled, and to wait on other fences that had
already signaled.
Bug 1787348
Reviewed-on: http://git-master/r/1204710
(cherry picked from commit 75b94bb30f79c3a7a9992773dc8a93b507121006)
Change-Id: I00b0f8a373a9954a5ad9ab31aff6423e91574153
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1221044
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
Move the submit synchronization code into its own function. This
should help keep the submit code path a little more readable and
understandable.
Bug 1732449
Reviewed-on: http://git-master/r/1203833
(cherry picked from commit f931c65c166aeca3b8fe2996dba4ea5133febc5a)
Change-Id: I4111252d242a4dbffe7f9c31e397a27b66403efc
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1221043
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
| |
Greatly simplify the GPU semaphore detection in sync_fences and
make it more robust: instead of using a magic number, use the
parent timeline of the sync_pts.
This also works with multi-GPU setups using nvgpu, since the
timeline ops pointer is the same across all instances of nvgpu.
Bug 1732449
Reviewed-on: http://git-master/r/1203834
(cherry picked from commit 66eeb577eae5d10741fd15f3659e843c70792cd6)
Change-Id: I4c6619d70b5531e2676e18d1330724e8f8b9bcb3
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1221042
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
Only create sync-fences in the semaphore synchronization path when
they are actually needed (i.e. requested by userspace).
Reviewed-on: http://git-master/r/1201564
(cherry picked from commit dc52d424a839e6c064c02b7f02905dd6a59a50af)
Change-Id: Ieac6aef415678d4ea982683a955897c64959436e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1221041
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
Add a new API set_vidmem_page_alloc(), which sets BIT(0) in
sg_dma_address() only for vidmem allocations.
Add and use a new API get_vidmem_page_alloc(), which receives a
scatterlist and returns a pointer to the vidmem allocation, i.e.
struct gk20a_page_alloc *alloc. In this API, check whether BIT(0)
is set in sg_dma_address() before converting it to the allocation
address.
In gk20a_mm_smmu_vaddr_translate(), ensure that the address is a
pure IOVA address by verifying that BIT(0) is not set in it.
Jira DNVGPU-22
Change-Id: Ib53ff4b63ac59a8d870bc01d0af59839c6143334
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1216142
(cherry picked from commit 03c9fbdaa40746dc43335cd8fbe9f97ef2ef50c9)
Reviewed-on: http://git-master/r/1219705
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
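A sketch of the tagging scheme; the helper names follow the commit, but the
bodies are simplified and struct gk20a_page_alloc is only forward-declared:

    #include <linux/bitops.h>
    #include <linux/scatterlist.h>
    #include <linux/types.h>

    struct gk20a_page_alloc;

    static void set_vidmem_page_alloc(struct scatterlist *sgl, u64 alloc_ptr)
    {
            sg_dma_address(sgl) = alloc_ptr | BIT(0);       /* tag as vidmem */
    }

    static struct gk20a_page_alloc *get_vidmem_page_alloc(struct scatterlist *sgl)
    {
            u64 addr = sg_dma_address(sgl);

            if (!(addr & BIT(0)))
                    return NULL;    /* plain IOVA, not a vidmem allocation */

            return (struct gk20a_page_alloc *)(uintptr_t)(addr & ~(u64)BIT(0));
    }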
| |
Use gk20a_gmmu_alloc() in gk20a_alloc_inst_block() so that we
always try to allocate inst blocks in vidmem first.
Also use the common API gk20a_alloc_inst_block() in
channel_gk20a_alloc_inst().
Jira DNVGPU-22
Change-Id: I6c47c19aae1189d7e57f47a51d21a32e2df53c1f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1216140
(cherry picked from commit 6c84961a50eb8a8b080b2db08f87e58143f5a6e8)
Reviewed-on: http://git-master/r/1219704
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
| |
While programming the ucode's inst block in
gr_gk20a_load_falcon_bind_instblk(), use gk20a_aperture_mask() to
select the target address (i.e. whether the address is in sysmem or
vidmem) based on the aperture.
Also add target accessors for gr_fecs_new_ctx and
gr_fecs_arb_ctx_ptr.
Jira DNVGPU-22
Change-Id: I88198080f188b349a4448a229dff8416a6a18073
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1216139
(cherry picked from commit 42bc14110df17400dd655bc994dc9e61c73048b1)
Reviewed-on: http://git-master/r/1219703
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
| |
Test for size, not cpu_va, to check for buffer validity before
attempting to free.
Jira DNVGPU-22
Change-Id: I416c0963bf4e1819aa2f8d200c69a2d989524f83
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1215575
(cherry picked from commit ce0077feca55bfb5665c82972598a075abd8f2a0)
Reviewed-on: http://git-master/r/1219702
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>