Fix a memory leak introduced when making the priv struct for
TSGs.
Fix another memory leak when introducing a priv struct for
channels.
Bug 1816516
Change-Id: I7b0e62bb6352f7e65acb5501cab9cef055d1f535
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1266889
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Allow forced channel freeing. This is useful when the driver is
being cleaned up and the gk20a_wait_until_counter_is_N() could
potentially hang.
Bug 1816516
Bug 1807277
Change-Id: I711f5f3f6413d0bb30b4857e785ca3b504b494ee
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1250022
(cherry picked from commit e132d0e5ae77d758680ac708622a4883bbd69ba3)
Reviewed-on: http://git-master/r/1261918
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Allow SW to force a semaphore release. Typically SW waits for a
semaphore to reach the value right before the semaphore release
value before doing a release. For example, assuming a semaphore
is to be released by SW by writing a 10 into the semaphore, the
code waits for the semaphore to get to 9 before writing 10.
The problem with this happens when trying to shut down the GPU
unexpectedly. When aborting a channel after the GPU has terminated,
the GPU is potentially no longer processing anything. If a SW
semaphore release is waiting on the semaphore to reach N-1 before
writing N to the semaphore, N-1 may never get written by the GPU.
This obviously causes a hang in the SW shutdown. The solution is
to let SW force a semaphore release in the channel_abort case.
Bug 1816516
Bug 1807277
Change-Id: Ib8b4afd86102eacf372362b1748fb6ca04e6fa66
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1250021
(cherry picked from commit 2e9fa40902d2c4d5a1febe0bf2db420ce14bc633)
Reviewed-on: http://git-master/r/1261915
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
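For illustration, a minimal sketch of the release policy described above; the function and argument names are hypothetical and the real nvgpu semaphore code is structured differently:

    #include <stdbool.h>
    #include <stdint.h>

    static void sema_sw_release(volatile uint32_t *sema, uint32_t release_value,
                                bool force)
    {
        if (!force) {
            /* Normal path: wait until the GPU has advanced the semaphore
             * to release_value - 1 before writing the final value. */
            while (*sema != release_value - 1)
                ;   /* real code would sleep/wait, not spin */
        }
        /* Forced path (channel abort): write the release value immediately
         * so a hung or powered-off GPU cannot stall driver shutdown. */
        *sema = release_value;
    }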
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The private_data field in the file pointer passed to release() for
channels originally pointed directly to the referenced channel. The
problem with this is that when the driver is killed and the channel
memory is freed, that pointer becomes invalid.
The channel is only needed to get access to the gk20a struct that
owns it. This can instead be accomplished by making a new private
data struct that holds a pointer to the gk20a struct directly,
instead of requiring the channel to be valid. This lets the release()
function work even if the channels are gone (though in such cases the
release function doesn't do very much).
Change-Id: I5e50bb5b6dd08d38974f8e7b46ba125e9a3f1922
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1246586
(cherry picked from commit 14b7c380c74d2caeb04c47ad3e33332a423a84bb)
Reviewed-on: http://git-master/r/1261913
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
If a job completes right around the timeout boundary, it is
possible that we launch both the clean-up worker and the timeout
worker for the same job.
The clean-up worker then tries to cancel the timeout worker, while
the timeout worker tries to wait for clean-up to finish, which
leads to a deadlock with the stacks below:
stack 1:
[<ffffffc0000bb484>] cancel_delayed_work_sync+0x10/0x18
[<ffffffc0004f820c>] gk20a_channel_cancel_job_clean_up+0x20/0x44
[<ffffffc0004fc794>] gk20a_channel_abort_clean_up+0x34/0x31c
[<ffffffc0004fcb30>] gk20a_channel_abort+0xb4/0xc0
[<ffffffc0004f3d18>] gk20a_fifo_recover_ch+0x9c/0xec
[<ffffffc0004f3f04>] gk20a_fifo_force_reset_ch+0xdc/0xf8
[<ffffffc0004fa8c4>] gk20a_channel_timeout_handler+0xf8/0x128
stack 2:
[<ffffffc0000bb484>] cancel_delayed_work_sync+0x10/0x18
[<ffffffc0004f82c4>] gk20a_channel_timeout_stop+0x40/0x60
[<ffffffc0004fc488>] gk20a_channel_clean_up_jobs+0x7c/0x238
To fix this, cancel the timeout worker in gk20a_channel_update()
itself instead of cancelling it in gk20a_channel_clean_up_jobs().
Bug 200246829
Change-Id: Idef9de3cae29668f4e25beb564422cf2e3736182
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1259963
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
Add a function pointer for chip-specific init_inst_block.
Update this function pointer for gk20a and gm20b.
JIRA GV11B-21
Change-Id: I74ca6a8b4d5d1ed36f7b25b7f62361c2789b9540
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: http://git-master/r/1254875
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
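Per-chip behavior in this driver is selected through tables of function pointers; the following sketch only illustrates that pattern, and the structure and names are assumptions rather than the actual nvgpu gpu_ops layout:

    #include <stdint.h>

    struct inst_block_ops {
            void (*init_inst_block)(void *inst_block, uint32_t big_page_size);
    };

    static void gk20a_init_inst_block_sketch(void *inst_block, uint32_t big_page_size)
    {
            (void)inst_block; (void)big_page_size;
            /* gk20a-specific programming of the instance block */
    }

    static void gm20b_init_inst_block_sketch(void *inst_block, uint32_t big_page_size)
    {
            (void)inst_block; (void)big_page_size;
            /* gm20b variant */
    }

    /* Chosen once at probe time based on the detected chip. */
    static void init_hal(struct inst_block_ops *ops, int is_gm20b)
    {
            ops->init_inst_block = is_gm20b ? gm20b_init_inst_block_sketch
                                            : gk20a_init_inst_block_sketch;
    }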
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This change fixes the error handling logic in
gk20a_alloc_channel_gpfifo(). In cases where we don't
allocate a channel_sync at gpfifo allocation time,
we shouldn't attempt to destroy it while handling
an error.
Bug 200253447
Change-Id: I57a78c74bbce84fa17fb0360c59b8f413a9124a7
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1255858
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
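The fix amounts to guarding the teardown on the pointer actually having been set; a minimal sketch with simplified, hypothetical structures (not the real gk20a types):

    #include <stddef.h>

    struct channel_sync {
            void (*destroy)(struct channel_sync *s);
    };

    struct channel {
            struct channel_sync *sync;      /* NULL if not created at alloc time */
    };

    /* Error path: only tear down the sync object if it was actually created
     * during gpfifo allocation. */
    static void gpfifo_alloc_error_cleanup(struct channel *c)
    {
            if (c->sync) {
                    c->sync->destroy(c->sync);
                    c->sync = NULL;
            }
            /* ... release the gpfifo memory itself ... */
    }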
|
|
|
|
|
|
|
|
|
|
|
Fix small problems related to signed versus unsigned comparisons
throughout the driver. Bump up the warning level to prevent such
problems from occurring in the future.
Change-Id: I8ff5efb419f664e8a2aedadd6515ae4d18502ae0
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1252068
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
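For context, the classic pitfall with such mixed comparisons is that the signed operand is converted to unsigned under C's usual arithmetic conversions; a small standalone example of the kind of bug the higher warning level (-Wsign-compare) catches:

    #include <stdio.h>

    int main(void)
    {
            unsigned int size = 16;
            int remaining = -1;

            /* 'remaining' is converted to unsigned here, so the comparison
             * becomes 4294967295 < 16, which is false. */
            if (remaining < size)
                    printf("remaining fits\n");
            else
                    printf("surprise: -1 is not < 16 after conversion\n");
            return 0;
    }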
|
|
|
|
|
|
|
|
|
|
|
|
We have never used the FREE_OBJ_CTX IOCTL. Using it leaves the
context only partially available, and can lead to a use-after-free.
Bug 1834225
Change-Id: I9d2b632ab79760f8186d02e0f35861b3a6aae649
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1250004
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
|
|
|
|
|
|
|
|
|
Skip checking whether the u32 event_id is smaller than zero, since
an unsigned value can never be negative.
Change-Id: I207c244eeff10f294c41a76b53f9393d50a84026
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1249967
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
|
|
|
|
|
|
|
|
Change-Id: Ia1157198aad248e12e94823eb9f273497c724b2c
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1248366
Tested-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-by: David Martinez Nieto <dmartineznie@nvidia.com>
GVS: Gerrit_Virtual_Submit
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Currently, when we receive a semaphore wakeup interrupt,
we call the channel_update callback, which schedules
deferred job clean-up.
For deterministic channels, we don't allow semaphore-backed
syncs anyway. That means for these channels, if we get a
semaphore wakeup interrupt, it must be for a userspace-managed
semaphore. In this case, there is no need to call into the
channel_update callback, so for deterministic channels we
skip it.
Bug 1795076
Change-Id: I4cdfecd53144078c5cd4be8a41c5c3b7d74c338e
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1225620
(cherry picked from commit 64a6db0080c3b198ddc2029544f52eb590dc08ff)
Reviewed-on: http://git-master/r/1225615
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
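A hedged sketch of the dispatch decision described above; the field and function names are assumptions based on the message, not verified nvgpu code:

    #include <stdbool.h>

    struct channel {
            bool deterministic;     /* channel was opened as deterministic */
    };

    /* Stand-in for the real callback that schedules deferred job clean-up. */
    static void channel_update(struct channel *c)
    {
            (void)c;
    }

    /* Semaphore wakeup path: deterministic channels cannot have
     * semaphore-backed kernel syncs, so the wakeup must belong to a
     * userspace-managed semaphore and no clean-up needs scheduling. */
    static void semaphore_wakeup(struct channel *c)
    {
            if (c->deterministic)
                    return;
            channel_update(c);
    }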
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Only free the per-channel preallocated job-tracking resources in the
channel allocation error path if they have actually been allocated.
Bug 1795076
Change-Id: I2de90504f1042ce372337b68c5405727b4e4abb4
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1234983
(cherry picked from commit 62cb75c6baa02d0edecd1f81f1b8b80a985fd715)
Reviewed-on: http://git-master/r/1238329
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This change makes the invocation of the deferred job clean-up
mechanism conditional. For submissions that require job tracking,
deferred clean-up is only required if any of the following
conditions are met:
1) The channel's deterministic flag is not set
2) Rail-gating is enabled
3) The channel WDT is enabled
4) Buffer refcounting is enabled
5) The submission depends on the Sync Framework
If deferred clean-up is not needed, we clean up a single job
tracking resource in the submit path. For deterministic channels,
we do not allow deferred clean-up to occur and fail any submits
that require it.
Bug 1795076
Change-Id: I4021dffe8a71aa58f12db6b58518d3f4021f3313
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1220920
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
(cherry picked from commit b09f7589d5ad3c496e7350f1ed583a4fe2db574a)
Reviewed-on: http://git-master/r/1223941
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
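The decision boils down to a single predicate over the five conditions listed above; a standalone sketch with hypothetical flag names:

    #include <stdbool.h>

    struct submit_ctx {
            bool deterministic;      /* channel's deterministic flag */
            bool railgate_enabled;   /* platform can rail-gate the GPU */
            bool wdt_enabled;        /* channel watchdog in use */
            bool refcounts_buffers;  /* buffer refcounting enabled */
            bool uses_sync_fwk;      /* pre/post fence via Sync Framework */
    };

    /* Deferred clean-up is needed if any condition holds; otherwise a single
     * job-tracking resource can be cleaned up inline in the submit path. */
    static bool needs_deferred_cleanup(const struct submit_ctx *s)
    {
            return !s->deterministic || s->railgate_enabled ||
                   s->wdt_enabled || s->refcounts_buffers || s->uses_sync_fwk;
    }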
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This change fixes up the calculation of gpfifo entries to be
allocated depending on the ioctl used:
1) For the legacy ALLOC_GPFIFO ioctl, we preserve the calculation
of gpfifo entries within the kernel.
2) For the new ALLOC_GPFIFO_EX ioctl, we assume that userspace has
pre-calculated a power-of-2 value. We process this value unmodified
and only verify that it is a valid power-of-2.
Bug 1795076
Change-Id: I8d2ddfdae40b02fe6b81e63dfd8857ad514a3dfd
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1220968
(cherry picked from commit c42396d9836e9b7ec73e0728f0c502b63aff70db)
Reviewed-on: http://git-master/r/1223937
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
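The power-of-two check itself is a one-liner; in kernel code is_power_of_2() from <linux/log2.h> does the same thing as this standalone sketch (validate_gpfifo_entries is a hypothetical wrapper, not the driver's function):

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* True for 1, 2, 4, 8, ... -- a value with exactly one bit set. */
    static bool is_pow2(uint32_t n)
    {
            return n != 0 && (n & (n - 1)) == 0;
    }

    /* ALLOC_GPFIFO_EX path: accept the userspace value unmodified, but
     * reject anything that is not a power of two. */
    static int validate_gpfifo_entries(uint32_t num_entries)
    {
            return is_pow2(num_entries) ? 0 : -EINVAL;
    }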
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This change adds a new ioctl flag,
NVGPU_SUBMIT_GPFIFO_FLAGS_DETERMINISTIC, which indicates that a
gpfifo submission must exhibit deterministic behavior within the
kernel.
For submissions that require job tracking and also set
this flag, we require the channel to have previously
pre-allocated job tracking resources.
Bug 1795076
Change-Id: I0496a2513c6c683fcda161b32db9e7ee6712d45c
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1210527
(cherry picked from commit 0a36a0ce3a6cbe398931993e742fc928f7b2c0aa)
Reviewed-on: http://git-master/r/1223935
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Add support for pre-allocation of job tracking resources
with a new (extended) ioctl. The goal is to avoid dynamic memory
allocation in the submit path. This patch does the following:
1) Introduces a new ioctl, NVGPU_IOCTL_CHANNEL_ALLOC_GPFIFO_EX,
which enables pre-allocation of tracking resources per job:
a) 2x priv_cmd_entry
b) 2x gk20a_fence
2) Implements a circular ring buffer for job
tracking to avoid lock contention between the producer
(submitter) and the consumer (clean-up)
Bug 1795076
Change-Id: I6b52e5c575871107ff380f9a5790f440a6969347
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1203300
(cherry picked from commit 9fd270c22b860935dffe244753dabd87454bef39)
Reviewed-on: http://git-master/r/1223934
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
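A minimal sketch of a power-of-two circular buffer with separate producer and consumer indices, which is the usual way to let the submitter and the clean-up worker proceed without sharing a lock; the names are illustrative, not the nvgpu joblist implementation:

    #include <stddef.h>
    #include <stdbool.h>
    #include <stdint.h>

    struct job;

    struct job_ring {
            struct job **jobs;   /* pre-allocated array, length is a power of two */
            uint32_t size;
            uint32_t put;        /* advanced by the submitter (producer) */
            uint32_t get;        /* advanced by the clean-up worker (consumer) */
    };

    static bool ring_full(const struct job_ring *r)
    {
            return r->put - r->get == r->size;
    }

    static bool ring_push(struct job_ring *r, struct job *j)   /* producer side */
    {
            if (ring_full(r))
                    return false;
            r->jobs[r->put & (r->size - 1)] = j;
            r->put++;
            return true;
    }

    static struct job *ring_pop(struct job_ring *r)            /* consumer side */
    {
            struct job *j;

            if (r->put == r->get)
                    return NULL;    /* empty */
            j = r->jobs[r->get & (r->size - 1)];
            r->get++;
            return j;
    }

A real implementation additionally needs the appropriate memory barriers or atomics between the producer and consumer threads.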
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This change is the first of a series of changes to
support the usage of pre-allocated job tracking resources
in the submit path. With this change, we still maintain a
dynamically-allocated joblist, but make the necessary changes
in the channel_sync & fence framework to use in-place
allocations. Specifically, we:
1) Update channel sync framework routines to take in
pre-allocated priv_cmd_entry(s) & gk20a_fence(s) rather
than allocating them dynamically
2) Move allocation of priv_cmd_entry(s) & gk20a_fence(s)
to gk20a_submit_prepare_syncs
3) Modify the fence framework to have separate allocation
and init APIs. We expose allocation as a separate API, so
the client can allocate the object before passing it into
the channel sync framework.
4) Fix clean_up logic in the channel sync framework
Bug 1795076
Change-Id: I96db457683cd207fd029c31c45f548f98055e844
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1206725
(cherry picked from commit 9d196fd10db6c2f934c2a53b1fc0500eb4626624)
Reviewed-on: http://git-master/r/1223933
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
If the user does not close the event fd created on a channel/TSG,
the event can stay on the channel/TSG structure and reappear when
the channel/TSG is re-opened.
This causes a false failure when we try to enable an event on the
channel/TSG, since we do not allow enabling the same event twice on
the same channel/TSG.
Fix this by removing all enabled events from the channel/TSG while
closing it.
Bug 200243092
Bug 1818654
Change-Id: I9d5ffc89f87cf4c44124f8015c2c2f0587ad2ef4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1237723
(cherry picked from commit 2737a5c86cf5fbfe8a04f6a87764e8ecb9b30555)
Reviewed-on: http://git-master/r/1238266
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
When bar1 memory is not supported, userd is allocated from sysmem
instead. The gp_get and gp_put functions are updated accordingly.
JIRA GV11B-1
Change-Id: Ia895712a110f6cca26474228141488f5f8ace756
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: http://git-master/r/1225384
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
A use-after-free scenario is possible where one thread in
gk20a_free_error_notifiers() is trying to free the error
notifier while another thread in gk20a_set_error_notifier()
is still using it.
Fix this by introducing the mutex error_notifier_mutex for
error notifier accesses.
Take the mutex in gk20a_free_error_notifiers() and in
gk20a_set_error_notifier() before accessing the notifier.
In gk20a_init_error_notifier(), set the pointer
ch->error_notifier_ref inside the mutex and only
after the notifier is completely initialized.
Bug 1824788
Change-Id: I47e1ab57d54f391799f5a0999840b663fd34585f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1233988
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
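A hedged sketch of the locking pattern the message describes, using the kernel mutex API; the surrounding structures are simplified stand-ins, not the gk20a channel struct:

    #include <linux/mutex.h>

    struct notifier {
            unsigned int info;
    };

    struct channel_err_state {
            struct mutex error_notifier_mutex;      /* mutex_init() at channel setup */
            struct notifier *error_notifier_ref;    /* NULL when not set up */
    };

    static void set_error_notifier(struct channel_err_state *ch, unsigned int error)
    {
            mutex_lock(&ch->error_notifier_mutex);
            if (ch->error_notifier_ref)
                    ch->error_notifier_ref->info = error;
            mutex_unlock(&ch->error_notifier_mutex);
    }

    static void free_error_notifiers(struct channel_err_state *ch)
    {
            mutex_lock(&ch->error_notifier_mutex);
            /* Clear the reference under the lock so no concurrent setter can
             * still be using it once the backing memory is released. */
            ch->error_notifier_ref = NULL;
            mutex_unlock(&ch->error_notifier_mutex);
    }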
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Cancel the timeout handler before cleaning up the list of jobs. This
prevents a race that lets the timeout handler access already-freed
jobs.
Bug 1814108
Change-Id: I37cfc408cb1f96b8b0e62db1ca8067a2ae43dd0e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1221698
(cherry picked from commit be0d146cba8dc2b1bdb7c53ae39188a4bf0ca019)
Reviewed-on: http://git-master/r/1223843
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
When the priv_cmd buffer is full, return EAGAIN to userspace
so it may retry the submit ioctl.
Bug 1795076
Change-Id: I0752d52b677aaf915e8e472bec6140e14c885589
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1213586
(cherry picked from commit fc6b23559a839620accd5bbd2957e69310b87a5b)
Reviewed-on: http://git-master/r/1229488
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
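In outline, the allocation simply fails fast instead of sleeping; a sketch with a hypothetical space-check helper (not the driver's actual priv_cmd code):

    #include <errno.h>
    #include <stdbool.h>

    static bool priv_cmd_buf_has_space(unsigned int needed)
    {
            (void)needed;
            return false;    /* stand-in; real code compares get/put pointers */
    }

    /* Submit path: instead of blocking until space frees up, bail out with
     * -EAGAIN and let userspace decide whether and when to retry the ioctl. */
    static int alloc_priv_cmd_entry(unsigned int needed)
    {
            if (!priv_cmd_buf_has_space(needed))
                    return -EAGAIN;
            /* ... carve the entry out of the priv_cmd buffer ... */
            return 0;
    }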
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
We previously used to wait on the last_submit fence
before disabling a channel. Since this part of the
code is no longer exercised, we can remove this
tracking.
Bug 1795076
Change-Id: I54ba2ebaf48772aa775654c0fb4ab614a7167969
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1206585
Reviewed-by: Automatic_Commit_Validation_User
(cherry picked from commit e4e236f2b487b8cfa31f7afd29fad3c97de5f844)
Reviewed-on: http://git-master/r/1209166
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Free the hw_sema before releasing a channel's address space
binding when freeing a channel. Since the semaphore pool
can be freed after releasing the address space, we need
to do this earlier on.
Bug 1795076
Change-Id: Ic8ae7510af7be862feb6694130c6ce8fc0b8e411
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1208071
(cherry picked from commit 82a52fb6789b1c9361c1567f082ca36135287294)
Reviewed-on: http://git-master/r/1209165
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This change improves the aggressive sync creation
& destruction logic to avoid lock contention in
the submit path. It does the following:
1) Removes the global sync destruction (channel)
threshold, and adds a per-platform parameter.
2) Avoids lock contention in the clean-up/submit
path when aggressive sync destruction is disabled.
3) Creates sync object at gpfifo
allocation time (as long as we are not in aggressive
sync destroy mode), to enable faster first submits
Bug 1795076
Change-Id: Ifdb680100b08d00f37338063355bb2123ceb1b9f
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1202425
(cherry picked from commit ac0978711943a59c6f28c98c76b10759e0bff610)
Reviewed-on: http://git-master/r/1202427
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Submit job-tracking is necessary for any of the following
conditions:
- pre- or post-fence functionality
- channel wdt
- GPU rail-gating
- buffer refcounting
If none of the conditions are met, then job tracking is not
required and a fast submit can be done (i.e. we only need to
write out the userspace GPFIFO entries and update GP_PUT).
Bug 1795076
Change-Id: If94d195e3a18a6b623e167829d291ec98a7a43a1
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/1203511
(cherry picked from commit 13d7cfe94559dc52cb0bba7f9e48848e0858be81)
Reviewed-on: http://git-master/r/1223066
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Move the submit synchronization code into its own function. This
should help keep the submit code path a little more readable and
understandable.
Bug 1732449
Reviewed-on: http://git-master/r/1203833
(cherry picked from commit f931c65c166aeca3b8fe2996dba4ea5133febc5a)
Change-Id: I4111252d242a4dbffe7f9c31e397a27b66403efc
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1221043
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Use gk20a_gmmu_alloc() in gk20a_alloc_inst_block() so that
we always try to allocate all inst blocks in vidmem first.
Also use the common gk20a_alloc_inst_block() API in
channel_gk20a_alloc_inst().
Jira DNVGPU-22
Change-Id: I6c47c19aae1189d7e57f47a51d21a32e2df53c1f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1216140
(cherry picked from commit 6c84961a50eb8a8b080b2db08f87e58143f5a6e8)
Reviewed-on: http://git-master/r/1219704
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Use gk20a_gmmu_alloc() to allocate the channel inst block; this
first tries to allocate in vidmem.
Jira DNVGPU-22
Change-Id: Ib4d92bf4d2bc0c3d53a82812d635fa8abca4340a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1206274
(cherry picked from commit 0c81c8984c42df27d3520f800eb87728f67d4453)
Reviewed-on: http://git-master/r/1219701
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
When trying to power down the GPU, the engine might still be busy.
In this case, delay the power-down by returning -EBUSY from
gk20a_pm_runtime_suspend().
Bug 200224907
Change-Id: Ibad74c090add24a185bc1a7a02df367af9b95ced
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1213042
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
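The runtime-PM convention is that returning -EBUSY from the suspend callback simply defers the power-down and lets the PM core retry later; a hedged sketch in which the busy check is a hypothetical placeholder:

    #include <linux/device.h>
    #include <linux/errno.h>
    #include <linux/pm_runtime.h>

    static bool gpu_engines_idle(struct device *dev)
    {
            (void)dev;
            return true;    /* placeholder; real code checks engine status */
    }

    static int runtime_suspend_sketch(struct device *dev)
    {
            /* If any engine is still busy, refuse to suspend for now;
             * the runtime-PM core will try again later. */
            if (!gpu_engines_idle(dev))
                    return -EBUSY;

            /* ... save state and power down ... */
            return 0;
    }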
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Instead of blocking for gpfifo space in the nvgpu driver,
return -EAGAIN and allow userspace to decide the blocking
policy.
Bug 1795076
Change-Id: Ie091caa92aad3f68bc01a3456ad948e76883bc50
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/1202591
(cherry picked from commit 8056f422c6a34a4239fc4993c40c2e517c932714)
Reviewed-on: http://git-master/r/1203800
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The channel timeout lock guards a very small critical section. Use a
spinlock instead of a mutex for performance.
Bug 1795076
Change-Id: I94940f3fbe84ed539bcf1bc76ca6ae7a0ef2fe13
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/1200803
(cherry picked from commit 4fa9e973da141067be145d9eba2ea74e96869dcd)
Reviewed-on: http://git-master/r/1203799
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
Add support for cyclestats snapshots in the virtual case
Bug 1700143
JIRA EVLR-278
Change-Id: I376a8804d57324f43eb16452d857a3b7bb0ecc90
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: http://git-master/r/1211547
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Use the common gk20a_gmmu_alloc() that tries vidmem too.
Jira DNVGPU-21
Change-Id: Ie22cb0f5ed70ec71567fc85d348b3526c9a32b02
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1204304
(cherry picked from commit 07cb99baeb10194c520addd77517841a6f99df93)
Reviewed-on: http://git-master/r/1169310
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This is done to boost performance of the GPU submit time, which
is critical for compute use-cases.
Bug 200215465
Bug 1804898
Conflicts:
drivers/gpu/nvgpu/gk20a/channel_gk20a.c
Change-Id: Ic4884ee4eac910b92b84a47fdc1b2e9f26b2f1f0
Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-on: http://git-master/r/1199860
Reviewed-on: http://git-master/r/1209834
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
While collecting failing engine data, the id type (is_tsg) was not
set for the ctxsw and save engine states. This could result in some
ctxsw timeout interrupts being ignored (the id was reported with the
wrong is_tsg).
For TSGs, check if we made some progress on any of the channels
before kicking off fifo recovery.
Bug 200228310
Jira EVLR-597
Change-Id: I231549ae68317919532de0f87effb78ee9c119c6
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: http://git-master/r/1204035
(cherry picked from commit 7221d256fd7e9b418f7789b3d81eede8faa16f0b)
Reviewed-on: http://git-master/r/1204037
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
- let force_reset_ch pass down err code
- force_reset_ch callback can cover vgpu too.
Bug 1776876
JIRA VFND-2151
Change-Id: I48f7890294c6455247198e0cab5f21f83f61f0e1
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1202255
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
We currently store fault_id into fifo.deferred_fault_engines
and use that in gk20a_fifo_reset_engine(), which is incorrect.
Also, in the deferred engine reset path during channel close,
we do not check whether the channel is loaded on the engine.
Fix this as follows:
- store engine_id bits into fifo.deferred_fault_engines
- define a new API gk20a_fifo_deferred_reset() to perform
the deferred engine reset
- get all engines on which the channel is loaded with
gk20a_fifo_engines_on_id()
- for each set bit/engine_id in fifo.deferred_fault_engines,
check if the channel is loaded on that engine, and if yes,
reset the engine
Bug 1791696
Change-Id: I1b8b1a9e3aa538fe6903a352aa732b47c95ec7d5
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1195087
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
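A hedged sketch of the bit-walk described above, using the real kernel for_each_set_bit() helper; everything else (the parameters and the reset callback) is an assumption patterned on the message, not the nvgpu code:

    #include <linux/bitops.h>
    #include <linux/types.h>

    static void deferred_reset_sketch(unsigned long deferred_fault_engines,
                                      unsigned long engines_with_channel,
                                      void (*reset_engine)(unsigned int engine_id))
    {
            unsigned int engine_id;

            for_each_set_bit(engine_id, &deferred_fault_engines, BITS_PER_LONG) {
                    /* Only reset engines this channel is actually loaded on. */
                    if (engines_with_channel & BIT(engine_id))
                            reset_engine(engine_id);
            }
    }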
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Free the semaphore index used by the channel during
gk20a_free_channel().
Bug 1793819
Change-Id: I4215d05f7f3ba0636e2abb1803011711c8a38301
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: http://git-master/r/1196877
(cherry picked from commit 2c5720de506caac29629f6a1c578e6da80b1a135)
Reviewed-on: http://git-master/r/1198883
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Initialize the character array buf in gk20a_channel_ioctl() to zero.
Keeping it uninitialized can result in leaking kernel stack
info to user space, since we pass this buffer to the UMD.
Bug 1793398
Change-Id: Iffd654dbaca3b4e3c8fd2ac270d0febd01c165b8
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1195862
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
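The fix is simply to zero the stack buffer before use so that bytes the handler never writes cannot leak stack contents when copied back to userspace; a standalone sketch with an illustrative size constant:

    #define ARG_BUF_SIZE 128   /* illustrative value, not the driver's constant */

    static void channel_ioctl_sketch(void)
    {
            /* Zero-initialized at declaration; memset(buf, 0, sizeof(buf))
             * would achieve the same thing. */
            unsigned char buf[ARG_BUF_SIZE] = {0};

            (void)buf;   /* ... copy_from_user / handle cmd / copy_to_user ... */
    }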
|
|
|
|
|
|
|
|
|
|
|
|
Added an interface to allow the kernel to create privileged CE
channels for page migration and clearing support between sysmem
and vidmem.
JIRA DNVGPU-53
Change-Id: I3e18d18403809c9e64fa45d40b6c4e3844992506
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: http://git-master/r/1173085
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
When processing FECS traces, a hash table is used
to retrieve the 'pid' of the process that created
the channel/TSG. Report the process identifier (aka
tgid in the kernel) instead of the thread identifier (aka
pid) for FECS traces.
Bug 1736423
Change-Id: I54cb9d298b9fe3e1cccdd7145604cd01c5758c9d
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: http://git-master/r/1166501
(cherry picked from commit f7fd1f6d7ad0753b787ec20604a08a1f4882fe6f)
Reviewed-on: http://git-master/r/1168728
(cherry picked from commit 97a62e5b89352fce576f1bca71b38bf2242ff047)
Reviewed-on: http://git-master/r/1177823
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
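In the kernel, current->pid is the thread ID while current->tgid is what userspace calls the process ID; a hedged sketch of recording the latter when a channel/TSG is created (the trace-entry struct is hypothetical):

    #include <linux/sched.h>

    struct chan_trace_entry {
            int context_id;
            int pid;    /* report the process (thread-group) id, not the thread id */
    };

    static void record_channel_owner(struct chan_trace_entry *e, int context_id)
    {
            e->context_id = context_id;
            /* current->pid would identify the creating *thread*; tgid
             * identifies the process, which is what FECS trace consumers
             * expect to see. */
            e->pid = current->tgid;
    }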
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Replace kfree with nvgpu_free in the error handling path in
gk20a_alloc_channel_gpfifo where the gpfifo pipe buffer is
allocated, because that buffer is allocated with nvgpu_alloc.
Jira DNVGPU-21
Change-Id: I73100394b67da2ab064e4e9df6b430d818abce56
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1182401
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
For devices that have vidmem available, use the vidmem allocator in
gk20a_gmmu_alloc{,attr,_map,_map_attr}. For others, use sysmem.
Because not all of the buffers have been tested to work in vidmem
yet, rename calls to gk20a_gmmu_alloc{,attr,_map,_map_attr} to have
_sys at the end to declare explicitly that sysmem is used. Enabling
vidmem for each buffer is now a matter of removing "_sys" from the
function call.
Jira DNVGPU-18
Change-Id: Ibe42f67eff2c2b68c36582e978ace419dc815dc5
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1176805
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
Add gk20a_aperture_mask() for memory target selection now that
buffers can actually be allocated from vidmem, and use it in all
cases that have a mem_desc available.
Jira DNVGPU-76
Change-Id: I4353cdc6e1e79488f0875581cfaf2a5cfb8c976a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1169306
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
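Conceptually such a helper just picks between two caller-supplied register-field values depending on where the buffer actually lives; this sketch is an assumption patterned on the commit message, with hypothetical types, not the real gk20a_aperture_mask() signature:

    #include <linux/types.h>

    enum aperture_sketch { APERTURE_SYSMEM, APERTURE_VIDMEM };

    struct mem_desc_sketch {
            enum aperture_sketch aperture;
    };

    /* Callers pass the field value to use for each memory target and get
     * back the one matching where this buffer was allocated. */
    static u32 aperture_mask_sketch(struct mem_desc_sketch *mem,
                                    u32 sysmem_mask, u32 vidmem_mask)
    {
            return mem->aperture == APERTURE_VIDMEM ? vidmem_mask : sysmem_mask;
    }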
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
It is possible that when we abort the channel, the job clean-up
worker is running, which could race with the abort and sometimes
result in the panic below:
[ 245.483566] Unable to handle kernel paging request at virtual address
800000000
...
[ 245.548991] PC is at gk20a_channel_abort_clean_up+0xb8/0x140
[ 245.554683] LR is at gk20a_channel_abort_clean_up+0xac/0x140
...
[ 247.301860] [<ffffffc000479390>]
gk20a_channel_abort_clean_up+0xb8/0x140
[ 247.312853] [<ffffffc0004794d4>] gk20a_channel_abort+0xbc/0xc8
[ 247.322970] [<ffffffc0004794f8>] gk20a_disable_channel+0x18/0x30
[ 247.333267] [<ffffffc000479628>] gk20a_free_channel+0x118/0x584
[ 247.343473] [<ffffffc000479aa0>] gk20a_channel_close+0xc/0x14
[ 247.353479] [<ffffffc000479b80>] gk20a_channel_release+0xd8/0x104
Fix this by cancelling the job clean-up worker before aborting
the channel.
Bug 1777281
Change-Id: Ic24c7c03b27cfb5cd164a52efdb1e2813a41a10a
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1174416
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Revamp the support the nvgpu driver has for semaphores.
The original problem with nvgpu's semaphore support is that it
required a SW based wait for every semaphore release. This was
because for every fence that gk20a_channel_semaphore_wait_fd()
waited on a new semaphore was created. This semaphore would then
get released by SW when the fence signaled. This meant that for
every release there was necessarily a sync_fence_wait_async() call
which could block. The latency of this SW wait was enough to cause
massive degradation in performance.
To fix this a fast path was implemented. When a fence is passed to
gk20a_channel_semaphore_wait_fd() that is backed by a GPU semaphore
a semaphore acquire is directly used to block the GPU. No longer is
a sync_fence_wait_async() performed nor is there an extra semaphore
created.
To implement this fast path the semaphore memory had to be shared
between channels. Previously since a new semaphore was created
every time through gk20a_channel_semaphore_wait_fd() what address
space a semaphore was mapped into was irrelevant. However, when
using the fast path a semaphore may be released on one address
space but acquired in another.
Sharing the semaphore memory was done by making a fixed GPU mapping
in all channels. This mapping points to the semaphore memory (the
so called semaphore sea). This global fixed mapping is read-only to
make sure no semaphores can be incremented (i.e. released) by a
malicious channel. Each channel then gets a RW mapping of its own
semaphore. This way a channel may only acquire other channels'
semaphores but may both acquire and release its own semaphore.
The gk20a fence code was updated to allow introspection of the GPU
backed fences. This allows detection of when the fast path can be
taken. If the fast path cannot be used (for example when a fence is
sync-pt backed) the original slow path is still present. This gets
used when the GPU needs to wait on an event from something which
only understands how to use sync-pts.
Bug 1732449
JIRA DNVGPU-12
Change-Id: Ic0fea74994da5819a771deac726bb0d47a33c2de
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1133792
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
Added an interface for privileged channel allocation to execute
privileged methods (e.g. CE phys mode transfer).
JIRA DNVGPU-53
Change-Id: I07f9181720b14345cf5890919c2818dfcf505d86
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: http://git-master/r/1169315
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
CUDA needs it disabled.
Bug 1775453
Change-Id: Ic6d5050f9fda259337668e2a245c05e27d65e047
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1162765
(cherry picked from commit 44b48d84e75ced2fd9eecebbe94a0289c527c0c2)
Reviewed-on: http://git-master/r/1169049
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>