| Commit message | Author | Age |
|
Always use the PROD value for FE_GO_IDLE_TIMEOUT.
Change-Id: I455c03ae07b35a8999cd0995e458c421a10e7ca2
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/813958
|
Trigger recovery on DS and MEMFMT HWW errors, and write an error line
to UART for each HWW error.
Also capture the channel id before clearing the exception.
Bug 1683059
Change-Id: Ia00d88a76371a4bd7e047915dde0bf0d4b84bc10
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/816983
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
|
When we induce a fake MMU fault, we do not have a pointer to a
channel. Use the tsg pointer instead. Also remove the error print
for the case where we do not have a ch pointer.
Change-Id: I14fd75d2b743244915bf32fe39de76097ef5c42f
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/819034
|
In gk20a_fifo_handle_sched_error(), we currently cancel
the timeout on all the channels.
But this could cause us to miss a stuck channel.
Hence, instead of cancelling, restart the timeout on each channel
where it is already active.
Bug 200133289
Change-Id: I40e7e0e5394911fc110ab6fde39592b885dfaf7d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/816133
Reviewed-by: Ishan Mittal <imittal@nvidia.com>
Tested-by: Ishan Mittal <imittal@nvidia.com>
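
An illustrative sketch of the restart-instead-of-cancel idea using the generic delayed-work API; struct fifo_gk20a/channel_gk20a are nvgpu types, but the timeout fields and the helper name below are assumptions, not the actual driver code:

    #include <linux/workqueue.h>
    #include <linux/jiffies.h>

    static void restart_active_watchdogs(struct fifo_gk20a *f)
    {
        unsigned int chid;

        for (chid = 0; chid < f->num_channels; chid++) {
            struct channel_gk20a *ch = &f->channel[chid];

            /* Touch only channels whose watchdog is already running;
             * pushing the deadline out keeps stuck channels detectable. */
            if (delayed_work_pending(&ch->timeout.wq))
                mod_delayed_work(system_wq, &ch->timeout.wq,
                                 msecs_to_jiffies(ch->timeout.limit_ms));
        }
    }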
|
In gk20a_channel_timeout_stop(), we take the channel's
timeout lock and then cancel the timeout worker.
The timeout worker also tries to acquire the same timeout
lock.
Hence, if the timeout handler is already scheduled while
gk20a_channel_timeout_stop() is cancelling it, we deadlock.
Fix this by moving cancel_delayed_work_sync() outside the locks.
Bug 200133289
Bug 1695481
Change-Id: Iea78770180b483a63e5e176efba27831174e9dde
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/815922
Reviewed-by: Ishan Mittal <imittal@nvidia.com>
Tested-by: Ishan Mittal <imittal@nvidia.com>
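
A minimal, self-contained sketch of the fixed ordering with the generic delayed-work API; the struct below is an illustrative stand-in for the channel's timeout state, not nvgpu's actual definition:

    #include <linux/workqueue.h>
    #include <linux/mutex.h>
    #include <linux/types.h>

    struct ch_timeout {                /* illustrative stand-in */
        struct mutex lock;             /* also taken by the timeout handler */
        struct delayed_work wq;
        bool initialized;
    };

    static void ch_timeout_stop(struct ch_timeout *t)
    {
        /* Cancel before taking the lock: the handler grabs t->lock too,
         * so cancel_delayed_work_sync() under the lock could wait forever
         * for a handler that is itself waiting for the lock. */
        cancel_delayed_work_sync(&t->wq);

        mutex_lock(&t->lock);
        t->initialized = false;
        mutex_unlock(&t->lock);
    }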
|
To handle any pbdma interrupt, we currently write zero
to pbdma_method0 and then clear the interrupt.
But this is insufficient, since the same interrupt-clear
method cannot be used for all interrupts.
Hence, add interrupt-specific routines to handle them:
NV_PPBDMA_INTR_0_PBENTRY:
- fix the pb_header to have a null opcode
- fix the pbdma_method to have a valid nop
NV_PPBDMA_INTR_0_METHOD:
- fix the pbdma_method to have a valid nop
NV_PPBDMA_INTR_0_DEVICE:
- fix the pb_header to have a null opcode
- go through all of pbdma_method0/1/2/3
-- if they contain host s/w methods, replace those
   methods with a valid NOP
Bug 200134238
Change-Id: I10c284a6cdc1441f9d437cea65aae00d3c33a8c8
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/814561
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
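
A rough sketch of the dispatch described above; the pbdma_intr_0_*_pending_f() accessors follow the generated hw-header naming style but are not verified here, and the fix_*/pbdma_method_is_host_sw() helpers are hypothetical:

    static u32 handle_pbdma_intr_0(struct gk20a *g, u32 pbdma_id, u32 intr)
    {
        if (intr & pbdma_intr_0_pbentry_pending_f()) {
            fix_pb_header_null_opcode(g, pbdma_id);   /* hypothetical helper */
            fix_pbdma_method_nop(g, pbdma_id, 0);     /* hypothetical helper */
        }
        if (intr & pbdma_intr_0_method_pending_f())
            fix_pbdma_method_nop(g, pbdma_id, 0);
        if (intr & pbdma_intr_0_device_pending_f()) {
            int i;

            fix_pb_header_null_opcode(g, pbdma_id);
            /* replace any host s/w methods in method0..3 with a valid NOP */
            for (i = 0; i < 4; i++)
                if (pbdma_method_is_host_sw(g, pbdma_id, i)) /* hypothetical */
                    fix_pbdma_method_nop(g, pbdma_id, i);
        }
        return intr; /* caller writes this back to clear the serviced bits */
    }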
|
Add support to remove the bar2 mm on GPU module
removal.
Change-Id: Id5f680b1abf7056da9871d5460d9fbc40422673e
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: http://git-master/r/814571
(cherry picked from commit e7c6c87dd6b0893d26a9a3b4568121a691e1eb3c)
Reviewed-on: http://git-master/r/815429
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
Currently, if can_railgate = false, we use the sequence below
to allocate the secure_page:
- unrailgate GPU (forever)
- reset_assert()
- allocate secure_page
- reset_deassert()
But if we allocate the secure page even before unrailgating the GPU
for the first time, we can avoid the reset_assert()/deassert()
calls, since the GPU should already be in reset/railgated at
boot time.
Hence, rework the sequence as below:
- init the required mutex, set platform->reset_control
- allocate the secure page (the GPU should already be in reset
  at this point)
- gk20a_pm_init(), which unrailgates the GPU in case of
  can_railgate = false
Bug 200137963
Bug 1678611
Change-Id: I79d0543bb5cf1eaf1009e1e6ac142532d84514a5
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/797153
(cherry picked from commit 368004501943d38c003747f6bec0384fed57ee65)
Reviewed-on: http://git-master/r/816005
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
Return immediately in case there are no buffers to put. This skips
acquiring mutexes and map batch start/finish overheads.
Bug 1614735
Bug 1623949
Bug 1660392
Change-Id: Ief04e36d995e65c1510496c17cb3f5bb90486c69
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: http://git-master/r/815376
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
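
A sketch of the early-out; vm->update_gmmu_lock is nvgpu's usual per-VM lock, while the function name and the elided batch calls are illustrative:

    static void put_mapped_buffers(struct vm_gk20a *vm, int num_buffers)
    {
        if (num_buffers == 0)
            return; /* nothing to put: skip the mutex and batch start/finish */

        mutex_lock(&vm->update_gmmu_lock);
        /* ... begin map batch, drop the references, finish map batch ... */
        mutex_unlock(&vm->update_gmmu_lock);
    }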
|
Bug 1689806
Change-Id: I5090967cd5d14816e4ac3091af0b0c4dca272335
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: http://git-master/r/814616
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
Expose API for gpmu bootstrap.
Bug 1685722
Change-Id: I46ca6f8b36e14cd1c6a12eb0d5cd178da2e0be1c
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: http://git-master/r/812270
(cherry picked from commit bd7ac9992923cc32f2739926400bbf9b5cadc0c1)
Reviewed-on: http://git-master/r/813977
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
This reverts commit aeb74fc7952718ffab6281c687951499510c4333.
User space was fixed not to send zero-length GPFIFO entries.
Bug 1662670
Change-Id: Iec6bf1870a19db4e8daa2ed4512650b92a37ba92
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/811783
|\
Change-Id: I196ec645855419c70703eaefd3d83303bbd9fae5
Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com>
| |
The server now exposes a new GMMU map interface that
can accept a scatter-gather list. This is needed to support
SMMU-bypass configurations.
Bug 1677153
JIRA VFND-689
Change-Id: I7b5af145db57dcebe2c9125ec90c689798d7e69e
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/792558
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
| |
A SCHED error might cause multiple channels' watchdogs to trigger
simultaneously.
Hence, to avoid this conflict, cancel the watchdog timeout on all
channels before recovering from SCHED errors.
Also, define the API gk20a_channel_timeout_stop_all_channels()
to cancel the wdt timeout on all channels.
Bug 200133289
Change-Id: I8324c397891f0a711327b77d0677cd6718af6d01
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/810959
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
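
A sketch of the helper named above; gk20a_channel_timeout_stop() is the per-channel stop referenced in an earlier commit, and the fifo iteration fields are assumptions:

    void gk20a_channel_timeout_stop_all_channels(struct gk20a *g)
    {
        struct fifo_gk20a *f = &g->fifo;
        unsigned int chid;

        /* cancel every channel's watchdog before SCHED-error recovery */
        for (chid = 0; chid < f->num_channels; chid++)
            gk20a_channel_timeout_stop(&f->channel[chid]);
    }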
| |
We already have a constant 5s timeout for the channel watchdog.
Hence, drop the default channel timeout (used for SCHED errors)
to 3s so that the two don't conflict with each other.
Bug 200133289
Change-Id: Ieed675cad462119ff2f1a155a955c8a22cb6c6f8
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/810958
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
Export debugfs node /d/gpu.0/ch_wdt_timeout_ms to modify
all channels' watchdog timeout.
This is needed for testing purposes only.
Bug 200133289
Change-Id: I8776b567d5d5a1c304334835b0bcab7b242cf0ab
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/810957
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
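
A sketch of the hookup, as a fragment that would sit in the GPU's debugfs init path; the platform->debugfs dentry and the ch_wdt_timeout_ms field are assumptions:

    #include <linux/debugfs.h>

    /* writable from user space for test runs:
     * echo 10000 > /d/gpu.0/ch_wdt_timeout_ms */
    debugfs_create_u32("ch_wdt_timeout_ms", S_IRUGO | S_IWUSR,
                       platform->debugfs, &platform->ch_wdt_timeout_ms);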
| |
The channel watchdog timeout is currently set to a constant
value of 5s.
Make this timeout platform specific and set it to 5s for gm20b
and 7s for gk20a.
Bug 200133289
Change-Id: I6e7f0fed93a8d5b197ae46807131311196c6636f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/810956
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
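
One way to carry a per-chip value is via the platform data; the structures are trimmed to a single field and the field name is an assumption:

    /* gk20a (e.g. T124): slower chip, give jobs a little more headroom */
    struct gk20a_platform gk20a_tegra_platform = {
        /* ... other platform callbacks and flags ... */
        .ch_wdt_timeout_ms = 7000,
    };

    /* gm20b (T210) */
    struct gk20a_platform gm20b_tegra_platform = {
        /* ... */
        .ch_wdt_timeout_ms = 5000,
    };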
| |
While loading the context, the previously set CDE flag was being
overwritten by the golden-context copy code, thus losing
the information. This prevented the CDE info from reaching
the ucode, so T1 was not configured for 128B memory accesses.
Bug 200096226
Change-Id: I5ceb234a62450ff7875aeba05ec616758cb319d9
Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
Reviewed-on: http://git-master/r/811767
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
Scale the timeslice register value based on the platform
specific ptimer scale coefficient.
Expose timeslice values through debugfs to simplify performance
tuning.
bug 1605552
bug 1603226
Change-Id: I49f86f22d58d26a366ee1b5f5a9ab9d7f896ad25
Signed-off-by: Kirill Artamonov <kartamonov@nvidia.com>
Reviewed-on: http://git-master/r/800007
(cherry picked from commit 00c85ef24cf28ffaa81eb53fff7edef1c699220a)
Reviewed-on: http://git-master/r/808251
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
bug 1603226
Host-based timeouts use ptimer for detecting
timeouts. On gk20a and gm20b, ptimer runs 2.6x
slower. Scale the fifo_eng_timeout to account
for this.
Change-Id: Ie44718382953e36436ea47d6e89b9a225d5c2070
Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
Reviewed-on: http://git-master/r/799983
(cherry picked from commit d1d837fd09ff0f035feff1757c67488404c23cc6)
Reviewed-on: http://git-master/r/808250
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
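
The adjustment is a multiply by the ptimer ratio; a sketch in scale-by-10 integer math using the 2.6 factor from the commit (macro and helper names are illustrative):

    /* ptimer on gk20a/gm20b ticks ~2.6x slower than the nominal rate, so a
     * wall-clock timeout must be scaled up before it is programmed. Keep
     * the factor as "tenths" to stay in integer arithmetic. */
    #define FIFO_ENG_TIMEOUT_SCALE_10X 26 /* 2.6 * 10 */

    static inline u32 scale_ptimer(u32 timeout_ms)
    {
        return (timeout_ms * FIFO_ENG_TIMEOUT_SCALE_10X) / 10;
    }

    /* e.g. a 100 ms engine timeout is programmed as 260 ms worth of ticks */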
| |
Bug 1689976
Change-Id: I97ad14c9698030b630d3396199a2a5296c661392
Signed-off-by: Jussi Rasanen <jrasanen@nvidia.com>
Reviewed-on: http://git-master/r/806590
(cherry picked from commit c90cd5ee674d6357db3be2243950ff0d81ef15ef)
Reviewed-on: http://git-master/r/808249
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
bug 1688374
Disabling the PMU will break RAM suspend on chips implementing
split rails. The PMU will be powered down along with the rest of
the GPU anyway. pmu_destroy is not to be used outside of
rail gating or GPU suspend.
Change-Id: I9e89859b7c701f731276ae1d1063d9ccd88d4334
Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
Reviewed-on: http://git-master/r/805940
(cherry picked from commit 8ded353878ff7df73e55b702041008ddc3cbf069)
Reviewed-on: http://git-master/r/808248
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
Bug 200137963
Change-Id: I3197af905c945540b97ba191e5695d970d77af8e
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/797154
(cherry picked from commit 8a50245ea636deb87a3d9435fb115b4eac88fac9)
Reviewed-on: http://git-master/r/808247
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
Add a CPU dcache flush after populating the scatterBuffer so that
the GPU will see the buffer contents.
Bug 1679453
Change-Id: I564394ed1fcff4d08d753e753bd3243b460d76df
Signed-off-by: Jussi Rasanen <jrasanen@nvidia.com>
Reviewed-on: http://git-master/r/805197
(cherry picked from commit d6a5513745aa77c84ac5408a62f72f24839ef439)
Reviewed-on: http://git-master/r/808246
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
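
The generic shape of the fix, assuming the scatter buffer was DMA-mapped and its handle and size are at hand (names illustrative); dma_sync_single_for_device() is the standard kernel call for pushing CPU writes out to a non-coherent device:

    #include <linux/dma-mapping.h>

    /* CPU fills in the per-page entries first ... */
    fill_scatter_buffer(cpu_va, pages, nr_pages);        /* hypothetical */

    /* ... then flush the dcache range so the GPU reads what was written */
    dma_sync_single_for_device(dev, scatter_buf_iova,
                               scatter_buf_size, DMA_TO_DEVICE);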
| |
Bug 1689806
Change-Id: I368ad8fb64e49b21ba61c519def1f86e1ca6e492
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: http://git-master/r/806116
(cherry picked from commit 1a3bbe989a795d379703e7f4b915f6e1bb38c2c3)
Reviewed-on: http://git-master/r/805480
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
| |
This fixes an issue where the system hangs on reboot.
Bug 1638850
Change-Id: If53a31e86c10b2fce4a22fe4fcf92106d86c95ef
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/803234
(cherry picked from commit 4dbea2c7037a5244ccb9d6e886023c29ba584892)
Reviewed-on: http://git-master/r/808245
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|/
Fix the sparse warning below:
drivers/gpu/nvgpu/gp10b/pmu_gp10b.c:245:6: warning: symbol
'gp10b_pmu_elpg_statistics' was not declared. Should it be static?
Bug 200088648
Change-Id: I74a1de9921bb6ba9cc077bf7291e8eeb3d4c82ff
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/810395
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
Tested-by: Sachin Nikam <snikam@nvidia.com>
|
Call commit_cb_manager() at context creation time instead of at
hardware initialization. This allows per-channel buffer sizes.
Bug 1686189
Change-Id: Ie4d08e87f237bc63bac0268128f59d4fe8536c95
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/801777
Reviewed-on: http://git-master/r/806181
|
Bug 1686189
Change-Id: Idf92d3277a7e8932d11ece13e3b988609e49c74e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/802550
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-on: http://git-master/r/806180
|
It is possible that user space requests more unmaps on a buffer
than it requested maps.
In this case, we end up dropping one extra refcount, which could
lead to releasing the buffer early.
Fix this by checking the buffer's user_mapped refcount and
returning if it is already zero.
Bug 200130521
Change-Id: Ic8ef2dbfe0476b16d852ad899b1ed0404b5bb7de
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/788904
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
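
The shape of the guard, as a fragment from the unmap path; user_mapped is the refcount named in the commit, while the placement and warning text are illustrative:

    if (mapped_buffer->user_mapped == 0) {
        /* user space sent more unmaps than maps: don't drop another
         * reference, or the buffer could be freed while still in use */
        pr_warn("nvgpu: unbalanced unmap of user-mapped buffer\n");
        return;
    }
    /* normal unmap/refcount-drop path continues below */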
|
- Add the required init param to start ELPG
- Change the statistics dump
Bug 1684939
Change-Id: I26dca52079f08b8962e9cb758831910207610220
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: http://git-master/r/802456
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/806179
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
- PMU ucode version update to sync
with LS production signature
Bug 200140416
Change-Id: Ib77fa81f7b05ed3cf45c373f3d759a2cfb69b238
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: http://git-master/r/801738
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/806177
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
|
- PG statistics update
- Perfmon update
- Add GR init params interface to enable ELPG
Bug n/a
Change-Id: I39ae1d4518733480a42f06a0be7bd794fc93ff6f
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: http://git-master/r/799684
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/806176
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
Load the gpccs signature for secure gpccs boot.
Change-Id: Ia8815a4575c42eab2ce62cbece8bb080e1f35ae6
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: http://git-master/r/793402
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/795583
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
Correct the logic for supporting unmapped
ptes during gmmu map.
Bug 1587825
Change-Id: I1b0b603f7758a65d9666046d0d908663f8e460e3
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: http://git-master/r/796577
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/759345
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
|
Create the debugfs node before platform->probe() is called.
This allows chip-specific debugfs entries to go to the
correct directory.
bug 1525327
bug 1581799
Change-Id: I2d91bdc1e72dac6787938eff01218c9f871029cb
Signed-off-by: Kirill Artamonov <kartamonov@nvidia.com>
Reviewed-on: http://git-master/r/796092
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/778729
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
|
- pmu version update P4 CL #19870492
- pmu allocation update P4 CL #19870492
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: http://git-master/r/788791
Reviewed-on: http://git-master/r/786342
Change-Id: If6607cfbb134f22e25148b74d6101a6b9709e155
Reviewed-on: http://git-master/r/807474
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
In the case of a CDE channel, the T1 (Tex) unit needs to be promoted
to 128B-aligned accesses; otherwise it causes a HW deadlock. The GPU
driver makes changes in the FECS header, which FECS uses to configure
the T1 promotion to aligned 128B accesses.
Bug 200096226
Change-Id: I8a8deaf6fb91f4bbceacd491db7eb6f7bca5001b
Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
Reviewed-on: http://git-master/r/804625
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
Add vgpu framework and build for T18x.
Bug 1677153
JIRA VFND-693
Change-Id: Icf9fd8e0b5769228aee59c54f9b000b992e5fcca
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/792559
Reviewed-on: http://git-master/r/806178
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
Dump GR registers on ucode timeout.
GR dump is needed during ucode timeout to
get more details.
Bug 200124360
Change-Id: Id19f5bc0d092c060de2ec07a5e63a0a155f86b76
Signed-off-by: Gagan Grover <ggrover@nvidia.com>
Reviewed-on: http://git-master/r/771969
(cherry picked from commit 3f0f13073a174a357623d76db47b2148cb24503c)
Reviewed-on: http://git-master/r/777785
(cherry picked from commit d5b7247757cdccbc3ea98c4b9e018468d5554933)
Reviewed-on: http://git-master/r/795355
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
|
Handle a null pointer in gk20a_fence_is_expired().
Bug 200117724
Change-Id: I0f9307a5f8b82bf990b6ddaea1a408d4f3f376fb
Signed-off-by: Gagan Grover <ggrover@nvidia.com>
Reviewed-on: http://git-master/r/777796
(cherry picked from commit dbf5bae53e0e7862754faba78eab84284786ecb3)
Reviewed-on: http://git-master/r/795356
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
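
A sketch of the defensive check; whether the real fix also validates f->ops, as assumed here, is not confirmed by the commit message:

    bool gk20a_fence_is_expired(struct gk20a_fence *f)
    {
        /* a missing fence (or one without ops) is treated as expired so
         * callers waiting on it don't dereference a null pointer */
        if (!f || !f->ops)
            return true;

        return f->ops->is_expired(f);
    }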
|
Add support for CDE scatter buffers. When the bus addresses for
surfaces are not contiguous as seen by the GPU (e.g., when SMMU is
bypassed), CDE swizzling needs additional per-page information. This
information is populated in a scatter buffer when required.
Bug 1604102
Change-Id: I3384e2cfb5d5f628ed0f21375bdac8e36b77ae4f
Signed-off-by: Jussi Rasanen <jrasanen@nvidia.com>
Reviewed-on: http://git-master/r/789436
Reviewed-on: http://git-master/r/791243
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
Implement a per-channel watchdog/timer according to the rules below:
- start the timer when submitting the first job on a channel, or if
  no timer is already running
- cancel the timer when a job completes
- re-start the timer if any incomplete job is left in the
  channel's queue
- trigger the appropriate recovery method as part of the timeout
  handling mechanism
Handle the timeout as follows:
- get the timed-out channel and job data
- disable activity on all engines
- check whether the fence is really pending
- get information on the failing engine
- if no engine is failing, just abort the channel
- if an engine is failing, trigger recovery
Also, add the flag "ch_wdt_enabled" to enable/disable the channel
watchdog mechanism. The watchdog can also be disabled using the
global flag "timeouts_enabled".
Set the watchdog time to 5s using the macro
NVGPU_CHANNEL_WATCHDOG_DEFAULT_TIMEOUT_MS.
Bug 200133289
Change-Id: I401cf14dd34a210bc429f31bd5216a361edf1237
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/797072
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
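
A sketch of that lifecycle using the generic delayed-work API; the field names (ch->timeout.wq, ch->wdt_enabled, g->timeouts_enabled) mirror the flags named in the commit but are assumptions about the actual layout:

    static void gk20a_channel_wdt_start(struct channel_gk20a *ch)
    {
        if (!ch->wdt_enabled || !ch->g->timeouts_enabled)
            return;

        /* arm only for the first job; later jobs reuse the running timer */
        if (!delayed_work_pending(&ch->timeout.wq))
            schedule_delayed_work(&ch->timeout.wq,
                msecs_to_jiffies(NVGPU_CHANNEL_WATCHDOG_DEFAULT_TIMEOUT_MS));
    }

    static void gk20a_channel_wdt_job_done(struct channel_gk20a *ch,
                                           bool jobs_pending)
    {
        cancel_delayed_work_sync(&ch->timeout.wq);
        if (jobs_pending)           /* re-arm for the next incomplete job */
            gk20a_channel_wdt_start(ch);
    }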
|
Add the APIs below to disable/re-enable activity on all
engines:
gk20a_fifo_disable_all_engine_activity()
gk20a_fifo_enable_all_engine_activity()
Bug 200133289
Change-Id: Ie01a260d587807a3c1712ee32fe870fbcb08f9cd
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/798747
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
This reverts commit 882975f7f1b4e050be79b0a047a2daa8b53a9187.
Change-Id: I4940fc9f7a837840be1ea8e42d58d603235d88d5
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/804616
|
In the case of a CDE channel, the T1 (Tex) unit needs to be promoted
to 128B-aligned accesses; otherwise it causes a HW deadlock. The GPU
driver makes changes in the FECS header, which FECS uses to configure
the T1 promotion to aligned 128B accesses.
Bug 200096226
Change-Id: Ic006b2c7035bbeabe1081aeed968a6c6d11f9995
Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
Reviewed-on: http://git-master/r/802327
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
Disable timestamp slcg
Bug 1670996
Change-Id: I1d6d6348c4c136070846c9c93f75006a42a17895
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: http://git-master/r/800791
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
Currently, while releasing a debug session we enable powergate
only if a channel is bound to the session.
If a session has no channel bound to it and has powergate
disabled, then we do not enable powergate when that session
is closed.
Fix this by always calling dbg_set_powergate(POWERGATE_ENABLE)
while releasing the session.
Refcounting and sanity checks in dbg_set_powergate() take care
of the case where powergate was not disabled by the session
in the first place.
Bug 1679372
Change-Id: I4e027393c611d3e8ab4f20e195f31871086da736
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/796999
Tested-by: Sandarbh Jain <sanjain@nvidia.com>
Reviewed-by: Sandarbh Jain <sanjain@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
bug 1625901
1) Disable ELPG before doing a GR reset when a runlist update times out
2) Add a mutex for GR reset to avoid multiple threads resetting GR
3) Protect GR reset with the FECS mutex so that no one else submits methods
Change-Id: I02993fd1eabe6875ab1c58a40a06e6c79fcdeeae
Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
Reviewed-on: http://git-master/r/793643
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
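
A sketch of the ordering; gk20a_pmu_disable_elpg()/gk20a_pmu_enable_elpg() exist in nvgpu, while the two mutexes and the reset body are illustrative names:

    static int gr_reset_protected(struct gk20a *g)
    {
        int err;

        gk20a_pmu_disable_elpg(g);        /* 1) no ELPG while GR is in reset */

        mutex_lock(&g->gr.reset_mutex);   /* 2) one thread resets at a time */
        mutex_lock(&g->gr.fecs_mutex);    /* 3) block FECS method submission */
        err = do_gr_reset(g);             /* hypothetical reset body */
        mutex_unlock(&g->gr.fecs_mutex);
        mutex_unlock(&g->gr.reset_mutex);

        gk20a_pmu_enable_elpg(g);
        return err;
    }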
|