Commit log

Allocate vidmem fds from 1024 onwards. This prevents us from
using up the 0-1023 range, which is tracked per process and
fits within FD_SETSIZE.
Bug 200222681
Change-Id: I104b81f2831f1816ff66fc245fa63013d78001ec
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1199269
(cherry picked from commit 5d5cbaf6a63dd31538fa35081b70e103d8a658f4)
Reviewed-on: http://git-master/r/1217294
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
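
In sketch form, the fd-range policy might be implemented with the
in-kernel __alloc_fd()/fd_install() helpers of this kernel
generation (the function name and flags here are illustrative, not
the literal nvgpu code):

    /* Hand out vidmem fds starting at 1024 so the select()-sized
     * 0-1023 range stays free for ordinary descriptors. */
    static int gk20a_vidmem_buf_fd(struct file *filp)
    {
            int fd = __alloc_fd(current->files, 1024, sysctl_nr_open,
                                O_RDWR | O_CLOEXEC);

            if (fd < 0)
                    return fd;
            fd_install(fd, filp);
            return fd;
    }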
An unknown engine is expected, as we do not support all dGPU
engines. Remove the error spew.
JIRA DNVGPU-26
Change-Id: I6f7897c6ead168f1d8100421d16d0540a7f7b542
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1206449
(cherry picked from commit 4cc610755df94065afd28a90c63aca8fff9685b1)
Reviewed-on: http://git-master/r/1217292
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Add support for cyclestats snapshots in the virtual case
Bug 1700143
JIRA EVLR-278
Change-Id: I376a8804d57324f43eb16452d857a3b7bb0ecc90
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: http://git-master/r/1211547
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
We have completely different versions of probe for the nvgpu
and PCI devices. Extract the common steps into a new
nvgpu_probe() function and separate it out into a new file,
nvgpu_common.c. Divide the work of nvgpu_probe() into smaller
functions.
Do platform-specific things (like IRQ handling, mem resource
management, and power management) only in the individual
probes, and then call nvgpu_probe() to complete the common
initialization.
Move all debugfs initialization to the common
gk20a_debug_init(). This also helps bring up all debug nodes
for the PCI device.
Pass the debugfs_symlink name as a parameter to
gk20a_debug_init(). This allows us to set separate debugfs
symlinks for the nvgpu and PCI devices.
For the railgating, cde and ce debugfs nodes, check whether the
platform supports them.
Copy vidmem_is_vidmem from the platform to the mm structure,
and set it to true for the PCI device.
Return from gk20a_scale_init() if we have neither a governor
nor a qos notifier.
Fix gk20a_alloc_debugfs_init() and gk20a_secure_page_alloc() to
receive a device pointer instead of a platform_device.
Export gk20a_railgating_debugfs_init() so that we can call it
from gk20a_debug_init().
Jira DNVGPU-56
Jira DNVGPU-58
Change-Id: I3cc048082b0a1e57415a9fb8bfb9eec0f0a280cd
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1204207
(cherry picked from commit add6bb0a3d5bd98131bbe6f62d4358d4d722b0fe)
Reviewed-on: http://git-master/r/1204462
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
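
A rough sketch of the resulting probe structure; the exact
signatures and the gk20a_from_pdev() helper are assumptions:

    /* nvgpu_common.c: bus-independent initialization. */
    int nvgpu_probe(struct gk20a *g, const char *debugfs_symlink)
    {
            /* split into smaller helpers: characteristics, mm,
             * sysfs, debugfs, ... */
            gk20a_debug_init(g->dev, debugfs_symlink);
            return 0;
    }

    /* Platform probe: platform-only setup, then the common path.
     * The pci probe does its own pci-only setup and likewise ends
     * by calling nvgpu_probe() with a different symlink name. */
    static int gk20a_probe(struct platform_device *pdev)
    {
            struct gk20a *g = gk20a_from_pdev(pdev); /* hypothetical */

            /* irq handling, mem resources, power management ... */
            return nvgpu_probe(g, "gk20a.0");
    }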
We have a blocking 1 sec wait for vidmem allocation. Remove
this blocking wait and just return a proper error code to the
caller.
In case we have some buffers to be cleaned up in the list
(clear_list_head), return EAGAIN so that the caller can retry.
Otherwise, return ENOMEM, indicating that no memory is
available right now.
Jira DNVGPU-84
Change-Id: Ife2b17c989fc80e568f03bb18ad75b93a25be962
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1204969
(cherry picked from commit 2bacdf0bc6d5b1cdcb8be37e574ca5f4f0663cae)
Reviewed-on: http://git-master/r/1213451
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
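
The failure path then reduces to something like this sketch (the
list name follows the commit text; locking is elided):

    /* No more blocking wait: report whether a retry makes sense. */
    if (!addr) {
            if (!list_empty(&g->mm.vidmem.clear_list_head))
                    return -EAGAIN;  /* frees pending, caller may retry */
            return -ENOMEM;          /* vidmem genuinely exhausted */
    }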
Use the common gk20a_gmmu_alloc() that tries vidmem too.
Jira DNVGPU-24
Change-Id: I5dfd7eaab737a5290b4d21ac575d6b89777a567e
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1209077
(cherry picked from commit e3085d37735c8f1cf4845621f29fe9d2689aad4b)
Reviewed-on: http://git-master/r/1184330
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Tested-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
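
The common helper amounts to "try vidmem, fall back to sysmem"; a
sketch in which the sub-helper names gk20a_gmmu_alloc_vid() and
gk20a_gmmu_alloc_sys() are assumptions:

    int gk20a_gmmu_alloc(struct gk20a *g, size_t size,
                         struct mem_desc *mem)
    {
            if (g->mm.vidmem_is_vidmem) {
                    /* on dGPUs, prefer device-local memory */
                    int err = gk20a_gmmu_alloc_vid(g, size, mem);

                    if (!err)
                            return 0;
            }
            /* iGPU, or vidmem exhausted: use system memory */
            return gk20a_gmmu_alloc_sys(g, size, mem);
    }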
In pramin_access_batched(), each iteration of the loop first
decides the size of data that we should write in that
iteration.
In case this size is equal to the length of the current chunk,
we need to move on to the next chunk for the subsequent
iteration.
But since we change the offset variable before the above check,
we end up reusing the same chunk in the next iteration.
Fix this by correcting the sequence: first check whether we
should move to the next chunk, and only then adjust the offset
variable.
Jira DNVGPU-24
Change-Id: I58c2e24678f4c6dfbe33bf111edd06788629eca8
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1210892
(cherry picked from commit 83cc179199692d28a93b3b884c9bc094ff513298)
Reviewed-on: http://git-master/r/1213450
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
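
The corrected ordering, sketched with illustrative variable and
field names:

    while (bytes_left) {
            u64 sz = min(bytes_left, chunk->length - offset);

            /* ... PRAMIN access of sz bytes at chunk->base + offset ... */

            /* First decide whether this iteration exhausts the
             * current chunk... */
            if (offset + sz == chunk->length) {
                    chunk = list_next_entry(chunk, list_entry);
                    offset = 0;
            } else {
                    /* ...and only then adjust the offset. */
                    offset += sz;
            }
            bytes_left -= sz;
    }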
Unbind on nvgpu results in a kernel panic. Suppress unbind to
avoid the panic.
A proper fix should follow later on.
bug 1779085
Change-Id: Ibc966ac031f7f04406db63310e2f5ea126649ac0
Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com>
Reviewed-on: http://git-master/r/1212759
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
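
One standard way to suppress unbind is the driver core's
suppress_bind_attrs flag, which removes the sysfs bind/unbind
files; a sketch of that approach (the commit above does not say
which mechanism was actually used):

    static struct platform_driver gk20a_driver = {
            .probe = gk20a_probe,
            .driver = {
                    .name = "gk20a",
                    /* no sysfs unbind file, so no unbind panic */
                    .suppress_bind_attrs = true,
            },
    };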
Converting the return value of sg_dma_address() (which is a
u64) into a pointer results in a compilation failure on 32-bit
machines.
Hence, convert the address first into uintptr_t and then into a
pointer.
Change-Id: I8e036af8f4c936b88883cf8af1491f03025ed356
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1211243
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
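
The fix is the standard two-step conversion, in miniature:

    /* sg_dma_address() yields a 64-bit dma_addr_t; casting it
     * straight to a pointer breaks 32-bit builds, so go via
     * uintptr_t. */
    void *va = (void *)(uintptr_t)sg_dma_address(sgt->sgl);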
While mapping a buffer, first check whether the buffer is in
vidmem, and if so, convert the allocation into a base address.
Then walk through each chunk to decide the alignment.
Add a new API, gk20a_mm_get_align(), which returns the
alignment based on the scatterlist and aperture, and use this
API to get the alignment during mapping.
Enable big page support for pci by unsetting disable_bigpage.
Jira DNVGPU-97
Change-Id: I358dc98fac8103fdf9d2bde758e61b363fea9ae9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1207673
(cherry picked from commit d14d42290eed4aa7a2dd2be25e8e996917a58e82)
Reviewed-on: http://git-master/r/1210959
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Allocate 64k pages for vidmem by default.
Also make sure that the base address of vidmem is aligned to
the page size.
Jira DNVGPU-20
Change-Id: Ie2e5111f942467754db5b45f1518d72c925d3d19
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1206405
(cherry picked from commit 542ebf7f571ba6dc631466e562f7d8e05df4a9a6)
Reviewed-on: http://git-master/r/1210958
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Use the common gk20a_gmmu_alloc() that tries vidmem too.
Jira DNVGPU-20
Change-Id: I4ea02bc4962d299c6f71444048d4a2a22bd80f55
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1206404
(cherry picked from commit 7297727cce8c5c7b26f82afe98cc5428135b4777)
Reviewed-on: http://git-master/r/1178831
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new API, gk20a_mem_get_base_addr(), which returns the
vidmem base address in the case of vidmem and the IOVA address
in the case of sysmem.
Even though vidmem allocations are non-contiguous, this API is
useful (and should only be used) for allocations with one chunk
(e.g. page tables).
Also, since page tables can reside in either sysmem or vidmem,
use this API to get the address of page tables.
Jira DNVGPU-20
Change-Id: Ie04af9ca7bfccfec1a8a8e4be2c507cef5cef8e1
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1206403
(cherry picked from commit a8c74dc188878f2948fa1e0e47bf1837fba6c5e0)
Reviewed-on: http://git-master/r/1210957
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
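
A sketch of the aperture switch described above; the chunk-list
field names are assumptions:

    u64 gk20a_mem_get_base_addr(struct gk20a *g, struct mem_desc *mem)
    {
            if (mem->aperture == APERTURE_VIDMEM) {
                    /* valid only for single-chunk allocations,
                     * e.g. page tables */
                    struct page_alloc_chunk *chunk =
                            list_first_entry(&mem->allocation->alloc_chunks,
                                             struct page_alloc_chunk,
                                             list_entry);
                    return chunk->base;
            }
            /* sysmem: the GMMU uses the IOVA */
            return sg_dma_address(mem->sgt->sgl);
    }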
Allocating blob space for the PMU might need a fixed-address
allocation in vidmem, during boot up.
But if some page tables are allocated before the blob space,
the blob space allocation could fail.
Fix this by allocating the blob space early during boot up.
Jira DNVGPU-20
Change-Id: I30eca1023c8f8f8be101bb7e160ba57a7040911a
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1206402
(cherry picked from commit fad4309ce345ed3879f497bda27f2eceb1084dbb)
Reviewed-on: http://git-master/r/1210956
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The gk20a_fence_wait() API may be interrupted by a signal
before its timeout has actually elapsed. This CL adds a retry
(-ERESTARTSYS) mechanism for the case where gk20a_fence_wait()
returns before its timeout has elapsed.
Bug 200230544
Change-Id: I347ed2004935a8b9413f95dcb6fca2b74bf49f2a
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: http://git-master/r/1206265
(cherry picked from commit d3ef533942487785d84d109f985ae648eb3c2434)
Reviewed-on: http://git-master/r/1210955
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
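
The retry pattern looks roughly like this sketch (the
gk20a_fence_wait() signature is simplified here):

    /* Keep waiting until the full timeout really elapses; a signal
     * (-ERESTARTSYS) alone must not end the wait early. */
    unsigned long end = jiffies + msecs_to_jiffies(timeout_ms);
    int err;

    do {
            err = gk20a_fence_wait(fence,
                                   jiffies_to_msecs(end - jiffies));
    } while (err == -ERESTARTSYS && time_before(jiffies, end));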
Use the common gk20a_gmmu_alloc() that tries vidmem too.
Jira DNVGPU-21
Change-Id: Ie22cb0f5ed70ec71567fc85d348b3526c9a32b02
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1204304
(cherry picked from commit 07cb99baeb10194c520addd77517841a6f99df93)
Reviewed-on: http://git-master/r/1169310
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The pramin_access_batched() API currently supports only
contiguous allocations.
Modify this API to support non-contiguous allocations from the
page allocator as well.
Update gk20a_mem_wr32() and gk20a_mem_rd32() to reuse
pramin_access_batched().
Use gk20a_memset() in gk20a_gmmu_free_attr_vid() to clear
vidmem pages for kernel buffers.
Jira DNVGPU-30
Change-Id: I43630912f4837d8ebc6b9c58f4f427218ef9725b
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1204303
(cherry picked from commit 2f84f141d02fd2f641cb18a48896fb3ae5f7e51f)
Reviewed-on: http://git-master/r/1210954
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Store an allocator pointer in each mem_desc. This pointer
should be used while freeing the mem, instead of assuming a
common allocator.
Add a user_mem flag to mem_desc, which is set only for User
vidmem allocations.
We delay the free of a mem in the worker only if this flag is
set on the mem; otherwise, we free it immediately.
This is needed so that all kernel allocations can work with
both sysmem and vidmem.
Jira DNVGPU-84
Change-Id: Ib9a9209b164bc56b7880448f86bd6d42b324cc86
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1203099
(cherry picked from commit 8f0b0122f36a0b6f1932fa9a98d7eb03b1f623d1)
Reviewed-on: http://git-master/r/1210953
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
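
Sketched, the new bookkeeping and the free-path decision; the
struct layout and the deferred-free helper are illustrative:

    struct mem_desc {
            /* ... existing fields ... */
            struct gk20a_allocator *allocator; /* owning allocator */
            bool user_mem;                     /* User vidmem only */
    };

    static void gk20a_mem_free(struct gk20a *g, struct mem_desc *mem)
    {
            if (mem->user_mem)
                    gk20a_vidmem_defer_free(g, mem); /* worker, later */
            else
                    gk20a_free(mem->allocator, mem->addr); /* now */
    }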
In __gk20a_alloc_pages(), if we fail to allocate a chunk, we
free the previously allocated chunks in the error path.
But we do not free up the memory reserved in those chunks,
which could lead to OOM situations.
Fix this by calling gk20a_free() for each chunk in the error
path.
Jira DNVGPU-96
Change-Id: I68aa18d68a5282405016e688c790ccbc0c2a0d69
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1203098
(cherry picked from commit f096bd1675600f4e2fc2d686f2911bb945fbbf0b)
Reviewed-on: http://git-master/r/1210952
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
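
The repaired error path, in sketch form (chunk bookkeeping names
are illustrative):

    fail_cleanup:
            /* Release the reserved memory, not just the chunk
             * structs. */
            list_for_each_entry_safe(chunk, tmp, &alloc->alloc_chunks,
                                     list_entry) {
                    gk20a_free(&a->source_allocator, chunk->base);
                    list_del(&chunk->list_entry);
                    kfree(chunk);
            }
            kfree(alloc);
            return NULL;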
We do not need a sync_fence for CE jobs submitted via
gk20a_ce_execute_ops(), since all waiters of the fence are in
kernel space only.
Jira DNVGPU-84
Change-Id: Idad6c40abcefb86e60a5327bbbff6827b1ca33cc
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1201347
(cherry picked from commit e294b2d37cf79182bb9a255adb188eb6afa47c27)
Reviewed-on: http://git-master/r/1210951
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We clear buffers allocated in vidmem in the buffer free path.
But to clear buffers, we need to submit CE jobs, and this could
cause issues/races if free is called from a critical path.
Hence, solve this by moving the buffer clear/free to a worker:
gk20a_gmmu_free_attr_vid() now just puts the mem_desc onto a
list and schedules the worker, and the worker thread traverses
the list and clears/frees the allocations.
In struct gk20a_vidmem_buf, the mem variable was statically
allocated. But since we now delay the free of the mem, convert
this variable into a pointer and allocate it dynamically.
Since we delay the free of vidmem memory, it is now possible to
face OOM conditions during allocations. Hence, while
allocating, block until we have sufficient memory available,
with an upper limit of 1 second.
Jira DNVGPU-84
Change-Id: I7925590644afae50b6fc04c6e1e43bbaa1c220fd
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1201346
(cherry picked from commit b4dec4a30de2431369d677acca00e420f8e581a5)
Reviewed-on: http://git-master/r/1210950
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
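
The worker mechanics reduce to a mutex-protected list plus a work
item; a sketch in which next_pending_mem() (a pop-under-lock
helper) is hypothetical:

    void gk20a_gmmu_free_attr_vid(struct gk20a *g, struct mem_desc *mem)
    {
            mutex_lock(&g->mm.vidmem.clear_list_mutex);
            list_add_tail(&mem->clear_list_entry,
                          &g->mm.vidmem.clear_list_head);
            mutex_unlock(&g->mm.vidmem.clear_list_mutex);

            schedule_work(&g->mm.vidmem.clear_mem_worker);
    }

    static void gk20a_vidmem_clear_mem_worker(struct work_struct *work)
    {
            struct mm_gk20a *mm = container_of(work, struct mm_gk20a,
                                               vidmem.clear_mem_worker);
            struct mem_desc *mem;

            while ((mem = next_pending_mem(mm)) != NULL) {
                    gk20a_gmmu_clear_vidmem_mem(mm->g, mem); /* CE clear */
                    gk20a_free(mem->allocator, mem->addr);
                    kfree(mem);
            }
    }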
We currently clear vidmem pages in
gk20a_gmmu_alloc_attr_vid_at(), i.e. in the allocation path,
for each buffer.
But since the buffer allocation path could be latency critical,
instead clear the whole of vidmem once, before the first User
allocation, in gk20a_vidmem_buf_alloc(), and then clear a
buffer's pages when the buffer is released.
In this way, we can ensure that vidmem pages are already
cleared in the buffer allocation path.
At a later stage, the clearing of pages can be removed from the
free path and moved to a separate worker as well.
At this point, the first allocation has the overhead of
clearing the whole of vidmem, which takes about 380 ms, and
this should improve once clocks are raised. Also, this is a
one-time latency, and subsequent allocations have no clearing
overhead at all.
Add an API, gk20a_vidmem_clear_all(), to clear the whole of
vidmem.
We have WPR buffers allocated during boot up at fixed addresses
in vidmem. To prevent overwriting these buffers in
gk20a_vidmem_clear_all(), clear the whole of vidmem except for
the bootstrap allocator carveout.
Add a new API, gk20a_gmmu_clear_vidmem_mem(), to clear one
mem_desc.
Jira DNVGPU-84
Change-Id: I5661700585c6241a6a1ddeb5b7c068d3d2aed4b3
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1194301
(cherry picked from commit 950ab61a04290ea405968d8b0d03e3bd044ce83d)
Reviewed-on: http://git-master/r/1193158
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add an allocator for allocating vidmem before the CE has had a
chance to be initialized (and clear the rest of vidmem).
Jira DNVGPU-84
Change-Id: I5166607a712b3a6eb4c2906b8c7d002c68a6567b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1197204
(cherry picked from commit b4e68e84eedd952637b2332d8dc73a9090d6d62e)
Reviewed-on: http://git-master/r/1210949
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Switch to using the page allocator for vidmem.
Support GMMU mappings for pages from the non-contiguous page
allocator in update_gmmu_ptes_locked(): if the aperture is
VIDMEM, traverse each chunk in an allocation and map it to the
GPU VA separately.
Fix CE page clearing to support the page allocator.
Fix gk20a_pramin_enter() to get the base address from the new
allocator.
Define an API, gk20a_mem_get_vidmem_addr(), to get the base
address of an allocation. Note that this API should not be used
if we have more than one chunk.
Jira DNVGPU-96
Change-Id: I725422f3538aeb477ca4220ba57ef8b3c53db703
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1199177
(cherry picked from commit 1afae6ee6529ab88cedd5bcbe458fbdc0d4b1fd8)
Reviewed-on: http://git-master/r/1197647
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Event notifications on TSGs should only be sent to the channel
that caused the event to happen in the first place, not every
channel in the TSG. Any more and the debugger will not be able
to tell which channel actually got the event.
Worse yet, if all the channels in a TSG are bound to the same
debug session (as is the case with cuda-gdb), then multiple
nvgpu events for the same GPU event will be triggered, causing
events to be buffered and the client to get out of sync.
One GPU exception, one nvgpu event per TSG.
Bug 1793988
Signed-off-by: Cory Perry <cperry@nvidia.com>
Change-Id: I4efb83b0593bd1af38f2342c80793d9db56e42b1
Reviewed-on: http://git-master/r/1194203
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
The return from __semaphore_bitmap_alloc() is an int for which a
negative value indicates a failure. That return value was being
directly cast to an unsigned int before being checked for a
negative error code. This obviously isn't a good idea.
Coverity ID 38754
Change-Id: I50c0478e5504988b059e69b929e9c2e465df7cc0
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1210317
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
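
The bug pattern in miniature:

    int ret = __semaphore_bitmap_alloc(bitmap, len);

    /* Before: the cast happens first, so the check can never fire. */
    unsigned int pos = (unsigned int)ret;
    if (pos < 0)            /* always false: dead code */
            return -ENOMEM;

    /* After: check the signed value, then convert. */
    if (ret < 0)
            return ret;
    pos = (unsigned int)ret;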
Fix a possible overflow in the buddy allocator's initialization
code. In practice it should never happen that a PDE size is
greater than 32 bits, but this makes Coverity happy.
Coverity ID 54964
Change-Id: I886fd962bb3e9e328f7305bdcf69827979a39a21
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1210316
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
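
The usual shape of such a fix is forcing 64-bit arithmetic before
the shift; in miniature:

    u64 pde_size;

    pde_size = 1 << pde_shift;    /* 32-bit shift: can overflow */
    pde_size = 1ULL << pde_shift; /* 64-bit arithmetic throughout */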
This is done to reduce the GPU submit time, which is critical
for compute use-cases.
Bug 200215465
Bug 1804898
Conflicts:
drivers/gpu/nvgpu/gk20a/channel_gk20a.c
Change-Id: Ic4884ee4eac910b92b84a47fdc1b2e9f26b2f1f0
Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-on: http://git-master/r/1199860
Reviewed-on: http://git-master/r/1209834
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Add an 'ifdef CONFIG_DEBUG_FS' check to fix the following
compilation error when CONFIG_DEBUG_FS=n (which is used for the
Android 'production' build):
mm_gk20a.c: In function 'gk20a_mm_debugfs_init':
mm_gk20a.c:4824:2: error: implicit declaration of function
'debugfs_create_x64' [-Werror=implicit-function-declaration]
Bug 1778001
Change-Id: I785288a37b96c391b84925d5971d2691cf80206e
Signed-off-by: David Pu <dpu@nvidia.com>
Reviewed-on: http://git-master/r/1210393
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
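
The guard follows the usual pattern; a sketch with an illustrative
node name:

    #ifdef CONFIG_DEBUG_FS
            /* debugfs_create_x64() only exists when
             * CONFIG_DEBUG_FS=y, so compile the call out on
             * production builds. */
            debugfs_create_x64("some_counter", S_IRUGO,
                               platform->debugfs, &g->some_counter);
    #endif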
Instead of having the debug prints from the allocators be
warnings, they should just be regular prints.
Bug 1799159
Change-Id: Ic6e3c38fa286c4acd6fcba51dc59158dc2d655fc
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1201372
(cherry picked from commit 107caf4ce68a7c76023ee1e66a98c5570f401059)
Reviewed-on: http://git-master/r/1208478
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
One of the flags that is defined for allocators has not yet
been implemented. This clarifies the comment and explains why
the flag has been defined even though it is not yet implemented.
Bug 1799159
Change-Id: I1e84439d63ca391941cee8e5362ffd9cc959744b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1201371
(cherry picked from commit 8e6566b173f17d9c169a9fa0f6104f4bbf608dc1)
Reviewed-on: http://git-master/r/1208477
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Add checks to make sure function pointers are valid before attempting
to call said function.
Also, ensure that any allocator created defines the following 3 functions
at minimum:
alloc()
free()
fini()
Bug 1799159
Change-Id: I4cd3d5746ccb721c723a161c9487564846027572
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1200059
(cherry picked from commit e26557a49d7ca6629ada24f12a3be396b0ae22cd)
Reviewed-on: http://git-master/r/1208476
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
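
A sketch of the constructor-time validation; the ops-struct shape
is assumed from the commit text:

    static int __gk20a_alloc_common_init(struct gk20a_allocator *a,
                    const char *name, void *priv,
                    const struct gk20a_allocator_ops *ops)
    {
            /* every allocator must at least alloc, free and fini */
            if (!ops || !ops->alloc || !ops->free || !ops->fini)
                    return -EINVAL;

            a->ops = ops;
            a->priv = priv;
            strlcpy(a->name, name, sizeof(a->name));
            return 0;
    }

    /* optional ops get guarded at their call sites */
    u64 gk20a_alloc_base(struct gk20a_allocator *a)
    {
            return a->ops->base ? a->ops->base(a) : 0;
    }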
Add GPU debugging to the semaphore code.
Bug 1732449
JIRA DNVGPU-12
Change-Id: I98466570cf8d234b49a7f85d88c834648ddaaaee
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1198594
(cherry picked from commit 420809cc31fcdddde32b8e59721676c67b45f592)
Reviewed-on: http://git-master/r/1153671
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new debug message type: gpu_dbg_map_v. This is used for mapping
messages that are not specifically memory map operations.
Also clean up the memory mapping debugging a bit, since there
was one duplicate print and the memory map print was difficult
to parse visually. As a result, the message has been modified
to put the most important information first, in an easily
readable format.
Bug 1732449
JIRA DNVGPU-12
Change-Id: Ib19c9371ee958009ab5a2d89b9610e699d070ee2
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1198593
(cherry picked from commit 51dba53b06ca171cdb13d1707f2d026b0ce29f07)
Reviewed-on: http://git-master/r/1147670
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add semaphore debugging information to the gk20a channel state
debug dump.
Bug 1732449
JIRA DNVGPU-12
Change-Id: I7caafd4f6420e1c478be22e236513603c315ce5e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1198592
(cherry picked from commit 3fa247adf5fdd8c9b16a24fec00903fdc3abc90a)
Reviewed-on: http://git-master/r/1133793
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Implement an allocator suitable for managing the video memory on dGPUs.
It works by allocating chunks from an underlying buddy allocator and
collating the chunks together (similar to what an sgt does in the
wider Linux kernel). This handles the ability to get large buffers in
potentially fragmented memory. The GMMU can then obviously map the
physical vidmem into contiguous GVA spaces.
Jira DNVGPU-96
Change-Id: Ic1d7800b033a170b77790aa23fad6858443d0e89
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1197203
(cherry picked from commit fa44684a843956ae384fef6d7a79b9cbbd04f73e)
Reviewed-on: http://git-master/r/1185231
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
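
Structurally, an allocation is a list of chunks carved out of the
buddy allocator, much like a scatter-gather table; a sketch of the
data layout with illustrative field names:

    struct page_alloc_chunk {
            struct list_head list_entry;
            u64 base;    /* physical vidmem address of this chunk */
            u64 length;  /* size of this chunk in bytes */
    };

    struct gk20a_page_alloc {
            struct list_head alloc_chunks; /* collated chunks */
            int nr_chunks;
            u64 length;  /* total bytes, possibly discontiguous */
            u64 base;    /* handle identifying this allocation */
    };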
Add NVIDIA-REVIEWERS to designate nvgpu module owners
bug 200195707
Change-Id: Idfa661f38ec231d79cd53dad7ef487e1ac4138aa
Signed-off-by: Vandana Salve <vsalve@nvidia.com>
Reviewed-on: http://git-master/r/1206269
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Flushing the timestamp record method can fail in case FECS is
not processing the main method queue. In particular, this
occurs in case of a ctxsw timeout, where we process fifo sched
interrupts from the host, but FECS is still waiting for idle
(grWFI). In such a scenario, this adds a huge delay to the fifo
recovery procedure (timeout on the FECS method). Since flushing
the last (incomplete) record from FECS would only be useful in
that case (context switch ongoing), remove the flush operation
on engine reset. Note that an explicit ENGINE_RESET event (with
pid) is inserted into the user-facing ctxsw buffer on engine
reset.
Bug 200228310
Change-Id: I885525f8f197f81266b50db161bb511867fc74f4
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: http://git-master/r/1207305
(cherry picked from commit 44391b6204fd648949295f90481b0c424d9a5ddf)
Reviewed-on: http://git-master/r/1208414
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
If a channel is part of a TSG, report the TSG's interleave in
the debugfs sched parameters.
Bug 200228310
Change-Id: I2eeee7aacfa92f9d5fc367225a23a663ca6ac593
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: http://git-master/r/1207304
(cherry picked from commit 1950ae679f112dcf24a7f3c695d4ab098de10326)
Reviewed-on: http://git-master/r/1208413
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
While collecting failing engine data, the id type (is_tsg) was
not set for the ctxsw and save engine states. This could result
in some ctxsw timeout interrupts being ignored (id reported
with the wrong is_tsg).
For TSGs, check whether we made some progress on any of the
channels before kicking off fifo recovery.
Bug 200228310
Jira EVLR-597
Change-Id: I231549ae68317919532de0f87effb78ee9c119c6
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: http://git-master/r/1204035
(cherry picked from commit 7221d256fd7e9b418f7789b3d81eede8faa16f0b)
Reviewed-on: http://git-master/r/1204037
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Set a timeout for channel force reset, to prevent submitting
immediately.
Bug 1778448
JIRA VFND-2097
Change-Id: I729597a68cbdc5ed3f6c878955a10ae7c4659fa4
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1203298
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
- avoid dumping gr registers for vgpu
- init the wdt lock
Bug 1776876
JIRA VFND-2151
Change-Id: I73293e0d23b614129c763cb22b09156a8e1432cc
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1202256
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
- let force_reset_ch pass down the error code
- the force_reset_ch callback can cover vgpu too
Bug 1776876
JIRA VFND-2151
Change-Id: I48f7890294c6455247198e0cab5f21f83f61f0e1
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1202255
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
It'll cover multiple GPCs.
JIRA VFND-2103
Change-Id: Ie82bdaad360294696c5a679d694f6f0e2364ca2e
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1194631
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
Move the attributes below to constants:
TEGRA_VGPU_ATTRIB_GPC_COUNT
TEGRA_VGPU_ATTRIB_MAX_TPC_PER_GPC_COUNT
TEGRA_VGPU_ATTRIB_MAX_TPC_COUNT
TEGRA_VGPU_ATTRIB_NUM_FBPS
TEGRA_VGPU_ATTRIB_FBP_EN_MASK
TEGRA_VGPU_ATTRIB_MAX_LTC_PER_FBP
TEGRA_VGPU_ATTRIB_MAX_LTS_PER_LTC
JIRA VFND-2103
Change-Id: Ic2ac14a0f8a1cf19a996bcef20bef0003d3b9a3b
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1194630
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
Move the attributes below to constants:
TEGRA_VGPU_ATTRIB_GPC0_TPC0_SM_ARCH
JIRA VFND-2103
Change-Id: I5d6aa8f4a49e65307989ef02d223c3ee31fcdeed
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1190481
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
Move the attributes below to constants:
TEGRA_VGPU_ATTRIB_COMPTAG_LINES
TEGRA_VGPU_ATTRIB_L2_SIZE
TEGRA_VGPU_ATTRIB_CACHELINE_SIZE
TEGRA_VGPU_ATTRIB_COMPTAGS_PER_CACHELINE
TEGRA_VGPU_ATTRIB_SLICES_PER_LTC
TEGRA_VGPU_ATTRIB_LTC_COUNT
JIRA VFND-2103
Change-Id: Iecf9717ee553a16ffe8de445be5bfe5a99c3a094
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1190480
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
JIRA VFND-2103
Change-Id: I180324b36a1c6b39300b92a2cff6448d7665679b
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1190479
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
Move the fetching of constant attributes into one command,
which will be called only once.
This patch adds the basic infrastructure, plus gpu arch info,
max_freq and num_channels support.
JIRA VFND-2103
Change-Id: I100599b49f29c99966f9e90ea381b1f3c09177a3
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1189832
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
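
The shape of the change is one RPC returning a struct of constants,
fetched once at init, instead of a round trip per attribute; a
sketch in which the message and field names are assumptions based
on the surrounding commits:

    struct tegra_vgpu_constants_params {
            u32 arch;           /* gpu arch info */
            u32 impl;
            u32 rev;
            u32 max_freq;
            u32 num_channels;
            /* later patches fold in gpc/tpc/fbp/ltc counts, etc. */
    };

    static int vgpu_get_constants(struct gk20a *g,
                                  struct tegra_vgpu_constants_params *p)
    {
            struct tegra_vgpu_cmd_msg msg = {};
            int err;

            msg.cmd = TEGRA_VGPU_CMD_GET_CONSTANTS; /* assumed name */
            err = vgpu_comm_sendrecv(&msg, sizeof(msg), sizeof(msg));
            if (!err)
                    *p = msg.params.constants;
            return err;
    }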
Move vgpu private data to a dedicated structure and allocate it
at probe time. Also add a virt_handle helper function, which is
used everywhere.
JIRA VFND-2103
Change-Id: I125911420be72ca9be948125d8357fa85d1d3afd
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1185206
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>