This CL covers the following small modifications:
1) Add proper memset size handling during pmu surface cleanup
2) Reset the pmu surface mem desc pointer after deallocating the memory
JIRA DNVGPU-47
Change-Id: I400f8c4d3f5dc650d4fc6669cef6a1e41a70f4ab
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: http://git-master/r/1220100
(cherry picked from commit 1f171b977be51db20c2dfc56b3f6e3dd6b4b9095)
Reviewed-on: http://git-master/r/1240881
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
nvgpu driver fails to compile when the CONFIG_PM build option is set to 'n'.
Fix this by guarding struct gk20a_pm_ops, and the functions it points to,
with #ifdefs.
Bug 1827482
Change-Id: I27f3535e89cc741f79824cdc427ef3572e2779e6
Signed-off-by: Timo Alho <talho@nvidia.com>
Reviewed-on: http://git-master/r/1237110
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
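A minimal sketch of the guard described above, with simplified stand-in callbacks (the _sketch names are illustrative, not the driver's actual symbols):

#include <linux/device.h>
#include <linux/pm.h>

#ifdef CONFIG_PM
/* Stub callbacks standing in for the real suspend/resume handlers. */
static int gk20a_pm_suspend_sketch(struct device *dev) { return 0; }
static int gk20a_pm_resume_sketch(struct device *dev) { return 0; }

/* Both the ops table and its callbacks compile only when CONFIG_PM=y. */
static const struct dev_pm_ops gk20a_pm_ops_sketch = {
	.suspend = gk20a_pm_suspend_sketch,
	.resume  = gk20a_pm_resume_sketch,
};
#endif /* CONFIG_PM */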
Avoid an out-of-bounds access when searching the bit header.
JIRA VFND-2826
Change-Id: Icbde7c7e04c35c29f316d8a0ad93c76fcb8fae7a
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1240185
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This change makes the invocation of the deferred job clean-up
mechanism conditional. For submissions that require job tracking,
deferred clean-up is only required if any of the following
conditions are met:
1) Channel's deterministic flag is not set
2) Rail-gating is enabled
3) Channel WDT is enabled
4) Buffer refcounting is enabled
5) Dependency on Sync Framework
When deferred clean-up is not needed, we clean up a single
job-tracking resource in the submit path. For deterministic
channels, we do not allow deferred clean-up to occur, and we
fail any submits that would require it.
Bug 1795076
Change-Id: I4021dffe8a71aa58f12db6b58518d3f4021f3313
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1220920
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
(cherry picked from commit b09f7589d5ad3c496e7350f1ed583a4fe2db574a)
Reviewed-on: http://git-master/r/1223941
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
Add a per-platform flag, run_preos, which indicates whether to run the
preos ucode. Leave it set to false for all known boards.
Bug 1799537
Bug 1815139
Change-Id: I1818970b0f70f636277443d6de199d3683fc565a
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1233410
(cherry picked from commit 8bea05dbfa64af88587edb8927a8ec71c6b0d807)
Reviewed-on: http://git-master/r/1239956
GVS: Gerrit_Virtual_Submit
Return NULL instead of ERR_PTR from __gk20a_alloc_slab to be consistent
with __gk20a_alloc_pages, and thus to work with an error check in
gk20a_page_alloc in out-of-memory conditions.
Bug 1799159
JIRA DNVGPU-100
Change-Id: I8c3c0e121840758c6aba860baac86a38e873e359
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1227730
(cherry picked from commit 209927a6b3bae4fddc2a6a745c1b4b1f46c6675c)
Reviewed-on: http://git-master/r/1235192
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Tested-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Coverity detected a possible overflow during the left shift.
This is likely not a big problem, though, since the number
of pages to allocate would have to be greater than 2^32
(that would be 4 TB of memory assuming 4k page size and the
literal 1 being a signed int by default).
Bug 1799159
Change-Id: Ie1d6522defd13c794eb95aeee8c5c4203db00ebf
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1238632
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
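As a hedged illustration of this class of fix (not the allocator's exact expression), widening the shifted operand keeps the arithmetic in 64 bits:

#include <linux/types.h>

/* "1 << order" is evaluated as a 32-bit signed shift and can overflow;
 * using a 64-bit literal makes the whole expression 64-bit. */
static inline u64 order_to_bytes(u32 order)
{
	return 1ULL << order;
}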
Fix the fence framework's usage of the allocator APIs to be
compatible with 32-bit architectures.
Bug 1795076
Change-Id: Ia677f9842c36d482d4e82e9fa09613702f3111b3
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1237904
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This change fixes up the calculation of the number of gpfifo entries
to allocate, depending on the ioctl used:
1) For the legacy ALLOC_GPFIFO ioctl, we preserve the calculation
of gpfifo entries within the kernel.
2) For the new ALLOC_GPFIFO_EX ioctl, we assume that userspace has
pre-calculated a power-of-2 value. We process this value unmodified
and only verify that it is a valid power of 2.
Bug 1795076
Change-Id: I8d2ddfdae40b02fe6b81e63dfd8857ad514a3dfd
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1220968
(cherry picked from commit c42396d9836e9b7ec73e0728f0c502b63aff70db)
Reviewed-on: http://git-master/r/1223937
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
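A minimal sketch of the _EX-path validation described above (the function name is illustrative):

#include <linux/errno.h>
#include <linux/log2.h>
#include <linux/types.h>

/* The entry count arrives from userspace already sized; the kernel only
 * verifies that it is a power of two. */
static int validate_gpfifo_entries(u32 num_entries)
{
	if (!is_power_of_2(num_entries))
		return -EINVAL;

	return 0;
}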
This change adds a new ioctl flag,
NVGPU_SUBMIT_GPFIFO_FLAGS_DETERMINISTIC, which indicates that a
gpfifo submission must exhibit deterministic behavior within the
kernel.
For submissions that require job tracking and also set
this flag, we require the channel to have previously
pre-allocated job tracking resources.
Bug 1795076
Change-Id: I0496a2513c6c683fcda161b32db9e7ee6712d45c
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1210527
(cherry picked from commit 0a36a0ce3a6cbe398931993e742fc928f7b2c0aa)
Reviewed-on: http://git-master/r/1223935
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
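A hedged sketch of the policy described above; the helper and the flag value are placeholders (the real flag is defined in the nvgpu UAPI header):

#include <linux/errno.h>
#include <linux/types.h>

#define SUBMIT_FLAGS_DETERMINISTIC	(1U << 4)	/* placeholder value */

/* A deterministic submit that needs job tracking is rejected unless the
 * channel pre-allocated its tracking resources. */
static int check_deterministic_submit(u32 flags, bool need_job_tracking,
				      bool prealloc_enabled)
{
	if ((flags & SUBMIT_FLAGS_DETERMINISTIC) &&
	    need_job_tracking && !prealloc_enabled)
		return -EINVAL;

	return 0;
}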
Add support for pre-allocation of job tracking resources
with a new (extended) ioctl. The goal is to avoid dynamic memory
allocation in the submit path. This patch does the following:
1) Introduces a new ioctl, NVGPU_IOCTL_CHANNEL_ALLOC_GPFIFO_EX,
which enables pre-allocation of tracking resources per job:
a) 2x priv_cmd_entry
b) 2x gk20a_fence
2) Implements a circular ring buffer for job
tracking to avoid lock contention between the producer
(submitter) and the consumer (clean-up)
Bug 1795076
Change-Id: I6b52e5c575871107ff380f9a5790f440a6969347
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1203300
(cherry picked from commit 9fd270c22b860935dffe244753dabd87454bef39)
Reviewed-on: http://git-master/r/1223934
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
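A minimal sketch of the circular job ring described in item 2; the structure and field names are illustrative, not the driver's:

#include <linux/types.h>

struct job_slot_sketch {
	void *cmd_entries;	/* stands in for the 2x priv_cmd_entry */
	void *fences;		/* stands in for the 2x gk20a_fence */
};

struct job_ring_sketch {
	struct job_slot_sketch *slots;	/* pre-allocated, power-of-2 length */
	u32 size;
	u32 put;	/* advanced by the submitter */
	u32 get;	/* advanced by deferred clean-up */
};

/* With a power-of-2 size, unsigned wrap-around of put/get keeps the
 * indexing correct without a shared lock on the hot path. */
static inline bool job_ring_full(const struct job_ring_sketch *r)
{
	return r->put - r->get == r->size;
}

static inline struct job_slot_sketch *job_ring_next(struct job_ring_sketch *r)
{
	return &r->slots[r->put & (r->size - 1)];
}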
This change is the first of a series of changes to
support the usage of pre-allocated job tracking resources
in the submit path. With this change, we still maintain a
dynamically-allocated joblist, but make the necessary changes
in the channel_sync & fence framework to use in-place
allocations. Specifically, we:
1) Update channel sync framework routines to take in
pre-allocated priv_cmd_entry(s) & gk20a_fence(s) rather
than allocating them internally
2) Move allocation of priv_cmd_entry(s) & gk20a_fence(s)
to gk20a_submit_prepare_syncs
3) Modify the fence framework to have separate allocation
and init APIs. We expose allocation as a separate API, so
the client can allocate the object before passing it into
the channel sync framework.
4) Fix clean_up logic in channel sync framework
Bug 1795076
Change-Id: I96db457683cd207fd029c31c45f548f98055e844
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1206725
(cherry picked from commit 9d196fd10db6c2f934c2a53b1fc0500eb4626624)
Reviewed-on: http://git-master/r/1223933
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
If the user does not close the event fd created on a channel/TSG,
it is possible that the event stays on the channel/TSG
structure and reappears when the channel/TSG is re-opened.
This causes a false failure when we try to enable some event
on the channel/TSG, since we do not allow enabling the same event
twice on the same channel/TSG.
Fix this by removing all enabled events from the channel/TSG
while closing it.
Bug 200243092
Bug 1818654
Change-Id: I9d5ffc89f87cf4c44124f8015c2c2f0587ad2ef4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1237723
(cherry picked from commit 2737a5c86cf5fbfe8a04f6a87764e8ecb9b30555)
Reviewed-on: http://git-master/r/1238266
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Raise the minimum VBIOS version for 0x1c35 to 86.06.30.00.
Bug 1811880
Change-Id: I35f7511c3346394af45b8347e8c40f8d367bf3e0
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1233401
(cherry picked from commit 9a92ef0f0ccf33c08d48d3a769158b30a0a1496c)
Reviewed-on: http://git-master/r/1239429
GVS: Gerrit_Virtual_Submit
Add a minimum VBIOS version field for each SKU. This requires the
gk20a_platform structure to be per SKU.
Also set power_on back to false if there was any error in booting
the GPU.
Bug 1811880
Change-Id: I23ef312f0db7061b31a3d503ded7e41ef45ad6b3
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1227229
(cherry picked from commit 69c9ab4349ec7526a7f8a2fcad01f9128ed4769c)
Reviewed-on: http://git-master/r/1239428
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
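A hedged sketch of the per-SKU check described above; the field and function names are illustrative:

#include <linux/errno.h>
#include <linux/types.h>

struct platform_data_sketch {
	u32 vbios_min_version;	/* per-SKU minimum acceptable VBIOS */
};

/* Checked once during boot; an older VBIOS fails the power-on path. */
static int check_vbios_version(const struct platform_data_sketch *p,
			       u32 vbios_version)
{
	return vbios_version < p->vbios_min_version ? -EINVAL : 0;
}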
Enable probing of gp106 sku10 by adding the device id
to the known device table.
JIRA DNVGPU-72
Change-Id: I2c10914c8510c6081202a374f50ef40371d7d183
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1221123
(cherry picked from commit 17ff6de69f19212b6c6be39496f8e76c8554b861)
Reviewed-on: http://git-master/r/1239427
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
JIRA DNVGPU-117
Change-Id: I8c79e5b2fcad25588c950e786289443ed64fd48d
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: http://git-master/r/1223221
(cherry-picked from commit f3185ad9f141ab32a224046185d0a409a8a513ff)
Reviewed-on: http://git-master/r/1227254
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move ELCG parameter programming to a new function in therm,
elcg_init_idle_filter. Implement gk20a variant and use it for gk20a
and gm20b.
JIRA DNVGPU-74
Change-Id: I8ef400f3a6195311fb9e7da8db6c34993d62f461
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1220433
(cherry picked from commit f6654ae4d83d31cd40b317bf55922964bbfa575d)
Reviewed-on: http://git-master/r/1239421
GVS: Gerrit_Virtual_Submit
We have the following bug, where GPU Host reports non-idle
when it should report the engine as idle:
- if a context is preempted off the GPU and there is
no other context to load, NV_PGRAPH_ENGINE_STATUS
will not report idle until a new context is loaded
- this could cause gr_gk20a_wait_idle() to fail, since
there we rely only on NV_PGRAPH_ENGINE_STATUS to
decide whether the engine is busy or not
To fix this, first check whether the context is valid
from NV_PFIFO_ENGINE_STATUS_CTX_STATUS.
If the context is invalid, return immediately;
otherwise, continue as before.
Also, add accessors for the invalid ctx_status.
Bug 1826768
Change-Id: Id627be3f02e79f4beac59a8b5195d08eabf651f2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1237521
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Avoid doing 64-bit division in the page allocator, since it causes
problems on 32-bit platforms.
Bug 1799159
Change-Id: I5166a71a4e84454686cce6d6cdca678a862a7ae7
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1236799
(cherry picked from commit 21c091d1c00433acb9965c3d348d16fbb4c50c1a)
Reviewed-on: http://git-master/r/1236195
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
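A minimal sketch of the usual kernel pattern for this (an illustrative helper, not the allocator's actual code): on 32-bit builds a plain u64 division pulls in a libgcc helper, so do_div() or a shift is used instead.

#include <asm/div64.h>
#include <linux/types.h>

static u64 bytes_to_blocks(u64 bytes, u32 blk_size)
{
	do_div(bytes, blk_size);	/* divides 'bytes' in place, 64/32 */
	return bytes;
}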
Add the ability to do "SLAB" allocation in the page allocator. This
is generally useful since the allocator manages 64K pages but often
we only need 4k chunks (for example when allocating memory for page
table entries).
Bug 1799159
JIRA DNVGPU-100
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1225322
(cherry picked from commit 299a5639243e44be504391d9155b4ae17d914aa2)
Change-Id: Ib3a8558d40ba16bd3a413f4fd38b146beaa3c66b
Reviewed-on: http://git-master/r/1227924
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add an optimization to the bitmap allocator for handling sequences of
allocations. A common pattern of allocs from the priv_cmdbuf is to do
many allocs and then many frees. In such cases it makes sense to store
the last allocation offset and start searching for the next alloc from
there. For such a pattern we know that the previous bits are already
allocated, so it doesn't make sense to search them unless we have to.
Obviously, if there's no space found ahead of the previous alloc's block,
then we fall back to the remaining space.
In random allocation patterns this optimization should not have any
negative effects. It merely shifts the start point for searching for
allocs, but assuming each bit has an equal probability of being free, the
start location does not matter.
Bug 1799159
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1205958
(cherry picked from commit 759c583962d6d57cb8cb073ccdbfcfc6db4c1e18)
Change-Id: I267ef6fa155ff15d6ebfc76dc1abafd9aa1f44df
Reviewed-on: http://git-master/r/1227923
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
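A hedged sketch of the search order described above; the allocator state is illustrative:

#include <linux/bitmap.h>
#include <linux/types.h>

struct bitmap_alloc_sketch {
	unsigned long *map;
	unsigned long num_bits;
	unsigned long next_blk;	/* where the previous allocation ended */
};

static unsigned long find_blocks(struct bitmap_alloc_sketch *a,
				 unsigned int len)
{
	unsigned long off;

	/* Start searching where the last alloc ended... */
	off = bitmap_find_next_zero_area(a->map, a->num_bits,
					 a->next_blk, len, 0);
	/* ...and fall back to the space before it only if that fails. */
	if (off >= a->num_bits)
		off = bitmap_find_next_zero_area(a->map, a->num_bits,
						 0, len, 0);

	return off;	/* >= num_bits means no run of 'len' free bits */
}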
Implement the ability to swap between different PCIe bus speeds. This
code is called during init in case the GPU is not running at the max
supported PCIe bus speed.
JIRA DNVGPU-89
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1218178
(cherry picked from commit 8dcd3e10f46f524c9bac9fd5dae0f0a899123c23)
Change-Id: I21f96110578a68d5c5e30ae21776cff69aefba5d
Reviewed-on: http://git-master/r/1227922
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We take an extra power refcount when we disable railgating
through the railgate_enable sysfs node, and that breaks
suspend/resume since we check the power refcount first in
gk20a_pm_suspend().
Fix this as follows:
- set a flag, user_railgate_disabled, when the user
disables railgating through sysfs railgate_enable
- in gk20a_pm_suspend(), drop one power refcount
if the flag is set
- in gk20a_pm_resume(), take one refcount again
if the flag is set
Fix __gk20a_do_idle() to consider this extra refcount as well:
add a new variable, target_ref_cnt, and use it instead of
assuming a target refcount of 1. In case the user has disabled
railgating, set this target refcount to 2.
Also, export gk20a_idle_nosuspend(), which drops the power
refcount without triggering suspend.
Bug 200233570
Change-Id: Ic0e35c73eb74ffefea1cd90d1b152650d9d2043d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1236047
(cherry picked from commit 6e002d57da4b5c58ed79889728bb678d3aa1f1b1)
Reviewed-on: http://git-master/r/1235219
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add function ptr for enabling gpc exceptions
JIRA GV11B-28
JIRA GV11B-27
Change-Id: I4c7e4300825bf096c22f229ae7196f324ce40037
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: http://git-master/r/1236902
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Page table updates have an explicit write barrier at the end of a pte
update operation in update_gmmu_ptes_locked(), so the per-wr32 wmb()s
are not necessary.
Jira DNVGPU-23
Change-Id: I2e2596f0900d840fadb369ee1261c5e2305f2070
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1225150
(cherry picked from commit 6664a667ea326e9663a6b502765f858d8669f4d9)
Reviewed-on: http://git-master/r/1227475
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
A wmb() next to each gk20a_mem_wr32() via PRAMIN may be overly careful,
so, for performance, support not inserting these barriers in cases
where they are not necessary, i.e., where the caller does an explicit
barrier itself after a batch of writes.
Also, move those optional wmb()s to be done at the end of the whole
internally batched write for gk20a_mem_{wr_n,memset} from the per-batch
subloops that may run multiple times.
Jira DNVGPU-23
Change-Id: I61ee65418335863110bca6f036b2e883b048c5c2
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1225149
(cherry picked from commit d2c40327d1995f76e8ab9cb4cd8c76407dabc6de)
Reviewed-on: http://git-master/r/1227474
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
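A minimal sketch of the batching idea (illustrative, not the gk20a_mem_* implementation): do the relaxed writes, then emit a single optional barrier at the end.

#include <asm/barrier.h>
#include <linux/io.h>
#include <linux/types.h>

static void mem_wr_n_sketch(u32 __iomem *dst, const u32 *src, int words,
			    bool do_wmb)
{
	int i;

	for (i = 0; i < words; i++)
		writel_relaxed(src[i], dst + i);

	if (do_wmb)
		wmb();	/* callers that batch further can pass false */
}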
wmb() should come after the writes to ensure that the writes have
completed before progressing.
Bug 1811382
Change-Id: I98fba317b1760240c0b5de531accf398fe69c9b3
Signed-off-by: Alex Waterman <alexw@nvidia.com>
(cherry picked from commit 1b1201b9c109061590e6e25260d7230ae2c89888)
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1225251
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add support for GPU railgating using common clock framework and Tegra
DVFS on k4.4
Bug: 200233943
Change-Id: Ief9afd7a5bf3f447e9b91ab181f26dcefff0a8c8
Signed-off-by: Peter Boonstoppel <pboonstoppel@nvidia.com>
Reviewed-on: http://git-master/r/1232290
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Add NVGPU_GPU_IOCTL_GET_MEMORY_STATE to read the amount of free
device-local video memory, if applicable.
Some reserved fields are added to support different types of queries in
the future (e.g. context-local free amount).
Bug 1787771
Bug 200233138
Change-Id: Id5ffd02ad4d6ed3a6dc196541938573c27b340ac
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1223762
(cherry picked from commit 96221d96c7972c6387944603e974f7639d6dbe70)
Reviewed-on: http://git-master/r/1235980
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
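An illustrative argument layout for a query of this shape; the real UAPI struct in nvgpu's headers may differ in names and reserved-field count:

#include <linux/types.h>

struct gpu_memory_state_args_sketch {
	__u64 total_free_bytes;	/* free device-local video memory */
	__u64 reserved[4];	/* room for future query types */
};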
Amount of free space in the buddy allocator is computed from the
complete capacity minus currently used bytes.
The page allocator just queries its underlying allocator.
Bug 1787771
Bug 200233138
Change-Id: I9b6f5ef90119236a13de14e14cd0a3ee72144a11
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1223761
(cherry picked from commit 0b324a60ebdf67e793ade869c252a8ddd56c04f8)
Reviewed-on: http://git-master/r/1235979
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
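A minimal sketch of the computation described above; the field names are illustrative:

#include <linux/types.h>

struct buddy_allocator_sketch {
	u64 length;		/* total bytes managed */
	u64 bytes_alloced;	/* bytes currently handed out */
};

static u64 buddy_alloc_space(const struct buddy_allocator_sketch *a)
{
	return a->length - a->bytes_alloced;
}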
Add gk20a_alloc_space() to query the amount of free memory in bytes in
gk20a_allocator.
Bug 1787771
Bug 200233138
Change-Id: Ia381cafd5d2dbf394072d07be96991974f9289ae
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1223760
(cherry picked from commit 0c7fd96acc3e3d581bd25ccbe40b0821a310760f)
Reviewed-on: http://git-master/r/1235978
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Change clears_pending to bytes_pending, and track the number of bytes
still to be freed instead of the number of buffers. This, combined
atomically with the amount of space in the allocator, gives the total
amount of free memory available.
Bug 200233138
Change-Id: Ibbb4e80a32728781ba19a74307d8a8ac1a4d7431
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1231422
(cherry picked from commit 025e765f312c253b201ecf2dbbe0f4972fe1d4bc)
Reviewed-on: http://git-master/r/1235957
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The boolean flag mm_gk20a.vidmem.cleared is shared across threads, so
mark it volatile to prevent the compiler from wrongly optimizing
accesses to it.
Jira DNVGPU-84
Change-Id: I1fe66b26966685d3f74ed95ba53b198f810231b9
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1233016
(cherry picked from commit dc6c9db56ea8a5f55f28f97fdfc3c1ac60d8b195)
Reviewed-on: http://git-master/r/1235317
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
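A sketch of the flag as shared state (illustrative struct); newer kernel code would more likely use READ_ONCE()/WRITE_ONCE() for the same effect.

#include <linux/types.h>

struct vidmem_state_sketch {
	volatile bool cleared;	/* re-read on every access, never cached */
};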
There are eight tiles per map tile register, and
depending on how many TPCs are present, there is
a chance that s/w will access unallocated memory
when reading tile values from temp buffers.
Bug 1735760
Change-Id: I5c0e09ec75099aaf6ad03dde964b9e93c2dc2408
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: http://git-master/r/1221580
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Query sw veid bundles from sim/netlist and
initialize hardware with those bundles.
JIRA GV11B-11
Change-Id: I26f174781f0b00b919afac407e2bb9e1fa7b158a
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: http://git-master/r/1231597
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
The lowest page table level may hold very few entries for mappings of
large pages, but a new page is allocated for each list of entries at the
lowest level, wasting memory and performance. Compact these so that the
new "allocation" of ptes is appended at the end of the previous
allocation, if there is space.
A 4 KB page is still the smallest size requested from the allocator; any
possible overhead in the allocator (e.g., internally allocating big
pages only) is not taken into account.
Bug 1736604
Change-Id: I03fb795cbc06c869fcf5f1b92def89a04583ee83
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1221841
(cherry picked from commit fa92017ed48e1d5f48c1a12c512641c6ce9924af)
Reviewed-on: http://git-master/r/1234996
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add support for setting up chip-specific ROP mapping.
JIRA GV11B-21
Change-Id: If94f0de7d767f572095602a831ad6be4b764fff4
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: http://git-master/r/1234547
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Protect the initial vidmem zeroing performed during the first userspace
alloc with a mutex, so that it blocks subsequent concurrent users and is
run only once. Otherwise, multiple clears could end up running in
parallel, and the later ones could corrupt memory already allocated and
in use by the thread that finished first.
Jira DNVGPU-84
Change-Id: If497749abf481b230835250191d011c4a9d1483b
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1232461
(cherry picked from commit 79435a68e6d2713b78acdb0ec6f77cfd78651d7f)
Reviewed-on: http://git-master/r/1234990
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
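A hedged sketch of the run-once pattern described above; the clear_all_vidmem() helper is hypothetical:

#include <linux/mutex.h>
#include <linux/types.h>

static int clear_all_vidmem(void);	/* hypothetical helper */

static DEFINE_MUTEX(first_clear_mutex);
static bool vidmem_cleared;

static int ensure_vidmem_cleared(void)
{
	int err = 0;

	mutex_lock(&first_clear_mutex);
	if (!vidmem_cleared) {
		err = clear_all_vidmem();	/* only the first caller runs this */
		if (!err)
			vidmem_cleared = true;
	}
	mutex_unlock(&first_clear_mutex);

	return err;
}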
css_vgpu.c: fix warnings found by sparse tool
Bug 200088648
Change-Id: I4d9d33c6438be8fa8f2ba9cb0b2f786c46b7c9ce
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: http://git-master/r/1234605
Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Amit Sharma (SW-TEGRA) <amisharma@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Retrieve VBIOS version from biosdata VBIOS structure.
Bug 1811880
Change-Id: I24f4114ce7c8925bde4b195888da62454707b8e6
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1218062
(cherry picked from commit fb23e9522bc268fcf0d71cc7f2ae9a0bc6cfda23)
Reviewed-on: http://git-master/r/1234089
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
JIRA DNVGPU-74
Change-Id: I67c65db4bd3389105299753ca92e94aac362b12e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1220434
(cherry picked from commit 3fd5dc515fc4f40ebf7a7e447f6a3182c5fe3c08)
Reviewed-on: http://git-master/r/1234090
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Added current temperature reading support for gp10x.
JIRA DNVGPU-48
Change-Id: If101a68a8a25d741ad5d3d79087142604d7da398
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: http://git-master/r/1213713
(cherry picked from commit 0048cfdb1b642be896da8300b29aaae9ba43a979)
Reviewed-on: http://git-master/r/1234093
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When bar1 memory is not supported, userd will be allocated
from sysmem.
The gp_get and gp_put functions are updated accordingly.
JIRA GV11B-1
Change-Id: Ia895712a110f6cca26474228141488f5f8ace756
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: http://git-master/r/1225384
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
A use-after-free scenario is possible where one thread in
gk20a_free_error_notifiers() is trying to free the error
notifier while another thread in gk20a_set_error_notifier()
is still using it.
Fix this by introducing a mutex, error_notifier_mutex, for
error notifier accesses.
Take the mutex in gk20a_free_error_notifiers() and in
gk20a_set_error_notifier() before accessing the notifier.
In gk20a_init_error_notifier(), set the pointer
ch->error_notifier_ref inside the mutex, and only
after the notifier is completely initialized.
Bug 1824788
Change-Id: I47e1ab57d54f391799f5a0999840b663fd34585f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1233988
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
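A minimal sketch of the locking described above; the channel and notifier types are stand-ins for the real ones:

#include <linux/mutex.h>
#include <linux/types.h>

struct notifier_sketch {
	u32 info32;
};

struct channel_sketch {
	struct mutex error_notifier_mutex;
	struct notifier_sketch *error_notifier_ref;
};

/* Teardown and update take the same mutex, so the notifier can no longer
 * be freed while another thread is writing to it. */
static void free_error_notifier(struct channel_sketch *ch)
{
	mutex_lock(&ch->error_notifier_mutex);
	ch->error_notifier_ref = NULL;	/* unmap/free happens under the lock */
	mutex_unlock(&ch->error_notifier_mutex);
}

static void set_error_notifier(struct channel_sketch *ch, u32 error)
{
	mutex_lock(&ch->error_notifier_mutex);
	if (ch->error_notifier_ref)
		ch->error_notifier_ref->info32 = error;
	mutex_unlock(&ch->error_notifier_mutex);
}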
Fix below sparse warning by including nvgpu_common.h
from nvgpu_common.c
nvgpu/drivers/gpu/nvgpu/nvgpu_common.c:105:5: warning: symbol
'nvgpu_probe' was not declared. Should it be static?
Bug 200088648
Change-Id: I81f20a5be1c16ba33d6c17a6c72836107878d1df
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1233960
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Suppress the error message when nvgpu tries to load a VBIOS overlay but
one is not found; this situation is normal. This is done by moving
gk20a_request_firmware() to be a generic nvgpu function,
nvgpu_request_firmware(), and adding a NO_WARN flag to it.
Also introduce a NO_SOC flag to suppress the attempt to load firmware
from the SoC-specific directory in addition to the chip-specific
directory. Use it for dGPU firmware files.
Bug 200236777
Change-Id: I0294d3308f029a6a6d3c2effa579d5f69a91e418
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1223840
(cherry picked from commit cca44c3f010f15918cdd2259c15170ba1917828a)
Reviewed-on: http://git-master/r/1233353
GVS: Gerrit_Virtual_Submit
JIRA DNVGPU-72
JIRA DNVGPU-73
Change-Id: I5932779f6913b55692f69fac692a1a66a9912fc4
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1216562
(cherry picked from commit d44be7714afa1f4257a81799c326b453da3d2d5a)
Reviewed-on: http://git-master/r/1233350
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
JIRA DNVGPU-118
Move vidmem allocation for pmuboardobj to cmd-specific
functions and copy data from the PMU in case of getstatus.
Includes fixes for the getstatus boardobjgrp implementation,
and adds one #define for the rail id to make getstatus of the
VF table more meaningful.
Change-Id: I366a022c13e51e823116ce2354794babc48981a2
Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
Reviewed-on: http://git-master/r/1209841
(cherry picked from commit 8c12599f801decc77bbc1acfd1937dfefb21f35e)
Reviewed-on: http://git-master/r/1231839
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the known dGPU SKUs to the PCIe device id table, and remove the
ANY_GPU_ID wildcard. This makes nvgpu not try to probe unknown GPUs.
JIRA DNVGPU-72
Change-Id: Ie32c3137e9fa89a9e6dcf1e578c0b9d7339d7e75
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1219129
(cherry picked from commit 5c56088fbf8cb815d8be3355ecbb597fb7bfc795)
Reviewed-on: http://git-master/r/1231042
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit