Commit log

Add an upper limit for cde contexts, and wait for a while if a new
context is queried and the limit has been exceeded. This happens only
under very high load. If the timeout is exceeded, report -EAGAIN.
Change-Id: I1fa47ad6cddf620eae00cea16ecea36cf4151cab
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/601719
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
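
A minimal sketch of the scheme described above, assuming an illustrative cde_app layout (the field names, the limit and the timeout are not the actual nvgpu values):

```c
#include <linux/wait.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

#define MAX_CDE_CTX      32      /* illustrative upper limit */
#define CDE_CTX_WAIT_MS  1000    /* illustrative timeout */

struct cde_app {
    wait_queue_head_t ctx_wq;    /* woken whenever a context is released */
    int ctx_count;               /* contexts currently allocated */
};

/* Wait for room below the limit; give up with -EAGAIN if the wait times out. */
static int cde_wait_for_free_slot(struct cde_app *app)
{
    long ret;

    /* Locking around ctx_count is omitted in this sketch. */
    ret = wait_event_interruptible_timeout(app->ctx_wq,
            app->ctx_count < MAX_CDE_CTX,
            msecs_to_jiffies(CDE_CTX_WAIT_MS));
    if (ret == 0)
        return -EAGAIN;          /* limit still exceeded after the timeout */
    if (ret < 0)
        return ret;              /* interrupted */
    return 0;
}
```
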
Change-Id: I8b7e86afb68adf6dd33b05995d0978f42d57e7b7
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/554185
GVS: Gerrit_Virtual_Submit

Query interrupt number and reset id from HW. Use the number
from HW when enabling and detecting interrupts.
Bug 200036089
Bug 1567274
Change-Id: If9cb4db79a19dcb193ba7ad9db7081f4fe1ab433
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/600988

Bug 200052943
Change-Id: Ied6454bbfb5df9ab29497ecbf2aac495f6d89362
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/602887
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

Create debugfs nodes for ctx_count, ctx_usecount and ctx_cont_top.
Change-Id: I1360853b2650d37a96c8adf76368d48d9b457909
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/602860
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
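
A sketch of how such nodes can be created with the standard debugfs helpers; the counter struct and field names here are illustrative:

```c
#include <linux/debugfs.h>
#include <linux/stat.h>
#include <linux/types.h>

struct cde_debug_counters {
    u32 ctx_count;      /* contexts currently allocated */
    u32 ctx_usecount;   /* total number of context uses */
    u32 ctx_count_top;  /* high-water mark of allocated contexts */
};

static void cde_debugfs_init(struct dentry *parent,
                             struct cde_debug_counters *c)
{
    /* Read-only nodes; the files track the values as they change. */
    debugfs_create_u32("ctx_count", S_IRUGO, parent, &c->ctx_count);
    debugfs_create_u32("ctx_usecount", S_IRUGO, parent, &c->ctx_usecount);
    debugfs_create_u32("ctx_cont_top", S_IRUGO, parent, &c->ctx_count_top);
}
```
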
Rescheduling the temp context deleter when deletion is not immediately
possible is unnecessary and complicates things, so don't do it.
The context will be reused later anyway when its turn comes in the free
list, and the deletion will then be retried.
This simplifies canceling the work when shutting down or going into
suspend, since re-canceling possibly rescheduled work is no longer needed.
Releasing the app mutex is still necessary when deleting the whole cde.
Bug 200052943
Change-Id: I06afe1766097a78d7bcb93f3140855799ac903ca
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/601035
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

Cancel the temporary context deleter work when acquiring a channel for
use; this avoids having to re-schedule the deleter when the same context
is quickly re-used and finished twice or more in a row before deletion.
Change-Id: Iadd8230d9462adc451e506152a24c50a920a59e3
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/600273
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
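
Roughly, the acquire path then does something like the following (struct layout and names are illustrative):

```c
#include <linux/workqueue.h>
#include <linux/list.h>
#include <linux/types.h>

struct cde_ctx {
    struct list_head list;
    struct delayed_work ctx_deleter_work;  /* deletes an idle temporary ctx */
    bool is_temporary;
};

/* Called with the cde mutex held when a context is taken into use. */
static void cde_ctx_acquire(struct cde_ctx *ctx)
{
    /*
     * A temporary context may still have its deleter pending from the
     * previous release; cancel it so the context is not torn down (and
     * the work is not needlessly rescheduled) while it is being reused.
     */
    if (ctx->is_temporary)
        cancel_delayed_work(&ctx->ctx_deleter_work);
}
```
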
If any gk20a update callback work is pending, wait for it to finish
before going to suspend.
Bug 200052943
Change-Id: Ib469db6e29d13ae26aaca5fceb1ccd20f18bfc3c
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/601034
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
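
A sketch of that wait, assuming the update callback runs from a per-channel work_struct (names are illustrative):

```c
#include <linux/workqueue.h>

struct channel_gk20a_sketch {
    struct work_struct update_fn_work;  /* runs the channel update callback */
};

/* Called on the suspend path: make sure no callback work is still running. */
static void gk20a_wait_channel_update_work(struct channel_gk20a_sketch *ch)
{
    /*
     * flush_work() returns only after any queued or running instance of
     * the work has completed, so the callback cannot fire mid-suspend.
     */
    flush_work(&ch->update_fn_work);
}
```
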
Release rw_sema properly when the block allocator runs out of memory and
returns an error.
Change-Id: I6b7cf9564ae25ad1ba30edfcb1ae8a20cf7dc9db
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/601792
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
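
The shape of the fix, sketched with a generic allocator callback: every early return taken after down_read() must be paired with an up_read():

```c
#include <linux/rwsem.h>
#include <linux/errno.h>

static int alloc_from_vm(struct rw_semaphore *map_sema,
                         int (*balloc)(unsigned long len, unsigned long *addr),
                         unsigned long len, unsigned long *addr)
{
    int err;

    down_read(map_sema);

    err = balloc(len, addr);
    if (err) {
        up_read(map_sema);   /* previously missed on the out-of-memory path */
        return err;
    }

    /* ... set up the mapping ... */

    up_read(map_sema);
    return 0;
}
```
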
gops->gr.detect_sm_arch was not populated for vgpu. Also,
populate some members of the PMU VM struct as they are used
to report GPU characteristics to userspace.
Bug 1576949
Change-Id: I9ddc361d1418b942da97a82b553aac81f5f51182
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/601931
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

Semaphore fences use int for timeout. It should be long instead.
Bug 1567274
Change-Id: Ia2b2c5ceeb03b4d09c1d8933ce33310356dd7e01
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/595980
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
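
The kernel wait helpers take and return a long jiffy count, so an int timeout can truncate. A sketch of a corrected signature:

```c
#include <linux/wait.h>
#include <linux/errno.h>
#include <linux/types.h>

/* Timeout is in jiffies; long matches wait_event_*_timeout(), which both
 * takes and returns a long, so nothing is truncated. */
static int semaphore_wait(wait_queue_head_t *wq, bool *signaled, long timeout)
{
    long remain;

    remain = wait_event_interruptible_timeout(*wq, *signaled, timeout);
    if (remain == 0)
        return -ETIMEDOUT;
    if (remain < 0)
        return remain;    /* interrupted */
    return 0;
}
```
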
Enable gpu rail gating by setting rail-gating delay to
500msec instead of INT_MAX.
Bug 1552464
Bug 200040882
Change-Id: I64e779fc5b3a0c04997d8874025c71812948602a
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: http://git-master/r/552700
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Mitch Luban <mluban@nvidia.com>

nvgpu exposes debug dump functionality. Currently this function lacks
NULL pointer checks, so in cases where the driver is compiled but the
device is disabled, the driver crashes the kernel.
This patch adds the missing NULL pointer check.
Change-Id: I32acb5cad62b2a29603d6439a5c7e45e016235dd
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/599370
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Mohan Nimaje <mnimaje@nvidia.com>
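
The guard itself is small; a sketch under assumed names (the drvdata lookup stands in for however the real dump entry point finds its device state):

```c
#include <linux/platform_device.h>

struct gk20a_sketch {
    int powered_on;
};

static void gk20a_debug_dump_sketch(struct platform_device *pdev)
{
    struct gk20a_sketch *g = platform_get_drvdata(pdev);

    /*
     * When the driver is built but the device never probed (device
     * disabled), drvdata is NULL; bail out instead of dereferencing it.
     */
    if (!g)
        return;

    /* ... walk engine state and print it ... */
}
```
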
Rewrite gr_gm20b_ctx_state_floorsweep() to include the register writes
necessary for gm20b tpc floorsweeping.
This includes:
- update the loop to write gr_gpc0_tpc0_sm_cfg_r()
  and gr_gpc0_gpm_pd_sm_id_r()
- for gr_pd_num_tpc_per_gpc_r(i), only the register with i = 0 needs
  to be written, and the value written is the tpc count in that gpc
- gr_fe_tpc_fs_r() needs to hold the logical list of TPCs after
  floorsweeping; get this value from pes_tpc_mask
- gr_cwd_gpc_tpc_id_tpc0_f() and gr_cwd_sm_id_tpc0_f() also refer to
  logical ids, hence there is no need to check tpc_fs_mask when
  configuring these registers
Bug 1513685
Change-Id: I82dc36a223fbd21e814e58e4d67738d7c63f04a7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/601117
Reviewed-by: Sachin Nikam <snikam@nvidia.com>

Follow the steps below to configure the number of active TPCs:
echo 1 > /sys/devices/platform/host1x/gpu.0/force_idle
echo 0x1/0x2/0x3 > /sys/devices/platform/host1x/gpu.0/tpc_fs_mask
echo 0 > /sys/devices/platform/host1x/gpu.0/force_idle
where,
0x1 : disable TPC1
0x2 : disable TPC0
0x3 : both TPCs active
Also, add an API set_gpc_tpc_mask to update the TPCs, and call it after
the sysfs "tpc_fs_mask" update.
Once the fuses are updated for the new TPC settings, GR and the golden
image need to be reconfigured. Hence, clear the gr->sw_ready and
golden_image_initialized flags.
Also, initialize gr->tpc_count = 0 each time in
gr_gk20a_init_gr_config(), otherwise it keeps accumulating the tpc count.
Bug 1513685
Change-Id: Ib50bafef08664262f8426ac0d6cbad74b32c5909
Signed-off-by: Kevin Huang <kevinh@nvidia.com>
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/552606
Reviewed-by: Sachin Nikam <snikam@nvidia.com>

Bug 1555318
Change-Id: I80655e047963619b5a3d7e9155db13c9396417fe
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-on: http://git-master/r/598970
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>

GM20b GPCPLL NA mode should not be enabled on Tegra21 parts with
speedo revision 0 or 1, even when CONFIG_TEGRA_USE_NA_GPCPLL is set.
Correspondingly, the non-NA GPU DVFS table must be selected in this case.
To accommodate this restriction, GPU speedo id 1 was added, and parts
with revision 2 and above were mapped to this new speedo id. Speedo id 0
is kept for parts with revision 0 or 1. Only the non-NA DVFS table is
selected for speedo id 0. For parts with speedo id 1, either the non-NA
or the NA-mode DVFS table can be selected by the
CONFIG_TEGRA_USE_NA_GPCPLL setting.
The GM20b GPCPLL mode selection procedure is updated accordingly, so
that NA mode is disabled for speedo id 0 and selected for speedo id 1 by
CONFIG_TEGRA_USE_NA_GPCPLL. The latter takes precedence over the GPCPLL
ADC calibration fuses: if the config option is set and the part has
speedo id 1, NA mode is enabled even if the calibration fuses are not
burnt (less accurate s/w self-calibration is used in this case).
Bug 1555318
Change-Id: I3948cb945206d0bc0f9f2bb6da5505c50ffc2af1
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-on: http://git-master/r/594718
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
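
The resulting selection logic boils down to roughly this (the helper name is illustrative):

```c
#include <linux/kconfig.h>
#include <linux/types.h>

/* Decide whether the GPCPLL should run in noise-aware (NA) mode. */
static bool gpcpll_use_na_mode(int gpu_speedo_id)
{
    /* Speedo id 0 (revision 0/1 parts): NA mode is never enabled. */
    if (gpu_speedo_id == 0)
        return false;

    /*
     * Speedo id 1 (revision 2+ parts): CONFIG_TEGRA_USE_NA_GPCPLL decides,
     * and it takes precedence over the ADC calibration fuses -- without
     * burnt fuses, the less accurate s/w self-calibration is used.
     */
    return IS_ENABLED(CONFIG_TEGRA_USE_NA_GPCPLL);
}
```
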
Replace gk20a allocator with Linux bitmap allocator.
Change-Id: Iba5e28f68ab5cf19e2c033005efd7f9da6e4a5b6
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/554184

Implement NVGPU_AS_IOCTL_GET_VA_REGIONS, which returns a list of GPU VA
regions for different page sizes. This is required by userspace for
safe fixed-address address space allocation.
Bug 1551752
Change-Id: I63ddde30935db2471bec498dae0caa870e89c1a5
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: http://git-master/r/590814
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

Add a sanity check for the big_page_size parameter to avoid invoking
gk20a_init_vm() with a bogus big page size, potentially hitting a
BUG_ON there. Also, reorganize the code a bit to avoid memory leak in
case of bogus big page size, and properly forward the return value
from gk20a_init_vm().
Change-Id: I4eeada75415d2e9539b5e8859099cce35cd86db3
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: http://git-master/r/594469
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
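
A sketch of the kind of validation meant here, with an illustrative whitelist of accepted sizes (the driver's exact check may differ):

```c
#include <linux/errno.h>
#include <linux/log2.h>
#include <linux/sizes.h>
#include <linux/types.h>

/* Reject bogus big page sizes before they reach gk20a_init_vm(). */
static int validate_big_page_size(u32 big_page_size)
{
    if (big_page_size == 0)
        return 0;                 /* 0 means "use the default" */

    if (!is_power_of_2(big_page_size))
        return -EINVAL;

    /* Illustrative whitelist; the hardware supports a small fixed set. */
    if (big_page_size != SZ_64K && big_page_size != SZ_128K)
        return -EINVAL;

    return 0;
}
```
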
gk20a_vm_alloc_share() fails when the default big page size is
requested but ops.mm.set_big_page_size is unset. Rework the logic a
bit to allow userspace to explicitly request the default big page
size, too.
Change-Id: I2a28c6d979fbf1dde5559ce9eb5f1310d232e27f
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: http://git-master/r/590456
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

Add the following info into GPU characteristics: available big page
sizes, support indicators for sync fence fds and cycle stats, gpc
mask, SM version, SM SPA version and warp count, and IOCTL interface
levels. Also, add new IOCTL to fetch TPC masks.
Bug 1551769
Bug 1558186
Change-Id: I8a47d882645f29c7bf0c8f74334ebf47240e41de
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: http://git-master/r/562904
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

Comptags are assigned per 128k. For 64k pages this means we need to
assign the same index to two pages. Change the logic to use byte sizes.
Change-Id: If298d6b10f1c1cad8cd390f686d22db103b02d12
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/594403
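
Put differently, one comptag line covers 128 KB regardless of page size, so the index is derived from the byte offset rather than from a page number. A sketch of the arithmetic (names are illustrative):

```c
#include <linux/sizes.h>
#include <linux/types.h>

/* One comptag line covers 128 KB of the surface, independent of page size. */
#define COMP_TAG_LINE_SIZE	SZ_128K

static u32 comptag_index(u32 first_comptag, u64 byte_offset)
{
    /*
     * With 64 KB pages, two consecutive pages fall into the same 128 KB
     * comptag line, so indexing by page number would be wrong; indexing
     * by byte offset handles 4 KB and 64 KB pages alike.
     */
    return first_comptag + (u32)(byte_offset / COMP_TAG_LINE_SIZE);
}
```
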
Some kernel APIs rely on user space knowing class numbers. Allow
querying the numbers from the kernel.
Bug 1567274
Change-Id: Idec2fe8ee983ee74bcbf9dfc98f71bbcc1492cfb
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/594402

Use the correct negative error code, -ENOMEM, when gk20a_gmmu_map runs
out of memory.
Change-Id: I4fa8a2cf359a5c98cebdf64d4e3fcc96f478f779
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/594397
Reviewed-by: Jussi Rasanen <jrasanen@nvidia.com>
Tested-by: Jussi Rasanen <jrasanen@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

When using CDE firmware v1, combine H and V swizzling passes into one
pushbuffer submission. This removes one GPU context switch, almost
halving the time taken for swizzling.
Map only the compbit part of the destination surface.
Bug 1546619
Change-Id: I95ed4e4c2eefd6d24a58854d31929cdb91ff556b
Signed-off-by: Jussi Rasanen <jrasanen@nvidia.com>
Reviewed-on: http://git-master/r/553234
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

If a channel timeout occurs, reload only the particular context/channel
where the timeout occurred, instead of destroying whole cde. Reloading
happens by allocating a replacement context and marking the offending
channel as soon-to-be-deleted.
Clean up the code by using two separate lists for free and used
contexts. Rename channel deallocation/allocation functions to better
describe what they do, and annotate the functions that need locking.
Also do not wait for channel idle before submitting, since the acquired
context has a ready channel already.
Bug 200046882
Change-Id: I4155a85ea0ed79e284309eb2ad0042df3938f1e2
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/591235
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
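
A sketch of the two-list bookkeeping with generic list_head fields (the cde_ctx layout and function names are illustrative):

```c
#include <linux/list.h>
#include <linux/mutex.h>

struct cde_ctx {
    struct list_head list;
};

struct cde_app {
    struct mutex mutex;
    struct list_head free_contexts;
    struct list_head used_contexts;
};

/* Must be called with app->mutex held. */
static struct cde_ctx *cde_get_free_context(struct cde_app *app)
{
    struct cde_ctx *ctx;

    if (list_empty(&app->free_contexts))
        return NULL;            /* caller allocates a replacement context */

    ctx = list_first_entry(&app->free_contexts, struct cde_ctx, list);
    list_move(&ctx->list, &app->used_contexts);
    return ctx;
}

/* Must be called with app->mutex held. */
static void cde_put_context(struct cde_app *app, struct cde_ctx *ctx)
{
    list_move(&ctx->list, &app->free_contexts);
}
```
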
Remove both bar1 and pmu.
Bug 1476801
Change-Id: I0c194db06b576083ddaab3726b8575ebce473d84
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/592114
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

Trying to kfree pmu.vm or bar1.vm is not allowed, since they are not
directly allocated. Separate the vm kfree from the actual vm support
removal, so that they can be done individually.
Change-Id: I7628f546b94e0de909371ce315e4cb065e5ef953
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/592112
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

Some vm's do not have big pages.
Bug 1476801
Change-Id: Ic82ca7a1380834ea30582631af224c81fd01e4bb
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/592113
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

Bug 1555318
Change-Id: I0338e5d46c7f7d910faada0205dccf28aa62d6c2
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-on: http://git-master/r/594746
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>

In gk20a_do_idle(), we wait for platform->railgate_delay to allow the
GPU to go into rail gate. But sometimes we set
platform->railgate_delay = INT_MAX to disable GPU rail gating while
still allowing it to suspend during a low power state.
Due to this, the force_idle API fails (it waits for INT_MAX).
To fix this, allow forcing a CAR reset instead of rail gating with the
flag "force_reset_in_do_idle" defined in gk20a_platform.
Set this flag for gm20b until we fix the railgate_delay.
Bug 1517584
Change-Id: I031aa56f87d4db3727e2c3a3e5eeaf18503dd449
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/593704
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

bug 200048467
Change-Id: I39f85a638b6bc97442ebf8e4a78e07c8575e4b20
Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
Reviewed-on: http://git-master/r/592751
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

Manually removed duplicate entries in the regops whitelist.
Once the RM tools are available, the whitelist update will happen
through a script.
Bug 1500195
Change-Id: I913c48365e43febcd350a9bfc73d42a27f24e2f7
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: http://git-master/r/592972
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

Do not make PMU ELPG calls if ELPG should be disabled. Also skip
initialization of the PMU ucode if the PMU is disabled.
Bug 1567274
Change-Id: Ia9cd3b553c358142ee05a1b0e0832f9412f7cf17
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/593335
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>

Fix the sparse warnings below:
warning: Using plain integer as NULL pointer
warning: symbol <variable/function> was not declared. Should it be static?
warning: Initializer entry defined twice
Also, remove dead functions.
Bug 1573254
Change-Id: I29d71ecc01c841233cf6b26c9088ca8874773469
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/593363
Reviewed-by: Amit Sharma (SW-TEGRA) <amisharma@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
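
For reference, the typical shape of each class of fix, shown as generic examples rather than the actual nvgpu hunks:

```c
#include <linux/types.h>

/* "Using plain integer as NULL pointer": use NULL, not 0, for pointers. */
static int *some_ptr = NULL;        /* was: = 0 */

/* "symbol was not declared. Should it be static?": give file-local helpers
 * internal linkage instead of leaving them implicitly global. */
static int helper_used_only_here(int x)
{
    return x * 2;
}

/* "Initializer entry defined twice": keep a single initializer per field. */
struct ops { int (*fn)(int); };
static struct ops my_ops = {
    .fn = helper_used_only_here,    /* was listed twice in the original */
};
```
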
During gpu suspend, cancel all pending delayed cde work
to avoid issues of scheduling this delayed work
during suspend/resume when gpu is not ready.
Bug 1574000
Change-Id: I2b6bfa489435a781dc576a077f9af01b1e1628ce
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: http://git-master/r/593557
Reviewed-by: Shridhar Rasal <srasal@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Tested-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>

To assert reset on the GPU, we store the "gpu_ref" clock in
platform->clk[0] and use it to assert/deassert reset. But for gm20b,
"gpu_ref" is no longer resettable.
To fix this, add two callbacks to gk20a_platform:
.reset_assert and .reset_deassert
Also, add a pointer "clk_reset" to store the clock which needs to be
reset.
In the gk20a-specific implementation, we continue to reset
platform->clk[0].
In the gm20b-specific implementation, we first request the "gpu_gate"
clock, store it, and use it to assert reset.
Bug 1513685
Bug 1517584
Change-Id: I15a583a4a07eb663b442084be8b8c7d0c7c7a142
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
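
A sketch of the callback split described above; tegra_periph_reset_assert()/_deassert() are assumed to be the downstream Tegra helpers used for the reset, and the struct layout is illustrative:

```c
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/platform_device.h>

/* Downstream Tegra helpers; the header providing them varies by kernel. */
extern void tegra_periph_reset_assert(struct clk *c);
extern void tegra_periph_reset_deassert(struct clk *c);

struct gk20a_platform_sketch {
    struct clk *clk_reset;    /* clock the reset is asserted through */
    void (*reset_assert)(struct platform_device *pdev);
    void (*reset_deassert)(struct platform_device *pdev);
};

/* gm20b flavour: "gpu_ref" is no longer resettable, so use "gpu_gate". */
static void gm20b_reset_assert(struct platform_device *pdev)
{
    struct gk20a_platform_sketch *platform = dev_get_drvdata(&pdev->dev);

    if (!platform->clk_reset)
        platform->clk_reset = clk_get(&pdev->dev, "gpu_gate");

    if (!IS_ERR_OR_NULL(platform->clk_reset))
        tegra_periph_reset_assert(platform->clk_reset);
}
```
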
Bug 1572701
Change-Id: Id135eb2328765d00349b478d695914f7f8c5edf0
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/592095
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit

vm->gmmu_page_sizes was not initialized properly in the
vgpu case, leading to gmmu map failures.
Bug 1570878
Change-Id: I16c371f65d884f59d9c9f60c7acd391b917d04ed
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>

vgpu "poweron" and "poweroff" routines now get invoked
using the PM domain callbacks, instead of the obsolete
gk20a_get_client/gk20a_put_client routines.
Bug 1570878
Change-Id: I9a5254936904f75cb3c8a14c2bf5066f919b6588
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/590492
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

Regenerate HW headers after adding SM debugger registers.
Change-Id: Icc47c11f8e9ff52c0cf1f3a54233fb781c2c2b67
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>

Add WARN_ON to a critical error condition to get a backtrace dump.
Bug 200046882
Change-Id: I76c4186024547c6e89f1465612fe17f44e27eefe
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>

sysfs_remove_link's first argument expects the kobj of the directory
where the link resides, not the kobj of the link itself.
Change-Id: I89f7d681135e8eb0ff16406271cd19bf9c04f185
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/592111
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
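
To make the asymmetry concrete, a generic sketch: the remove call takes the same parent kobject the link was created under, plus the link name:

```c
#include <linux/kobject.h>
#include <linux/sysfs.h>

static int demo_make_link(struct kobject *parent, struct kobject *target)
{
    /* Creates parent/"gpu_link" pointing at target. */
    return sysfs_create_link(parent, target, "gpu_link");
}

static void demo_drop_link(struct kobject *parent)
{
    /*
     * The first argument is the directory (parent) kobject in which the
     * link lives -- not a kobject representing the link itself.
     */
    sysfs_remove_link(parent, "gpu_link");
}
```
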
Protect against callback races from spurious gk20a channel updates by testing if
the channel update callback still exists when in the scheduled work
(instead of only when scheduling the work to the queue), and by
canceling the work when the channel is freed. Protect access to the
callback and its data by accessing them together inside
spinlock-protected regions.
Bug 200051384
Change-Id: Ib4e1571c35f662195e1dec1e362df32ddc099eb3
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/592026
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
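
A sketch of the locking pattern: the callback pointer and its data are snapshotted together under a spinlock, the work re-checks the pointer before calling it, and teardown cancels the work (field names are illustrative):

```c
#include <linux/kernel.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct channel_update_cb {
    spinlock_t lock;
    void (*callback)(void *data);
    void *data;
    struct work_struct work;
};

static void channel_update_worker(struct work_struct *work)
{
    struct channel_update_cb *cb =
        container_of(work, struct channel_update_cb, work);
    void (*fn)(void *data);
    void *data;

    /* Snapshot callback and data together, under the lock. */
    spin_lock(&cb->lock);
    fn = cb->callback;
    data = cb->data;
    spin_unlock(&cb->lock);

    /* A spurious update may race with teardown; re-check before calling. */
    if (fn)
        fn(data);
}

static void channel_update_free(struct channel_update_cb *cb)
{
    spin_lock(&cb->lock);
    cb->callback = NULL;
    cb->data = NULL;
    spin_unlock(&cb->lock);

    /* Make sure no queued instance runs after the callback is gone. */
    cancel_work_sync(&cb->work);
}
```
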
Instead of current preallocated array plus dynamically allocated
temporary contexts, use a linked list in LRU fashion, always storing
free contexts at the beginning of the list. Initialize the preallocated
contexts to the list and store dynamically allocated temporaries there
too for quick reuse as needed, with a delayed scheduled work for
deleting temporaries when the high load has diminished.
Bug 200040211
Change-Id: Ibc75a0150109ec9c44b2eeb74607450990584b18
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/562856
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
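
A sketch of the release side of such an LRU scheme: a released context goes to the head of the free list for quick reuse, and a temporary context additionally arms a delayed deleter (names and the delay are illustrative):

```c
#include <linux/jiffies.h>
#include <linux/list.h>
#include <linux/types.h>
#include <linux/workqueue.h>

#define CTX_DELETE_DELAY_MS  1000  /* illustrative idle period */

struct cde_ctx {
    struct list_head list;
    struct delayed_work ctx_deleter_work;
    bool is_temporary;
};

/* Must be called with the cde mutex held. */
static void cde_release_context(struct list_head *free_list,
                                struct cde_ctx *ctx)
{
    /* Head of the list = most recently used, so reuse picks it up first. */
    list_move(&ctx->list, free_list);

    /*
     * Temporaries are only deleted once the high load has subsided: the
     * deleter fires after the context has sat idle for a while.
     */
    if (ctx->is_temporary)
        schedule_delayed_work(&ctx->ctx_deleter_work,
                              msecs_to_jiffies(CTX_DELETE_DELAY_MS));
}
```
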
Instead of gk20a_clk_get_rate() use the generic clk_get_rate().
Bug 1567274
Change-Id: If955790408d2f4a5d917ea3993573ac3f254c7d3
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/592094
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>

With CG enabled, the FIFO sometimes could not be idled during firmware
load.
Bug 200042729
Change-Id: I43d7551c0c7c19314c52ac5f678afed8c6df6415
Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
Reviewed-on: http://git-master/r/559077
Reviewed-by: Automatic_Commit_Validation_User

Kernel support for allowing a GPU debugger to suspend and resume
SMs. Invocation of "suspend" on a given channel will suspend all
SMs if the channel is resident, else remove the channel from the
runlist. Similarly, "resume" will either resume all SMs if the
channel was resident, or re-enable the channel in the runlist.
Change-Id: I3b4ae21dc1b91c1059c828ec6db8125f8a0ce194
Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
Signed-off-by: Mayank Kaushik <mkaushik@nvidia.com>
Reviewed-on: http://git-master/r/552115
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
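
In rough, simplified form (every helper here is a hypothetical stand-in for the driver's internals, and residency is re-checked rather than remembered):

```c
#include <linux/types.h>

/* Hypothetical helpers standing in for the real driver internals. */
bool channel_is_resident(int chid);
void suspend_all_sms(void);
void resume_all_sms(void);
void remove_channel_from_runlist(int chid);
void reenable_channel_in_runlist(int chid);

/* Debugger "suspend" on a channel. */
static void dbg_suspend_channel(int chid)
{
    if (channel_is_resident(chid))
        suspend_all_sms();                  /* channel owns the engine */
    else
        remove_channel_from_runlist(chid);  /* keep it from being scheduled */
}

/* Debugger "resume" mirrors the suspend decision. */
static void dbg_resume_channel(int chid)
{
    if (channel_is_resident(chid))
        resume_all_sms();
    else
        reenable_channel_in_runlist(chid);
}
```
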
Since the number of TPCs is different between GM20B and GK20a,
the function to look up the number of TPCs needs to be halified.
Change-Id: I19dab9a7105814f86c08c92283a0bb70abb6aa00
Signed-off-by: Mayank Kaushik <mkaushik@nvidia.com>
Reviewed-on: http://git-master/r/500064
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>