The FB physical access register for simulation was programmed in the GR
implementation. Move it to FB where it belongs.
JIRA NVGPU-714
Change-Id: Ic5146a61c7d45eadffdb4f3b6b08906bfcdbc224
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1772915
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In some use cases the client will disable and preempt a TSG and then re-enable
it using the provided IOCTLs.
If there is only one context getting re-enabled and there is no other job
submission in parallel, the runlist fetcher will just sleep until a doorbell is
received next time.
This causes the above-mentioned test cases to stall after re-enabling the TSG
until someone submits a new job and triggers a doorbell.
Fix this by explicitly triggering a doorbell in the vgpu code after we enable
all channels in the TSG.
Bug 2205192
Change-Id: I25d643e06152adc6aaf874baf610316f6cd8f13f
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1772948
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Enable replayable faults only for contexts that request them. This required
moving the subcontext initialization code to happen later.
Also fix signedness issues in the definition of the flags.
JIRA NVGPU-714
Change-Id: I472004e13b1ea46c1bd202f9b12d2ce221b756f9
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1773262
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Do not allow enabling replayable page faults in the instance block.
JIRA NVGPU-714
Change-Id: I9c48497e31798ab354a86d460a299e65774b388a
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1772863
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We do not utilize or test replayable page faults in Pascal. Remove the
code related to that.
JIRA NVGPU-714
Change-Id: I2415bde347f8b018ebf99c3f9038d47c649d9464
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1769697
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gr_gv11b.c had a direct dependency on fb_gv11b.c because it calls FB
to process replayable faults while waiting for SM lockdown. Redirect
that call via a HAL to remove the dependency.
JIRA NVGPU-714
Change-Id: Ie6df3658f06b1f867893bc98fe581c95813f0431
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1772884
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gp10b and gv11b variants of remove_bar2_vm are now identical, so delete
the gv11b version and use only gp10b version.
JIRA NVGPU-714
Change-Id: Ie98cb29803358ddcad8aae2cf865f3baeddebfb1
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1773007
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The gops.xve.enable_shadow_rom and gops.xve.disable_shadow_rom HALs can be
NULL on some platforms.
Execute them only if they are defined.
Jira NVGPUT-120
Change-Id: I683d74a850372f442291a419951a2376805eb1e5
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1772559
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
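For illustration only (not part of the change itself), the guard described
above amounts to a NULL check before invoking the optional HAL. The struct
and function names in this standalone sketch are simplified stand-ins, not
the real nvgpu definitions:

    #include <stdio.h>

    /* Simplified stand-in for a per-chip HAL table; on some chips the
     * shadow-ROM hooks are simply left NULL. */
    struct xve_ops {
        void (*enable_shadow_rom)(void);
        void (*disable_shadow_rom)(void);
    };

    static void chip_a_enable_shadow_rom(void)  { printf("shadow ROM on\n"); }
    static void chip_a_disable_shadow_rom(void) { printf("shadow ROM off\n"); }

    static void toggle_shadow_rom(const struct xve_ops *ops, int enable)
    {
        /* Call the HAL only if the platform defines it. */
        if (enable && ops->enable_shadow_rom != NULL)
            ops->enable_shadow_rom();
        else if (!enable && ops->disable_shadow_rom != NULL)
            ops->disable_shadow_rom();
    }

    int main(void)
    {
        struct xve_ops chip_a = {
            chip_a_enable_shadow_rom, chip_a_disable_shadow_rom
        };
        struct xve_ops chip_b = { NULL, NULL }; /* hooks not implemented */

        toggle_shadow_rom(&chip_a, 1);
        toggle_shadow_rom(&chip_b, 1); /* skipped instead of crashing */
        return 0;
    }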
- Commit c61e21c868246faf7a9ffc812590941fc362af17 fixed a race condition in
the PMU state transition.
- The race condition is that the PMU response (the intr callback for messages)
can run faster than the kthread posting commands to the PMU, and thus the PMU
message callback may skip an important PMU state change.
- Commit c61e21c868246faf7a9ffc812590941fc362af17 introduced a fix where the
PMU state change was updated only from the callback, while other places can
only update the pmu_state variable.
- However, this commit introduced a regression:
- When the PMU state is PMU_STATE_INIT_RECEIVED, we loop over every engine
supported by the GPU: if the state is PMU_STATE_INIT_RECEIVED, change it to
PMU_STATE_ELPG_BOOTING and init ELPG; if the state is not
PMU_STATE_INIT_RECEIVED, throw the error "PMU INIT not received".
- Now, if the GPU supports multiple engines, the first engine will see that
pmu_state is PMU_STATE_INIT_RECEIVED and change it to PMU_STATE_ELPG_BOOTING.
From the second engine onwards, since the state has already changed to
PMU_STATE_ELPG_BOOTING, all engines except the first throw the error
"PMU INIT not received".
- This patch fixes the issue by changing the PMU state from
PMU_STATE_INIT_RECEIVED to PMU_STATE_ELPG_BOOTING only once.
Bug 200372838
JIRA EVLR-2164
Change-Id: Ic8c954d14acb1d6ec3adcbc4bcf4d4745542d9f0
Signed-off-by: Deepak Bhosale <dbhosale@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1769814
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-by: Aparna Das <aparnad@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
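A minimal standalone sketch of the regression and the fix described above.
Apart from the two PMU state names, everything here is hypothetical, and the
real code does far more per engine than print a line:

    #include <stdio.h>

    enum pmu_state { PMU_STATE_INIT_RECEIVED, PMU_STATE_ELPG_BOOTING };

    /* Regressed shape: every engine re-checks the state, but only the
     * first engine can still see PMU_STATE_INIT_RECEIVED. */
    static void init_elpg_per_engine(enum pmu_state *state, int num_engines)
    {
        int e;

        for (e = 0; e < num_engines; e++) {
            if (*state == PMU_STATE_INIT_RECEIVED) {
                *state = PMU_STATE_ELPG_BOOTING;
                printf("engine %d: booting ELPG\n", e);
            } else {
                printf("engine %d: PMU INIT not received\n", e); /* spurious */
            }
        }
    }

    /* Fixed shape: transition the state exactly once, then handle each
     * engine. */
    static void init_elpg_once(enum pmu_state *state, int num_engines)
    {
        int e;

        if (*state != PMU_STATE_INIT_RECEIVED) {
            printf("PMU INIT not received\n");
            return;
        }
        *state = PMU_STATE_ELPG_BOOTING;

        for (e = 0; e < num_engines; e++)
            printf("engine %d: booting ELPG\n", e);
    }

    int main(void)
    {
        enum pmu_state s1 = PMU_STATE_INIT_RECEIVED;
        enum pmu_state s2 = PMU_STATE_INIT_RECEIVED;

        init_elpg_per_engine(&s1, 2); /* engine 1 reports the bogus error */
        init_elpg_once(&s2, 2);       /* both engines boot ELPG */
        return 0;
    }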
While removing the nvgpu driver, devm-mapped reg mappings are released on
driver_unregister. For iGPU, these regs are also explicitly unmapped with
iounmap(). This results in "Trying to vfree() nonexistent vm area" warnings
on driver removal.
Address this by using the devm* variants to map all IO regions of both iGPU
and dGPU, and let driver unregister release these mappings.
Also, lock out GPU regs in the driver removal path.
Bug 1987855
Change-Id: I0388daf90bea3eaf8752255059cfd3ceabf66e7d
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1730539
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In some use cases the client will disable and preempt a TSG and then re-enable
it using the provided IOCTLs.
If there is only one context getting re-enabled and there is no other job
submission in parallel, the runlist fetcher will just sleep until a doorbell is
received next time.
This causes the above-mentioned test cases to stall after re-enabling the TSG
until someone submits a new job and triggers a doorbell.
Fix this by explicitly triggering a doorbell from gv11b_fifo_enable_tsg() after
we enable all channels in the TSG.
Bug 2205192
Change-Id: I08e70e3d0f7e4dc6471e63809e246430cc4200c1
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1772378
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
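Together with the earlier vgpu variant of this fix, the change boils down to
ringing a doorbell once after the channel-enable loop. A rough standalone
sketch, with all structure and helper names hypothetical:

    #include <stdio.h>

    struct channel { int hw_chid; };
    struct tsg { struct channel *channels; int num_channels; };

    static void enable_channel(struct channel *ch)
    {
        printf("enable channel %d\n", ch->hw_chid);
    }

    static void ring_doorbell(struct channel *ch)
    {
        printf("ring doorbell for channel %d\n", ch->hw_chid);
    }

    static void tsg_enable(struct tsg *tsg)
    {
        int i;

        for (i = 0; i < tsg->num_channels; i++)
            enable_channel(&tsg->channels[i]);

        /* Without this, the re-enabled TSG sits idle until some unrelated
         * submit happens to ring a doorbell. */
        if (tsg->num_channels > 0)
            ring_doorbell(&tsg->channels[tsg->num_channels - 1]);
    }

    int main(void)
    {
        struct channel chs[2] = { { 0 }, { 1 } };
        struct tsg tsg = { chs, 2 };

        tsg_enable(&tsg);
        return 0;
    }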
During driver probe, set g->log_mask to the default value of log_mask,
i.e. NVGPU_DEFAULT_DBG_MASK.
Bug 1987855
Change-Id: Ia92fff2427e10f4fa9828b7b8d95f8f7b0276915
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1770805
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Allocations in init_runlist can fail. Check for such a failure while fifo
setup is being done.
Bug 1987855
Change-Id: I1771a15ebeac81ab2e3ebc9a75363445a0b6f20d
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1770801
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
__dma_dbg() logs its own function and line details. Update it to report the
caller's details instead.
Bug 1987855
Change-Id: I51913b0c57c12e11880699caed557da9491304cf
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1771511
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
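The usual way to do this is to capture __func__ and __LINE__ at the call
site through a wrapper macro. A standalone sketch with illustrative names
(this is not the actual nvgpu macro):

    #include <stdio.h>

    /* The helper takes the caller's location explicitly... */
    static void dma_dbg_at(const char *func, int line, const char *msg)
    {
        printf("[dma] %s:%d: %s\n", func, line, msg);
    }

    /* ...and the macro captures __func__/__LINE__ where the log call is
     * made, instead of inside the helper itself. */
    #define dma_dbg(msg) dma_dbg_at(__func__, __LINE__, (msg))

    static void alloc_buffer(void)
    {
        dma_dbg("allocating sysmem buffer"); /* reports alloc_buffer:NN */
    }

    int main(void)
    {
        alloc_buffer();
        return 0;
    }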
gv11b_mm_fault_info_mem_destroy() and gv11b_mm_mmu_hw_fault_buf_deinit()
serve a similar purpose: disabling hub interrupts and deinitializing memory
related to MMU fault handling.
Of the two, the latter was called from BAR2 deinitialization, and the former
from nvgpu_remove_mm_support().
Combine the functions and keep only the call from nvgpu_remove_mm_support().
This way BAR2 deinitialization can be combined with the gp10b version.
JIRA NVGPU-714
Change-Id: I4050865eaba404b049c621ac2ce54c963e1aea44
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1769627
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename os/linux/vidmem.c to os/linux/dmabuf_vidmem.c. The code mainly deals
with interfacing with the Linux dmabuf framework, and its responsibilities
were getting confused with common/mm/vidmem.c.
Also move the header include/nvgpu/linux/vidmem.h to
os/linux/dmabuf_vidmem.h. It does not expose any interface to code outside
Linux.
Change-Id: I2cb1057a8934d5cb5c5860023aa12f8f048a6684
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1768261
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
vidmem.h had a forward declaration for the Linux-specific struct
work_struct. Removed that.
vidmem.h also #included nvgpu_mem.h even though there was no use for it.
As a follow-up, css_gr_gk20a.h referred to nvgpu_mem but did not #include it,
so added that.
Change-Id: Ifea88adae86ed95302465641821fbb107d7cc233
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1768260
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
ioctl_dbg.h contained several unnecessary #includes. Replace them
with forward declarations. Also move all definitions only used
by ioctl_dbg.h to ioctl_dbg.c.
Change-Id: I799c8574e985f394eb653a7b7c54816ff409b058
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1768259
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The function nvgpu_sgt_create() does not exist; we never create empty
nvgpu_sgts. Delete its declaration.
Change-Id: Ib3ea975b442ffd8d50e6e1002ace10d5642f3613
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1770666
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gr_gv100_get_active_fpba_mask(), we currently use the num_fbpas passed by
the caller, which is usually the litter (max possible on h/w) value.
We should instead read the actual number of FBPAs from h/w rather than using
the litter value.
Jira NVGPUT-117
Change-Id: I6ecd4db0fd939e1dfebf31d27e0022ae02809399
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1762721
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In nvgpu_dma_alloc_flags_vid_at(), we check the pending bytes of vidmem that
are yet to be cleared by reading g->mm.vidmem.bytes_pending.atomic_var.
If there is something still to be cleared we return EAGAIN, otherwise we
return ENOMEM.
But we store this value in an "int before_pending", which evaluates to zero
for sizes like 4GB, and we end up returning ENOMEM instead of EAGAIN.
Fix this by declaring the before_pending variable as u64.
Bug 200427361
Change-Id: I6ffe977e3663a5135fa17699ecafe78ac90d9314
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1770384
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
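A standalone sketch of the truncation; the variable name mirrors the commit
message, the rest is illustrative. A pending count of exactly 4GB becomes
zero in a 32-bit int, so the EAGAIN path is never taken:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t bytes_pending = 4ULL * 1024 * 1024 * 1024; /* 4GB not yet cleared */

        int before_pending_int = (int)bytes_pending;  /* 2^32 does not fit: 0 */
        uint64_t before_pending_u64 = bytes_pending;  /* keeps the full value */

        /* With the int, the "something is pending, retry" branch is skipped
         * and the caller sees ENOMEM instead of EAGAIN. */
        printf("int: %d -> %s\n", before_pending_int,
               before_pending_int ? "EAGAIN" : "ENOMEM");
        printf("u64: %llu -> %s\n",
               (unsigned long long)before_pending_u64,
               before_pending_u64 ? "EAGAIN" : "ENOMEM");
        return 0;
    }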
Add a new HAL, gops.fb.mmu_invalidate_replay(), to invalidate replay MMU
faults.
Use the existing API gv11b_fb_mmu_invalidate_replay() as this HAL on all
Volta chips.
Bug 2228914
Jira NVGPU-838
Jira NVGPUT-73
Change-Id: I394901857d41499f3ea44023393fe271fb664260
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1767970
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gr_gk20a_split_ppc_broadcast_addr() we convert a PPC broadcast address
into its corresponding list of unicast addresses.
But we use gr.pe_count_per_gpc instead of the actual number of PPCs, which
leads to generating an incorrect list of addresses.
Fix this by using gr.gpc_ppc_count[gpc_num], which gives the correct PPC
count.
Jira NVGPUT-117
Change-Id: If7e7c19244b90cb3c405dcba4ae7a86c782972f7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1767838
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
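In essence the fix changes the loop bound used when expanding the broadcast
address. A simplified standalone sketch; the two field names follow the
commit message, everything else is hypothetical:

    #include <stdio.h>

    /* pe_count_per_gpc is the architectural maximum; gpc_ppc_count[] is
     * what this particular chip actually has per GPC. */
    struct gr_config {
        unsigned int pe_count_per_gpc;
        unsigned int gpc_ppc_count[4];
    };

    static unsigned int ppc_unicast_list(const struct gr_config *gr,
                                         unsigned int gpc_num,
                                         unsigned int base,
                                         unsigned int stride,
                                         unsigned int *out)
    {
        unsigned int ppc, n = 0;

        /* Looping to pe_count_per_gpc generated addresses for PPCs that do
         * not exist on this GPC; the per-GPC count is the correct bound. */
        for (ppc = 0; ppc < gr->gpc_ppc_count[gpc_num]; ppc++)
            out[n++] = base + ppc * stride;
        return n;
    }

    int main(void)
    {
        struct gr_config gr = {
            .pe_count_per_gpc = 3,
            .gpc_ppc_count = { 2, 1, 2, 2 },
        };
        unsigned int list[8];

        printf("GPC0 unicast entries: %u\n",
               ppc_unicast_list(&gr, 0, 0x500000u, 0x200u, list)); /* 2, not 3 */
        return 0;
    }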
- Added the prefix gp106_ to sec2_wait_for_halt() and
sec2_clear_halt_interrupt_status() for the gp106 SEC2 HAL.
- Changed gp106_sec2_wait_for_halt() to read the SEC2 falcon mailbox using
the common falcon mailbox access functions.
- Added a define for the falcon mailbox.
- These changes are done to reuse the gp106 HALs for GPU_NEXT.
Change-Id: Id32a7636d775b482684212ed4ef5d01c8ea65335
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1755618
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Do the SEC2 reset using the common falcon HAL nvgpu_flcn_reset(), passing
the sec2_flcn struct, which holds the chip-specific base address of the SEC2
falcon.
JIRA NVGPUT-111
Change-Id: I2b95262a93644bbefed5b6c46dc73200afd97730
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1755617
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gr_gk20a_find_priv_offset_in_buffer() we currently calculate the offset
of a register in the gpccs segment based on the register address type.
Separate out the sequence that finds the offset in the gpccs segment and move
it to a new API, gr_gk20a_get_offset_in_gpccs_segment().
Introduce a new HAL, gops.gr.get_offset_in_gpccs_segment(), and set the above
API as this HAL.
Call the HAL from gr_gk20a_find_priv_offset_in_buffer() instead of calling
the API directly.
Jira NVGPUT-118
Change-Id: I0df798456cf63e3c3a43131f3c4ca7990b89ede0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1761669
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Bug 200327089
Change-Id: Id7bd2795647f1e29dabe41acb20d0994cdd07958
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1764267
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The g->can_railgate flag is a global constant-ish property like the rest
of the flags behind nvgpu_is_enabled() API, so move it there.
Bug 200327089
Change-Id: Id1f2f16ea1975a03fb56f10c2f3c8c705574e341
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1764266
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
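The nvgpu_is_enabled() style keeps such booleans in one per-GPU flag set
behind a single accessor. A generic standalone sketch of that pattern; the
flag and helper names here are illustrative, not the exact nvgpu definitions:

    #include <stdbool.h>
    #include <stdio.h>

    enum { FLAG_CAN_RAILGATE = 0, FLAG_COUNT };

    struct gpu {
        unsigned long enabled_flags[(FLAG_COUNT + 63) / 64];
    };

    static bool gpu_is_enabled(const struct gpu *g, int flag)
    {
        return (g->enabled_flags[flag / 64] >> (flag % 64)) & 1UL;
    }

    static void gpu_set_enabled(struct gpu *g, int flag, bool val)
    {
        if (val)
            g->enabled_flags[flag / 64] |= 1UL << (flag % 64);
        else
            g->enabled_flags[flag / 64] &= ~(1UL << (flag % 64));
    }

    int main(void)
    {
        struct gpu g = { { 0 } };

        gpu_set_enabled(&g, FLAG_CAN_RAILGATE, true); /* was: g->can_railgate = true */
        if (gpu_is_enabled(&g, FLAG_CAN_RAILGATE))    /* was: if (g->can_railgate) */
            printf("railgating allowed\n");
        return 0;
    }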
Kernel mode submits conflict with user submits, so don't allow them if a
channel user has asked for usermode submit support.
Bug 200145225
Change-Id: I3a99222b09260a1b3e116c6aa86d8da5d380d903
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1767907
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Enable ECC interrupt in Falcon interrupt source
- Enable routing of ECC interrupt to HOST.
code CL: https://git-master.nvidia.com/r/#/c/1758176/
p4 CL# 24408680
Change-Id: Ib43c80be64e29ccbc6b19168e67ac6f4d200b2d8
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1758175
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_mem_rd*() functions were implemented per OS. They also used
nvgpu_pramin_access_batched(), and a big portion of the logic for using
PRAMIN lived in OS-specific code.
Make the implementation of these functions generic. Move all PRAMIN logic to
PRAMIN and simplify the interface provided by PRAMIN.
Change-Id: I1acb9e8d7d424325dc73314d5738cb2c9ebf7692
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1753708
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The last frequency in the local array of all frequencies was missed because
its index was used as the count. This caused the max frequency not to be
listed among the available frequencies when only a few frequencies were
configured in the BPMPFW-DT.
Bug 200381453
Change-Id: I72d000ed1842c41555f2de36209fa4e12188c325
Signed-off-by: Vishruth <vishruthj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1767642
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
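A small standalone illustration of the off-by-one (the frequency values are
made up): using the last index as the count drops the top entry.

    #include <stdio.h>

    int main(void)
    {
        /* Local table of frequencies read from the DT (values made up). */
        unsigned long local[] = { 114750000UL, 216750000UL,
                                  318750000UL, 420750000UL };
        int last_idx = 3;               /* index of the max frequency */

        int count_buggy = last_idx;     /* index used as count: max dropped */
        int count_fixed = last_idx + 1; /* count is one past the last index */

        printf("buggy: %d entries, top = %lu\n",
               count_buggy, local[count_buggy - 1]);
        printf("fixed: %d entries, top = %lu\n",
               count_fixed, local[count_fixed - 1]);
        return 0;
    }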
For V/f point with gpc2clk exactly equal to DVCO min, we observe lower
than expected effective clock. Make sure min frequency for gpc2clk is
greater than DVCO min.
Bug 200412996
Change-Id: I85c33852c56c7a642aa5c85987d2da4147e73c22
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1764923
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When requesting a gpc2clk below the lowest V/f point, the clock arbiter
did not properly adjust the target value to the nearest V/f point. This
could lead to a lower than expected effective frequency. Fixed the
logic to adjust to the nearest V/f point.
Bug 200412996
Change-Id: I36c24b4c081931e2ac54da14d49e46fcb14503e3
(cherry picked from commit 7ed1f8fb39f76208922daa91d00905cdb96b2304)
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1763641
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This explicit incr_wfi has not been used since commit 06be77da376f
("gpu: nvgpu: Do not send WFI when finishing channel").
Change-Id: I0213b0f728f83b483a7dbbef252912555b06815f
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1765407
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the lock release down so that the locked region also covers the
g->ops.mm.gmmu_unmap() call, since it too must be made under the VM lock.
Bug 2156667
Change-Id: I17d819d1341e211a3d0bd0ecb7cf09884eaca767
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1764598
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
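Schematically, the fix just moves the unlock below the unmap call so that
the whole update runs under the VM lock. A standalone sketch with
hypothetical names:

    #include <pthread.h>

    struct vm {
        pthread_mutex_t update_lock;
    };

    static void gmmu_unmap(struct vm *vm, unsigned long long gpu_va)
    {
        /* Walks and edits the page tables; must hold vm->update_lock. */
        (void)vm;
        (void)gpu_va;
    }

    static void unmap_buffer(struct vm *vm, unsigned long long gpu_va)
    {
        pthread_mutex_lock(&vm->update_lock);
        /* ...drop the mapping from the VM's tracking structures... */
        gmmu_unmap(vm, gpu_va);  /* previously ran after the unlock */
        pthread_mutex_unlock(&vm->update_lock);
    }

    int main(void)
    {
        struct vm vm;

        pthread_mutex_init(&vm.update_lock, NULL);
        unmap_buffer(&vm, 0x100000ULL);
        pthread_mutex_destroy(&vm.update_lock);
        return 0;
    }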
gk20a_channel_clean_up_jobs hasn't needed a barrier since commit
d20a501dcbf2 ("gpu: nvgpu: simplify job semaphore release in abort").
Bug 200327089
Change-Id: I64b9e3b7970de232ac553f570b8fd41aec3b7e21
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1764309
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The logic used for selecting frequencies from the achievable frequencies of
the GPU clock picks one frequency in every set of 8.
This reduces the number of available frequencies when the number of
achievable frequencies is small. Change the implementation to choose all
frequencies when the achievable frequency list is small.
Bug 200381453
Change-Id: Ib280d7ccf9b75f88f6c7c6d2666f05e92a0343bd
Signed-off-by: Vishruth <vishruthj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1753289
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
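Roughly, the selection switches from a fixed one-in-8 stride to a stride of
one whenever the achievable list is already small. A standalone sketch with
made-up numbers and a hypothetical size limit:

    #include <stdio.h>

    #define MAX_EXPOSED 16

    /* Keep every 8th achievable frequency, unless the achievable list is
     * already small enough to expose in full. */
    static int select_freqs(const unsigned long *in, int n_in,
                            unsigned long *out)
    {
        int i, n_out = 0;
        int step = (n_in <= MAX_EXPOSED) ? 1 : 8;

        for (i = 0; i < n_in; i += step)
            out[n_out++] = in[i];
        return n_out;
    }

    int main(void)
    {
        unsigned long achievable[12], table[MAX_EXPOSED];
        int i, n;

        for (i = 0; i < 12; i++)
            achievable[i] = 100000000UL + (unsigned long)i * 50000000UL;

        n = select_freqs(achievable, 12, table);
        printf("exposed %d of 12 achievable frequencies (top = %lu)\n",
               n, table[n - 1]);
        return 0;
    }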
nvgpu_mem_get_addr() returns a virtual or physical address depending on the
platform. But we need to explicitly use physical addresses to configure PCI
simulation support, since the simulator expects physical addresses only.
Hence use nvgpu_mem_get_phys_addr() explicitly to configure the msg/send/recv
buffers needed for PCI simulation support.
Jira NVGPUT-41
Change-Id: I6870feef35fe81d43189fa048dc2f7052926bcc4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1756843
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In the vm_area free code, when unreffing the mappings owned by
the vm_area, we need to continue holding the VM lock.
Also add a comment specifying this requirement in the VM code.
Bug 2156667
Change-Id: If0b430f045e4c585fcba2d3176163e5b19be8326
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1763235
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The NVGPU_DMA_NO_KERNEL_MAPPING flag is going away, and these functions
are no longer used. Delete them.
Change-Id: I0084d64c92783dd65306871e5cf6bd6366087caf
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1761581
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_dma_alloc_sys() gives cpu-mapped memory by default. Remove the
explicit calls to map and unmap the sim buffers.
Change-Id: Icf71961c16a8b2f5dae24382cc927c7a802a769a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1761580
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The GPU page tables are always mapped to the CPU now, so they don't need
the nvgpu_mem_{begin,end}() calls.
Change-Id: Ic48eeed3a6f002c78f89ef07922cddf835337de3
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1761579
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Some graphics context buffers are explicitly cleared to zero after
allocation. That's not necessary because the allocator gives
zero-initialized memory already, so remove the clears.
Change-Id: I8f9913605801e35082762e7743762d97f88e1d12
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1761578
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
No need to use the flags variant with no flags.
Change-Id: I14effca4e7071e91230dd89c435843d128e022b1
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1761577
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Now that GR buffers always have a kernel mapping, remove the unnecessary
calls to nvgpu_mem_begin() and nvgpu_mem_end() on these buffers:
- global ctx buffer mem in gr
- gr ctx mem in a tsg
- patch ctx mem in a gr ctx
- pm ctx mem in a gr ctx
- ctx_header mem in a channel (subctx header)
Change-Id: Id2a8ad108aef8db8b16dce5bae8003bbcd3b23e4
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1760599
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Removed the defunct GK20A_PERFMON config from Kconfig, since the code
related to this functionality was removed long ago and DEVFREQ is used
for gpu frequency scaling.
Change-Id: Ic6828117a06f09446b64c789a1475c520ca520f8
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1747883
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Check for valid scaling profile private data before making
calls to bandwidth manager.
Bug 200423741
Change-Id: Iff12b4a26ff0dfb2c32248b325a07e97f2de4e98
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1763601
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Change mclk scaling factor from 2 to 4, as we use dramdiv4 to
measure effective frequency.
Bug 2212598
Change-Id: I154a74712dbedf315f755ee51f75a06795841a1c
Reviewed-on: https://git-master.nvidia.com/r/1761896
(cherry picked from commit 35e26a5165ac08f783420343262e05e1d6d82996)
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1762345
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
struct fence and the fence* APIs have changed to struct dma_fence and
dma_fence* respectively. This patch makes the required changes to avoid
using the struct fence type, preventing API changes in nvgpu.
Bug 200417423
Change-Id: I566de58a3659cbc2495670136dc2fc65862b46e7
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1754164
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Bitan Biswas <bbiswas@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>