path: root/drivers/gpu/nvgpu/gk20a
* gpu: nvgpu: pass pid/tid from os specific code to common code  (Richard Zhao, 2018-04-16)

    The Linux driver runs in the user's process, but the QNX driver has a
    dedicated driver process, so the two have different ways of getting the
    user pid. nvgpu common code expects OS-specific code to pass in the
    pid/tid. CE/CDE open channels for internal use, so the driver pid is
    used for those.

    Jira VQRM-3534
    Change-Id: I892372ac5f1dc4d25f9928d16992bcc659d12a56
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1694145
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

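A rough sketch of the split described above, using hypothetical names and a POSIX stand-in rather than the actual nvgpu API: the OS layer resolves the caller's IDs and the common code only consumes plain arguments.

#include <sys/types.h>
#include <unistd.h>

struct channel_ctx {
    pid_t pid;  /* process that owns the channel */
    pid_t tid;  /* thread that opened it */
};

/* Common code: takes pid/tid as plain arguments, no OS-specific calls. */
static void common_channel_open(struct channel_ctx *ch, pid_t pid, pid_t tid)
{
    ch->pid = pid;
    ch->tid = tid;
}

/* OS-specific wrapper (POSIX here, purely for illustration). QNX would look
 * these up in its own driver-process context; internal CE/CDE channels would
 * pass the driver pid instead. */
static void os_channel_open(struct channel_ctx *ch)
{
    common_channel_open(ch, getpid(), getpid());
}

int main(void)
{
    struct channel_ctx ch;

    os_channel_open(&ch);
    return ch.pid == ch.tid ? 0 : 1;
}
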
* gpu: nvgpu: gv100: consider floorswept FBPA for getting unicast list  (Deepak Nibade, 2018-04-16)

    In gr_gv11b/gk20a_create_priv_addr_table() we do not consider floorswept
    FBPAs and just calculate the unicast list assuming all FBPAs are present.
    This generates an incorrect list of unicast addresses.

    Fix this by introducing a new HAL, ops.gr.split_fbpa_broadcast_addr.
    Set gr_gv100_get_active_fpba_mask() for GV100 and
    gr_gk20a_split_fbpa_broadcast_addr() for the rest of the chips.
    gr_gv100_get_active_fpba_mask() will first get the active FBPA mask and
    generate the unicast list only for active FBPAs.

    Bug 200398811
    Jira NVGPU-556
    Change-Id: Idd11d6e7ad7b6836525fe41509aeccf52038321f
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1694444
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Update clk_fll interface as per chips_a  (Tejal Kudav, 2018-04-12)

    Two new members are added to the fll struct and the code is modified to
    support GV100 VBIOS NAFLL tables. Add g->ops for getting VBIOS clk
    domains.

    JIRA NVGPUGV100-39
    Change-Id: Iaabea893d55d44a272e2bce2b1d525b122cd36f5
    Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1594289
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    Tested-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Add conversion function for uapi submit gpfifo flags  (Sourab Gupta, 2018-04-11)

    The submit gpfifo flags are splattered everywhere inside the nvgpu code.
    Though the usage is inside nvgpu Linux code only, it still needs to be
    removed and replaced with the defines present in common code.

    VQRM-3465
    Change-Id: I901b33565b01fa3e1f9ba6698a323c16547a8d3e
    Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1691979
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

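The shape of such a UAPI-to-common conversion helper might look like the sketch below; the flag names and bit positions are invented, not the real nvgpu defines.

#include <stdint.h>

/* UAPI (ioctl) flag bits -- hypothetical values. */
#define UAPI_SUBMIT_GPFIFO_FLAGS_FENCE_WAIT  (1U << 0)
#define UAPI_SUBMIT_GPFIFO_FLAGS_FENCE_GET   (1U << 1)

/* Common-code flag bits -- deliberately a different encoding. */
#define NVGPU_SUBMIT_FLAGS_FENCE_WAIT        (1U << 4)
#define NVGPU_SUBMIT_FLAGS_FENCE_GET         (1U << 5)

/* Translate ioctl flags into the common-code representation once, at the
 * OS boundary, so the rest of the driver never sees UAPI bits. */
static uint32_t nvgpu_submit_flags_from_uapi(uint32_t uapi_flags)
{
    uint32_t flags = 0U;

    if (uapi_flags & UAPI_SUBMIT_GPFIFO_FLAGS_FENCE_WAIT)
        flags |= NVGPU_SUBMIT_FLAGS_FENCE_WAIT;
    if (uapi_flags & UAPI_SUBMIT_GPFIFO_FLAGS_FENCE_GET)
        flags |= NVGPU_SUBMIT_FLAGS_FENCE_GET;

    return flags;
}

int main(void)
{
    return nvgpu_submit_flags_from_uapi(UAPI_SUBMIT_GPFIFO_FLAGS_FENCE_GET)
           == NVGPU_SUBMIT_FLAGS_FENCE_GET ? 0 : 1;
}
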
* gpu: nvgpu: remove usage of nvgpu_gpfifo  (Sourab Gupta, 2018-04-11)

    Remove the usage of nvgpu_gpfifo splattered across nvgpu, and replace it
    with a struct defined in common code. The usage is still inside Linux,
    but this helps the subsequent unification efforts, e.g. to unify the
    submit path.

    VQRM-3465
    Change-Id: I9e5ac697a0c7f85239ddba319085c09481d20d6b
    Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1691978
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: remove usage of nvgpu_fence  (Sourab Gupta, 2018-04-11)

    Remove the usage of nvgpu_fence splattered across nvgpu, and replace it
    with a struct defined in common code. The usage is still inside Linux,
    but this helps the subsequent unification efforts, e.g. to unify the
    submit path.

    VQRM-3465
    Change-Id: Ic3737450123dfc5e1c40ca5b6b8d8f6b3070aa0d
    Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1691977
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: fix gpc/tpc index for SMPC broadcast conversion  (Deepak Nibade, 2018-04-10)

    In gv11b_gr_egpc_etpc_priv_addr_table(), we call
    gv11b_gr_update_priv_addr_table_smpc() to convert an SMPC broadcast
    address into a list of unicast addresses. But before calling
    gv11b_gr_update_priv_addr_table_smpc() we sometimes incorrectly set
    gpc_num/tpc_num to zero, which leads to generating an incorrect list of
    unicast addresses.

    Remove this incorrect initialization of gpc_num/tpc_num. Also update
    gv11b_gr_egpc_etpc_priv_addr_table() to receive tpc_num along with
    gpc_num.

    Bug 2099717
    Jira NVGPU-580
    Change-Id: Idd4e5f78dbe6ca1800efae93c66355d06417d1f2
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1691373
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: use HAL for chiplet offsetDeepak Nibade2018-04-10
| | | | | | | | | | | | | | | | | | | | | | | | We currently use hard coded values of NV_PERF_PMMGPC_CHIPLET_OFFSET and NV_PMM_FBP_STRIDE which are incorrect for Volta Add new GR HAL get_pmm_per_chiplet_offset() to get correct value per-chip Set gr_gm20b_get_pmm_per_chiplet_offset() for older chips Set gr_gv11b_get_pmm_per_chiplet_offset() for Volta Use HAL instead of hard coded values wherever required Bug 200398811 Jira NVGPU-556 Change-Id: I947e7febd4f84fae740a1bc74f99d72e1df523aa Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1690028 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
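A minimal sketch of the HAL pattern the commit describes, with simplified types and invented offset values; the real nvgpu gpu_ops layout and the per-chip constants differ.

#include <stdint.h>
#include <stdio.h>

struct gr_ops {
    uint32_t (*get_pmm_per_chiplet_offset)(void);
};

static uint32_t gr_gm20b_get_pmm_per_chiplet_offset(void)
{
    return 0x1000;  /* placeholder pre-Volta chiplet stride */
}

static uint32_t gr_gv11b_get_pmm_per_chiplet_offset(void)
{
    return 0x2000;  /* placeholder Volta chiplet stride */
}

int main(void)
{
    /* Each chip's HAL init picks the right implementation... */
    struct gr_ops gm20b = { .get_pmm_per_chiplet_offset = gr_gm20b_get_pmm_per_chiplet_offset };
    struct gr_ops gv11b = { .get_pmm_per_chiplet_offset = gr_gv11b_get_pmm_per_chiplet_offset };

    /* ...and common code asks the HAL instead of hard coding the value. */
    printf("gm20b: 0x%x, gv11b: 0x%x\n",
           gm20b.get_pmm_per_chiplet_offset(),
           gv11b.get_pmm_per_chiplet_offset());
    return 0;
}
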
* gpu: nvgpu: add support to get unicast addresses on volta  (Deepak Nibade, 2018-04-10)

    We have new broadcast registers on Volta, and we need to generate
    correct unicast addresses for them so that we can write those registers
    to the context image.

    Add a new GR HAL, create_priv_addr_table(), to do this conversion. Set
    gr_gk20a_create_priv_addr_table() for older chips and
    gr_gv11b_create_priv_addr_table() for Volta.
    gr_gv11b_create_priv_addr_table() will use the broadcast flags and then
    generate the appropriate list of unicast registers for each broadcast
    register.

    Bug 200398811
    Jira NVGPU-556
    Change-Id: Id53a9e56106d200fe560ffc93394cc0e976f455f
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1690027
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add broadcast address decode support for volta  (Deepak Nibade, 2018-04-10)

    With Volta we have more broadcast registers than on previous chips, and
    we don't decode them right now in gr_gk20a_decode_priv_addr().

    Add a new GR HAL, decode_priv_addr(), and set gr_gk20a_decode_priv_addr()
    for all previous chips. Add and use gr_gv11b_decode_priv_addr() for
    Volta. gr_gv11b_decode_priv_addr() will decode all the broadcast
    registers and set the broadcast flags appropriately.

    Define the new broadcast types below:
      PRI_BROADCAST_FLAGS_PMMGPC
      PRI_BROADCAST_FLAGS_PMM_GPCS
      PRI_BROADCAST_FLAGS_PMM_GPCGS_GPCTPCA
      PRI_BROADCAST_FLAGS_PMM_GPCGS_GPCTPCB
      PRI_BROADCAST_FLAGS_PMMFBP
      PRI_BROADCAST_FLAGS_PMM_FBPS
      PRI_BROADCAST_FLAGS_PMM_FBPGS_LTC
      PRI_BROADCAST_FLAGS_PMM_FBPGS_ROP

    Bug 200398811
    Jira NVGPU-556
    Change-Id: Ic673b357a75b6af3d24a4c16bb5b6bc15974d5b7
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1690026
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: vgpu: fix build errors on qnx  (Richard Zhao, 2018-04-10)

    - Declare global functions before reaching the implementation.
    - Avoid using current (the current process).
    - Assign ch->pid/tgid before using them.

    Jira VFND-4870
    Change-Id: I688a1b89ef4d5dcf046929eab11d7e523caba0a5
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1687142
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Nirav Patel <nipatel@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* nvgpu: remove gk20a_init_bus function  (Shashank Singh, 2018-04-07)

    gk20a_init_bus is not called from nvgpu, so it is better to remove it so
    that QNX can build bus_gk20a.c. QNX otherwise requires declarations for
    non-static functions.

    Change-Id: I2a6dff951ae0b4ea1193ca05435b5587f8172b1e
    Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1689261
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gk20a: Use ENOSPC instead of ENOMEM  (mpurohit, 2018-04-06)

    Modify gr_gk20a_add_zbc() to return -ENOSPC instead of -ENOMEM when all
    slots are already used. ENOMEM is more meant for memory allocation
    failures, anyway.

    This is required for porting the nvgpu_gpu_zbc test case using
    libnvrm_gpu APIs.

    Bug 1967537
    JIRA VQRM-3348
    Change-Id: Ib7bcb6ba94d2ca168bcad517a8a7260fbf278a91
    Signed-off-by: Mahesh Purohit <mpurohit@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1688302
    Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
    Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

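A rough illustration of the distinction, not the actual gr_gk20a_add_zbc() code; the table layout, size and names are invented. Exhausting a fixed slot table is "no space left" (-ENOSPC), while an allocation failure is what genuinely maps to -ENOMEM.

#include <errno.h>
#include <stdlib.h>
#include <string.h>

#define ZBC_TABLE_SIZE 16

struct zbc_entry {
    int used;
    unsigned int color[4];
};

static int zbc_table_add(struct zbc_entry *table, const unsigned int color[4])
{
    for (int i = 0; i < ZBC_TABLE_SIZE; i++) {
        if (!table[i].used) {
            table[i].used = 1;
            memcpy(table[i].color, color, sizeof(table[i].color));
            return i;
        }
    }
    /* Every slot is taken: out of slots, not out of memory. */
    return -ENOSPC;
}

int main(void)
{
    const unsigned int red[4] = { 0x3f800000, 0, 0, 0x3f800000 };
    struct zbc_entry *table = calloc(ZBC_TABLE_SIZE, sizeof(*table));
    int ret = 0;

    if (!table)
        return -ENOMEM;   /* the case that really is a memory failure */

    /* Fill the table; the extra add reports -ENOSPC, not -ENOMEM. */
    for (int i = 0; i <= ZBC_TABLE_SIZE; i++)
        ret = zbc_table_add(table, red);

    free(table);
    return ret == -ENOSPC ? 0 : 1;
}
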
* gpu: nvgpu: De-linuxify pmgr code  (Alex Waterman, 2018-04-05)

    The pmgr code is, in theory, common code. However, there were uses of
    Linux-specific code within it. This patch cleans that up by deleting the
    unnecessary os_linux.h includes and the usage of kfree(), and adds
    several platform fields to the gk20a struct.

    The platform data is copied to the gk20a struct in the platform
    initialization code so that this common code can access said data
    without requiring any knowledge of the OS platform data.

    JIRA NVGPU-525
    Change-Id: Ic4bb6021f60b0a0778779ab5f3e15b7e5ca98306
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1673825
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add usermode submission interface HAL  (Sourab Gupta, 2018-04-05)

    The patch adds the HAL interfaces for handling usermode submission,
    particularly allocating channel-specific usermode userd. These
    interfaces are currently implemented only on QNX and are created
    accordingly. As and when Linux adds usermode submission support, we can
    revisit them if any further changes are needed.

    Change-Id: I790e0ebdfaedcdc5f6bb624652b1af4549b7b062
    Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1683392
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: pass alloc_gpfifo args to gk20a_channel_alloc_gpfifo  (Sourab Gupta, 2018-04-05)

    The patch defines 'struct nvgpu_gpfifo_args' to be filled by the
    alloc_gpfifo(_ex) ioctls and passed to the gk20a_channel_alloc_gpfifo
    function. This is required as a prep step towards having usermode
    submission support in the core channel code.

    Change-Id: I72acc00cc5558dd3623604da7d716bf849f0152c
    Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1683391
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: handle misaligned_addr SM exception  (Deepak Nibade, 2018-04-04)

    We right now do not handle the misaligned_addr SM exception explicitly,
    and hence we incorrectly initiate CILP on this exception.

    Handle this exception explicitly in this sequence:
    - set the error notifier first
    - clear the interrupt
    - return an error from gr_gv11b_handle_warp_esr_error_misaligned_addr()
      so that RC recovery is triggered by gk20a_gr_isr()

    Ensure that the error value is propagated back to gk20a_gr_isr()
    correctly. Use nvgpu_set_error_notifier_if_empty() to set the error
    notifier, since this prevents overwriting the error notifier value in
    case gk20a_gr_isr() also tries to write some error notifier value.

    Bug 200388475
    Jira NVGPU-554
    Change-Id: I84c4d202a8068e738567ccd344e05d9d5f6ad2f0
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1686781
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

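The shape of that handling sequence, in a stand-alone sketch with invented helpers (set_error_notifier_if_empty(), clear_warp_esr(), the error code); the real gv11b handler operates on nvgpu structures and registers.

#include <errno.h>
#include <stdbool.h>

struct channel {
    int error_notifier;
    bool notifier_set;
};

static void set_error_notifier_if_empty(struct channel *ch, int code)
{
    /* Do not overwrite a value another path (e.g. the gr ISR) may write. */
    if (!ch->notifier_set) {
        ch->error_notifier = code;
        ch->notifier_set = true;
    }
}

static void clear_warp_esr(void)
{
    /* The warp ESR clear register write would go here. */
}

/* Returns non-zero so the caller (the gr ISR) starts RC recovery instead of
 * falling into the CILP path. */
static int handle_warp_esr_misaligned_addr(struct channel *ch)
{
    set_error_notifier_if_empty(ch, 13 /* hypothetical notifier code */);
    clear_warp_esr();
    return -EFAULT;
}

int main(void)
{
    struct channel ch = { 0, false };

    return handle_warp_esr_misaligned_addr(&ch) != 0 ? 0 : 1;
}
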
* gpu: nvgpu: Use u64 for log mask  (Terje Bergstrom, 2018-04-04)

    BIT() is defined as returning a 64-bit value. We use it to create the
    log mask values, but the functions that accept a log mask take only u32
    as a parameter. Use u64 as the log mask parameter for the logging
    functions to match the sizes.

    Change-Id: I6f0803a7d04ee6a2ee725b5defc4cc14b5b7acf5
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1683818
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

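A minimal illustration of the size mismatch, with simplified stand-ins for the nvgpu macros and an invented log bit: a 64-bit BIT() value passed to a u32 parameter silently drops bits 32..63.

#include <stdint.h>
#include <stdio.h>

#define BIT(i) (1ULL << (i))

#define gpu_dbg_high BIT(34)    /* hypothetical log bit above 31 */

static void log_msg_u32(uint32_t mask)
{
    printf("u32 mask: 0x%x\n", mask);
}

static void log_msg_u64(uint64_t mask)
{
    printf("u64 mask: 0x%llx\n", (unsigned long long)mask);
}

int main(void)
{
    log_msg_u32(gpu_dbg_high);  /* truncates to 0: the bit is lost */
    log_msg_u64(gpu_dbg_high);  /* keeps the full 64-bit mask */
    return 0;
}
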
* gpu: nvgpu: Correct sign qualifiers for LTC code  (Terje Bergstrom, 2018-04-03)

    The constants we use in LTC code are missing the qualifier indicating
    whether the constant is signed or unsigned. Add qualifiers for the LTC
    code and for the ZBC-related constant used in LTC code.

    Change-Id: Id80078722f8a4f50eb53370146437bebb72a3ffc
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1683859
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

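The kind of change involved, shown with invented macro names and values rather than the real LTC constants: a U suffix makes the constant's unsignedness explicit, so arithmetic against u32 register values involves no implicit sign conversion.

#include <stdint.h>

/* Before: plain literals default to signed int. */
#define LTC_CBC_BASE_ALIGNMENT_OLD   2048
#define ZBC_COLOR_CLEAR_FMT_OLD      0x4

/* After: explicitly unsigned, matching the u32 variables they feed. */
#define LTC_CBC_BASE_ALIGNMENT       2048U
#define ZBC_COLOR_CLEAR_FMT          0x4U

static uint32_t align_base(uint32_t base)
{
    /* Both operands are unsigned, so no implicit sign conversion occurs. */
    return (base + LTC_CBC_BASE_ALIGNMENT - 1U) & ~(LTC_CBC_BASE_ALIGNMENT - 1U);
}

int main(void)
{
    return align_base(4000U) == 4096U ? 0 : 1;
}
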
* gpu: nvgpu: initialize ctxsw state for golden context creation  (seshendra Gadagottu, 2018-04-03)

    If golden context creation happens before any GPU railgate, then channel
    creation is always fine. If GPU railgate happens after GPU finalize
    poweron but before golden context creation, then golden context creation
    fails during the first channel creation with a watchdog timeout from
    ctxsw because of invalid ctxsw state.

    To fix this issue, if the golden context is not yet created, always
    query the ctxsw image sizes during finalize power-on, which puts the
    ctxsw HW in the correct state before golden context creation.

    Bug 2051863
    Change-Id: I81d221100a099b12bad3adc2d252de4621c335a5
    Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1682265
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: fix address table for GPCS_TPC6 broadcast conversion  (Deepak Nibade, 2018-04-03)

    In gr_gk20a_create_priv_addr_table() and
    gv11b_gr_egpc_etpc_priv_addr_table(), we create a table of unicast
    addresses from broadcast addresses.

    For GPC broadcast addresses like NV_PGRAPH_PRI_EGPCS_ETPC6_SM_*, we
    generate the table assuming there are 7 TPCs in all the GPCs. But this
    is incorrect in some cases, like GV100, where GPC0/1 have only 6 TPCs,
    and hence we end up generating registers which do not exist.

    Fix this by explicitly checking the number of TPCs and ensuring that the
    generated address belongs to a valid TPC.

    Bug 200400376
    Jira NVGPU-564
    Change-Id: I65d7d6cd7f0bf16171eb54ed71f1f3840ade3495
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1686806
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Delete unused regops data  (Alex Waterman, 2018-03-30)

    Flagged by CLANG: this data is unused and may now be deleted.

    JIRA NVGPU-525
    Change-Id: Idf232b98aa3dfa6b03d29ec8b38cde58de20d29f
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1673819
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: WAR unlikely() bug in CLANG  (Alex Waterman, 2018-03-30)

    CLANG, when compiling regops_gk20a.c, sees the following warning:

      ../drivers/gpu/nvgpu/gk20a/regops_gk20a.c:464:30: error: equality
          comparison with extraneous parentheses
          [-Werror,-Wparentheses-equality]
              if (unlikely(skip_read_lo == false)) {
                           ~~~~~~~~~~~~~^~~~~~~~
      ../drivers/gpu/nvgpu/gk20a/regops_gk20a.c:464:30: note: remove
          extraneous parentheses around the comparison to silence this
          warning
              if (unlikely(skip_read_lo == false)) {
                           ~            ^       ~
      ../drivers/gpu/nvgpu/gk20a/regops_gk20a.c:464:30: note: use '=' to turn
          this equality comparison into an assignment
              if (unlikely(skip_read_lo == false)) {
                                         ^~
                                         =
      1 error generated.

    But this code is obviously fine. However, it's simple enough to work
    around by just deleting the unlikely() call; we don't do anything with
    that hint anyway.

    JIRA NVGPU-525
    Change-Id: I674855ad08daf65ac6d79ceab7d4f56f637d4437
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1673818
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

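A sketch of the workaround, not the verbatim nvgpu diff: dropping the hint macro means CLANG no longer sees "extraneous parentheses" around the equality inside the macro expansion. The surrounding function is invented for illustration.

#include <stdbool.h>

/* The hint macro that was being used before the workaround. */
#define unlikely(x) __builtin_expect(!!(x), 0)

static int read_lo(bool skip_read_lo)
{
    int reads = 0;

    /* Before: if (unlikely(skip_read_lo == false)) { ... } */

    /* After: plain comparison, no branch-prediction hint. */
    if (skip_read_lo == false)
        reads++;

    return reads;
}

int main(void)
{
    return read_lo(true);
}
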
* gpu: nvgpu: Use proper signage in shift operation  (Alex Waterman, 2018-03-30)

    This fails with a warning when compiling with CLANG.

    JIRA NVGPU-525
    Change-Id: Ied04e1683d1740d7f946902edc93299d223564fc
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1673817
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: channel_gk20a.c cleanup for POSIX  (Alex Waterman, 2018-03-30)

    Remove a variable which is assigned to but never used. Add the proper
    include (<nvgpu/log2.h>) for ilog2().

    JIRA NVGPU-525
    Change-Id: I42f3fddad9c294dc64343082e1dbd44b19120089
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1673816
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add gops.fifo.setup_sw  (Sourab Gupta, 2018-03-29)

    BAR1/userd setup is different for the RM server. Created a common
    function gk20a_init_fifo_setup_sw_common.

    Jira VQRM-3058
    Change-Id: I655b54e21ed5f15dcb8e7b01bd9cd129b35ae7a3
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1665691
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add gops.fifo.set_error_notifier  (Richard Zhao, 2018-03-29)

    RM Server overrides it for handling stall interrupts.

    Jira VQRM-3058
    Change-Id: I8b14f073e952d19c808cb693958626b8d8aee8ca
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1679709
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add gops.fifo.channel_suspend/channel_resume  (Richard Zhao, 2018-03-29)

    RM Server acts differently for channel suspend/resume.

    Jira VQRM-3058
    Change-Id: If41e3099164654db448d1157fd7f51dd00c5e201
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1679707
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add gops.fifo.check_tsg_ctxsw_timeout/check_ch_ctxsw_timeout  (Richard Zhao, 2018-03-29)

    RM Server acts differently for the ctxsw timeout check. It won't check
    GP_GET or accumulated timeouts, but will notify the guest and go to
    recovery.

    Jira VQRM-3058
    Change-Id: I428aea34dc517311eb7e73feb556145e916309fb
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1679706
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: add gops.fifo.ch_abort_clean_up  (Richard Zhao, 2018-03-29)

    Channel abort clean-up is only needed by the native and vgpu drivers,
    not the RM server. The RM server expects the guest to clean up itself,
    so the RM server should not set the callback.

    Jira VQRM-3058
    Change-Id: I11b49b6f2d51c871e31de16955d487dca82609cb
    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1679705
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: make fifo/ch functions called by RM Server global  (Sourab Gupta, 2018-03-29)

    The patch globally declares a few channel/fifo HAL functions required
    for QNX code compilation (as they are referred to elsewhere in QNX
    code). This is required as part of bringing the nvgpu channel/FIFO HAL
    into QNX.

    Jira VQRM-3058
    Change-Id: Ia176535b64de981d2f7ddb20f62015a0da74fd2a
    Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1662411
    GVS: Gerrit_Virtual_Submit
    Tested-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: enhance pbus error reporting  (Seema Khowala, 2018-03-29)

    - Dump timeout save0 and save1 even though they can be unreliable when
      fecs_tgt is set in save0. This is good to have for debug purposes.
    - Add a priv_ring HAL for decode_error_code.
    - Decode the FECS error code for supported error types.

    Bug 1998067
    Change-Id: I60cb6902d099df4a7df45fa624e44d9e0d46360f
    Signed-off-by: Seema Khowala <seemaj@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1683014
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: use gpc_tpc_count[gpc] for number of tpc in a gpc  (Seema Khowala, 2018-03-29)

    Using tpc_count instead of gpc_tpc_count indexed by gpc will result in
    pbus errors with decode-error or client-floorswept error codes.
    tpc_count represents the total number of TPCs, while gpc_tpc_count[gpc]
    represents the number of TPCs in the indexed GPC.

    Bug 1998067
    Change-Id: I9adfb98a6c3e209cbb02a8cd5090f6b6adc1ec4b
    Signed-off-by: Seema Khowala <seemaj@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1682469
    Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
    Tested-by: Thomas Fleury <tfleury@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

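A toy illustration of the indexing distinction; the GPC/TPC counts below are made up (a GV100-like configuration where not every GPC has the same number of TPCs). Iterating up to the total count inside a single GPC would touch TPCs that do not exist there, which is what triggers the priv bus errors the commit describes.

#include <stdio.h>

#define NUM_GPC 6

int main(void)
{
    /* Per-GPC TPC counts; GPC0/1 here have fewer TPCs than the rest. */
    const int gpc_tpc_count[NUM_GPC] = { 6, 6, 7, 7, 7, 7 };
    int tpc_count = 0;

    for (int gpc = 0; gpc < NUM_GPC; gpc++)
        tpc_count += gpc_tpc_count[gpc];    /* total across the GPU */
    printf("total TPCs: %d\n", tpc_count);

    for (int gpc = 0; gpc < NUM_GPC; gpc++) {
        /* Correct: bound the inner loop by this GPC's own TPC count,
         * not by the GPU-wide tpc_count. */
        for (int tpc = 0; tpc < gpc_tpc_count[gpc]; tpc++)
            printf("GPC%d TPC%d\n", gpc, tpc);
    }
    return 0;
}
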
* gpu: nvgpu: fix priv error register reads  (Thomas Fleury, 2018-03-28)

    Current code does not compute priv error register offsets properly. This
    leads to invalid decoding of priv errors and can also trigger additional
    priv errors.

    - Add a GPU_LIT_GPC_PRIV_STRIDE define.
    - Return proj_gpc_priv_stride for GPU_LIT_GPC_PRIV_STRIDE in the HALs.
    - Use GPU_LIT_GPC_PRIV_STRIDE instead of GPU_LIT_GPC_STRIDE in
      g->ops.priv_ring.isr() to compute priv error register offsets.

    Bug 2093058
    Change-Id: Ia7c36ccba0441126784bb0e00452f2cf1196ef71
    Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1682118
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Seema Khowala <seemaj@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Tested-by: Seema Khowala <seemaj@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: simplify job semaphore release in abort  (Konsta Holtta, 2018-03-28)

    Instead of looping over all jobs and releasing their semaphores
    separately, do just one semaphore release. All the jobs are using the
    same sema index, and the final, maximum value of it is known. Also move
    this resetting into ch->sync->set_min_eq_max() to be consistent with
    syncpoints.

    Change-Id: I03601aae67db0a65750c8df6b43387c042d383bd
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1680362
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: delete semaphore release support  (Konsta Holtta, 2018-03-27)

    Semaphores don't need to be released from the CPU anymore, so clarify
    the code by deleting nvgpu_semaphore_release() and refactoring
    __nvgpu_semaphore_release() into nvgpu_semaphore_reset(), which only
    "fast-forwards" the semaphore to a later value.

    While doing this, the meaning of nvgpu_semaphore_incr() changes, so
    rename it to nvgpu_semaphore_prepare(). Now it's only used to prepare an
    nvgpu_semaphore for a value that the HW will increment the sema to.

    Also change the BUG_ON that guards sema double-inits into just WARN_ON.

    Change-Id: I6f6df368ec5436cc97a229697742b6a4115dca51
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1680361
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Reset streaming on perfbuf_enable and perfbuf_disable  (Martin Radev, 2018-03-26)

    Similarly to css_hw_(enable|disable)_snapshot, the HWPM state should be
    reset on perfbuf_enable and perfbuf_disable to avoid leaking snapshot
    data into a freshly mapped buffer.

    Bug 1960846
    Change-Id: I94826b209ef4b8cb6ad44d3b8667745270c6a7e1
    Signed-off-by: Martin Radev <mradev@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1676009
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: disallow invalid syncpoint wait ids  (Konsta Holtta, 2018-03-23)

    Instead of ignoring a wait when a raw syncpoint prefence has an invalid
    id, reject the submit with -EINVAL, just like with syncpoints in
    syncfds.

    Change-Id: I9b5c417bd1c7cd081c79659d088ac2c915de8c0e
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1680281
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

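A sketch of the validation policy with invented helper names and an assumed syncpoint count, not the real submit path: an invalid syncpoint id in a raw prefence now fails the submit with -EINVAL instead of being silently skipped.

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

#define NUM_SYNCPTS 192U    /* hypothetical host1x syncpoint count */

static bool syncpt_id_is_valid(uint32_t id)
{
    return id < NUM_SYNCPTS;
}

static int submit_wait_prefence(uint32_t syncpt_id, uint32_t threshold)
{
    if (!syncpt_id_is_valid(syncpt_id))
        return -EINVAL;  /* previously: the wait was just ignored */

    /* ... push a wait for (syncpt_id, threshold) into the gpfifo ... */
    (void)threshold;
    return 0;
}

int main(void)
{
    return submit_wait_prefence(1000U, 1U) == -EINVAL ? 0 : 1;
}
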
* gpu: nvgpu: allow syncfds as prefences on deterministic  (Konsta Holtta, 2018-03-23)

    Accept submits on deterministic channels even when the prefence is a
    syncfd, but only if it has just one fence inside. Because
    NVGPU_SUBMIT_GPFIFO_FLAGS_SYNC_FENCE is shared between pre- and
    postfences, a postfence (SUBMIT_GPFIFO_FLAGS_FENCE_GET) is not allowed
    at the same time though.

    The sync framework is problematic for deterministic channels due to
    certain allocations that are not controlled by nvgpu. However, that only
    applies for postfences, yet we've disallowed FLAGS_SYNC_FENCE for
    deterministic channels even when a postfence is not needed.

    Bug 200390539
    Change-Id: I099bbadc11cc2f093fb2c585f3bd909143238d57
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1680271
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: set safe state for user managed syncpoints  (Deepak Nibade, 2018-03-23)

    The MAX/threshold value of a user managed syncpoint is not tracked by
    nvgpu. So if a channel is reset by nvgpu, there could be waiters still
    waiting on some user syncpoint fence.

    Fix this by setting a large safe value on the user managed syncpoint
    when aborting the channel and when closing the channel. We right now
    increment the current value by 0x10000, which should be sufficient to
    release any pending waiter.

    Bug 200326065
    Jira NVGPU-179
    Change-Id: Ie6432369bb4c21bd922c14b8d5a74c1477116f0b
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1678768
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

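A conceptual sketch with a simplified syncpoint model and invented names: on channel teardown the syncpoint is fast-forwarded by a large margin so that any fence the user armed against it becomes signaled, even though nvgpu never saw the user's threshold.

#include <stdbool.h>
#include <stdint.h>

#define USER_SYNCPT_SAFE_INCR 0x10000U  /* margin used by the commit */

struct syncpt {
    uint32_t value;
};

static bool fence_expired(const struct syncpt *sp, uint32_t threshold)
{
    /* Wrap-safe "value >= threshold" comparison. */
    return (int32_t)(sp->value - threshold) >= 0;
}

static void syncpt_set_safe_state(struct syncpt *sp)
{
    /* nvgpu does not track the user's max threshold, so jump far ahead. */
    sp->value += USER_SYNCPT_SAFE_INCR;
}

int main(void)
{
    struct syncpt sp = { .value = 5 };
    uint32_t user_threshold = 0x4000;   /* armed by userspace, unknown to nvgpu */

    syncpt_set_safe_state(&sp);
    return fence_expired(&sp, user_threshold) ? 0 : 1;
}
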
* gpu: nvgpu: gv100: fix PMA list alignment in ctxsw buffer  (Deepak Nibade, 2018-03-21)

    The GV100 ucode is changed so that it expects the
    LIST_nv_perf_pma_ctx_reg list in the ctxsw buffer to be 256-byte
    aligned, but the same change is not applied to other chip ucodes.

    Add a new HAL (*add_ctxsw_reg_perf_pma) to configure the PMA register
    list, and define a common HAL gr_gk20a_add_ctxsw_reg_perf_pma() for all
    other chips except GV100. Define a separate HAL for GV100,
    gr_gv100_add_ctxsw_reg_perf_pma(), and fix the required alignment in
    this function.

    Bug 1998067
    Change-Id: Ie172fe90e2cdbac2509f2ece953cd8552e66fc56
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1676655
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv100: fix num_fbpas while adding ctxsw buffer entries  (Deepak Nibade, 2018-03-21)

    For LIST_nv_pm_fbpa_ctx_regs, we right now call
    add_ctxsw_buffer_map_entries_subunits() to add registers corresponding
    to all the FBPAs. But while configuring the total number of registers,
    we do not consider floorswept FBPAs, and that causes misalignment in
    subsequent lists for GV100.

    Fix this by reading the disabled/floorswept FBPAs from the fuse and
    considering only those FBPAs which are active for GV100.

    Add a new HAL (*add_ctxsw_reg_pm_fbpa) to support this setting and
    define a common HAL gr_gk20a_add_ctxsw_reg_pm_fbpa() for all chips
    except GV100. Define a GV100-specific gr_gv100_add_ctxsw_reg_pm_fbpa()
    with the above-mentioned implementation to consider floorsweeping.

    Bug 1998067
    Change-Id: Id560551bb0b8142791c117b6d27864566c90b489
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1676654
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: delete unused job->pre_fence  (Konsta Holtta, 2018-03-19)

    The pre_fence member in channel_gk20a_job is no longer used for
    anything. Delete it. Only the post fence needs to be tracked.

    Jira NVGPU-527
    Jira NVGPU-528
    Bug 200390539
    Change-Id: Ia1a556728dabf9a8e305ed76020ac1aa0b4d6b88
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1676735
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: remove fence param from channel_sync  (Konsta Holtta, 2018-03-16)

    The fence parameter that gets output from gk20a_channel_sync's wait()
    and wait_fd() APIs is no longer used for anything. Delete it.

    Jira NVGPU-527
    Jira NVGPU-528
    Bug 200390539
    Change-Id: I659504062dc6aee83a0a0d9f5625372b4ae8c0e2
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1676734
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: remove support for foreign sema syncfds  (Konsta Holtta, 2018-03-16)

    Delete the proxy waiter for non-semaphore-backed syncfds in the sema
    wait path to simplify code, to remove dependencies to the sync framework
    (and thus Linux) and to support upcoming refactorings. This feature has
    never been used for actually foreign fences.

    Jira NVGPU-43
    Jira NVGPU-66
    Change-Id: I2b539aefd2d096a7bf5f40e61d48de7a9b3dccae
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1665119
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Use asid only under CONFIG_SYNC in channel_sync_gk20a.c  (Alex Waterman, 2018-03-16)

    This variable is only ever used under the CONFIG_SYNC config, so make
    sure that we only define/assign to it when CONFIG_SYNC is enabled.

    JIRA NVGPU-525
    Change-Id: I27160adbd6a46f58e21f24ab19d37966ded5e7de
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1673812
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

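The general pattern, illustrated with a stand-alone stand-in: a local that is only consumed inside a config guard should also be defined and assigned only under that guard, otherwise the compiler warns about a set-but-unused variable when the option is off. CONFIG_SYNC is the real kernel option; the function and the lookup are invented.

#include <stdio.h>

static int add_wait(int fd)
{
#ifdef CONFIG_SYNC
    int asid = -1;  /* only defined when the sync framework is built in */

    if (fd >= 0)
        asid = fd * 2;  /* placeholder for the real lookup */
    printf("asid %d\n", asid);
#else
    (void)fd;  /* nothing to track without CONFIG_SYNC */
#endif
    return 0;
}

int main(void)
{
    return add_wait(3);
}
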
* gpu: nvgpu: add gpu_va to update_hwpm_ctxsw_mode parameters()  (Aparna Das, 2018-03-16)

    It'll allow the function to use fixed mapping.

    Jira VQRM-2982
    Change-Id: I98159c5b199ce1854b1b40704392237cadb71ef2
    Signed-off-by: Aparna Das <aparnad@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1660225
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Tested-by: Richard Zhao <rizhao@nvidia.com>
    Reviewed-by: Nirav Patel <nipatel@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: wait for all prefence semas on gpu  (Konsta Holtta, 2018-03-16)

    The pre-fence wait for semaphores in the submit path has supported a
    fast path for fences that have only one underlying semaphore. The fast
    path just inserts the wait on this sema to the pushbuffer directly. For
    other fences, the path has been using a CPU wait indirection, signaling
    another semaphore when we get the CPU-side callback.

    Instead of only supporting prefences with one sema, unroll all the
    individual semaphores and insert waits for each to a pushbuffer, like
    we've already been doing with syncpoints. Now all sema-backed syncs get
    the fast path. This simplifies the logic and makes it more explicit that
    only foreign fences need the CPU wait.

    There is no need to hold references to the sync fence or the semas
    inside: this submitted job only needs the global read-only sema mapping
    that is guaranteed to stay alive while the VM of this channel stays
    alive, and the job does not outlive this channel.

    Jira NVGPU-43
    Jira NVGPU-66
    Jira NVGPU-513
    Change-Id: I7cfbb510001d998a864aed8d6afd1582b9adb80d
    Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1636345
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: gv10x volt rail boardobj changes  (Mahantesh Kumbar, 2018-03-15)

    - Created volt ops under pmu_ver to support volt_set_voltage,
      volt_get_voltage & volt_send_load_cmd_to_pmu.
    - Renamed the gp10x volt load, set_voltage & get_voltage method names.
    - Added new volt load, set_voltage & get_voltage methods for gv10x using
      RPC & added code to handle the ack in pmu_rpc_handler(), along with
      struct rail_list changes.
    - Updated the volt ops of gp106 & gv100 to point to the respective
      methods.
    - Added members volt_dev_idx_ipc_vmin & volt_scale_exp_pwr_equ_idx to
      "struct nv_pmu_volt_volt_rail_boardobj_set" & "struct voltage_rail"
      and made changes to update the members as needed.
    - Added member volt_scale_exp_pwr_equ_idx to
      "struct vbios_voltage_rail_table_1x_entry" to read the value from the
      VBIOS table & update the rail boardobj set interface.
    - Added defines for the volt RPC "NV_PMU_RPC_ID_VOLT_*".
    - Defined structs for volt load, set_voltage & get_voltage to execute
      the volt RPC.

    Change-Id: I4a41adcf7536468beaa8a73f551b1d608aabd161
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1659728
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

* gpu: nvgpu: Updated RPC to support copyback & callback  (Mahantesh Kumbar, 2018-03-13)

    - Updated & added a new parameter "bool is_copy_back" to
      nvgpu_pmu_rpc_execute() to support copying back a processed RPC
      request from the PMU to the caller when the parameter value is true;
      this blocks the method till it receives an ACK from the PMU for the
      requested RPC.
    - Added "struct rpc_handler_payload" to hold the info required for the
      RPC handler, like the RPC buffer address, & to clear memory if copy
      back is not requested.
    - Added define PMU_RPC_EXECUTE_CPB to support copying back a processed
      RPC request from the PMU to the caller.
    - Updated RPC callback handler support: created memory & assigned a
      default handler if a callback is not requested, else use the callback
      parameters data for the request to the PMU.
    - Added define PMU_RPC_EXECUTE_CB to support callbacks.
    - Updated pmu_wait_message_cond(), restricting the condition check to
      8-bit instead of a 32-bit condition check.

    Change-Id: Ic05289b074954979fd0102daf5ab806bf1f07b62
    Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
    Reviewed-on: https://git-master.nvidia.com/r/1664962
    Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
    Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
