path: root/drivers/gpu/nvgpu/gk20a/gk20a.h
* gpu: nvgpu: Add device_info_data support (Lakshmanan M, 2016-05-27)
  Added device_info_data parsing support for the Maxwell GPU series.

  JIRA DNVGPU-26
  Change-Id: I06dbec6056d4c26501e607c2c3d67ef468d206f4
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: http://git-master/r/1151602
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: move & rename acr_gm20b to acr_desc (Mahantesh Kumbar, 2016-05-26)
  acr_gm20b is renamed to acr_desc to support multiple GPU chips.

  JIRA DNVGPU-10
  Change-Id: Ib3b38d5845043f026ddc365a682b7bb454463326
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: http://git-master/r/1152401
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: secure boot HAL update (Mahantesh Kumbar, 2016-05-26)
  Updated/added secure boot HAL with methods required to support
  multiple GPU chips.

  JIRA DNVGPU-10
  Change-Id: I343b289f2236fd6a6b0ecf9115367ce19990e7d5
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: http://git-master/r/1151784
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Move PCI devnodes to own directory (Terje Bergstrom, 2016-05-25)
  To be able to scan, PCI devnodes need to be in a directory with read
  permission. By default /dev is read protected by SELinux policy. Move
  the devnodes to their own directory so that reading this one directory
  can be allowed.

  At the same time, rename the nodes to start with the string "card-".

  JIRA DNVGPU-54
  Change-Id: I0df4ced08afd1f3a468e983d07395ffcb8050365
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1152745
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: Add support for gm204 and gm206 (Terje Bergstrom, 2016-05-23)
  Add support for chips gm204 and gm206. Also adds support for reading
  the VBIOS and booting the devinit and pre-os images on the PMU.

  Change-Id: I4824b44245611e5379ace62793cc37158048f432
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1120467
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Ken Adams <kadams@nvidia.com>

* gpu: nvgpu: update trace for sched params (Thomas Fleury, 2016-05-21)
  Use an inline function instead of a macro to "expand" all channel
  parameters.

  Jira EVLR-244
  Jira EVLR-318
  Change-Id: I4e8c5ee6bc9da36564af171be809f50dd2dfd439
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1150050
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

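  A minimal sketch of that pattern, assuming hypothetical parameter names and
  a hypothetical trace callback (the real tracepoints live in the nvgpu trace
  headers):

    #include <linux/types.h>

    /* Illustrative subset of the scheduling parameters a channel carries. */
    struct sched_params_sketch {
            int chid;
            u32 timeslice_us;
            u32 runlist_interleave;
    };

    /* An inline function expands the parameters with full type checking,
     * which a multi-line macro would not provide. */
    static inline void trace_sched_params_sketch(
                    void (*emit)(int chid, u32 timeslice_us, u32 interleave),
                    const struct sched_params_sketch *p)
    {
            emit(p->chid, p->timeslice_us, p->runlist_interleave);
    }
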
* gpu: nvgpu: Add HAL op for PMU reset (Terje Bergstrom, 2016-05-20)
  The sequence to reset the PMU is different for iGPU and dGPU.
  Specialize the op and implement the iGPU version.

  Change-Id: I5b9ff2c018a736bc9e27b90d0942c52706b12a12
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1150540

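  A hedged sketch of the HAL pattern this applies: a per-chip function
  pointer installed at HAL init time. The structure and names below are
  placeholders, not the actual gpu_ops members.

    struct gk20a;   /* the driver's main per-device structure (gk20a.h) */

    /* Placeholder for the relevant gpu_ops sub-structure. */
    struct pmu_ops_sketch {
            int (*reset)(struct gk20a *g);  /* iGPU and dGPU install different bodies */
    };

    static int gk20a_pmu_reset_sketch(struct gk20a *g)
    {
            /* iGPU-specific reset sequence would go here. */
            return 0;
    }

    /* Called from the gk20a (iGPU) HAL init path in this sketch;
     * a dGPU chip would install its own implementation instead. */
    static void gk20a_init_pmu_ops_sketch(struct pmu_ops_sketch *ops)
    {
            ops->reset = gk20a_pmu_reset_sketch;
    }
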
* gpu: nvgpu: hwpm broadcast register support (Peter Daifuku, 2016-05-19)
  Add support for hwpm broadcast registers (ltc and lts).

  In gr_gk20a_find_priv_offset_in_buffer, replace the "Unknown address
  type" error with an informational message: gr_gk20a_exec_ctx_ops calls
  gk20a_get_ctx_buffer_offsets and, if that fails, calls
  gr_gk20a_get_pm_ctx_buffer_offsets; HWPM registers will fail the first
  call, so an error or warning is overkill.

  Bug 1648200
  Change-Id: I197b82579e9894652add4ff254418f818981415a
  Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
  Reviewed-on: http://git-master/r/1131365
  (cherry picked from commit 9f30a92c5d87f6dadd34cc37396a6b10e3a72751)
  Reviewed-on: http://git-master/r/1133628
  (cherry picked from commit 7eb7cfd998852ba7f7c4c40d3db286f66e83ab3a)
  Reviewed-on: http://git-master/r/1127749
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Read all fields of device_info (Terje Bergstrom, 2016-05-18)
  We were not using the engine_type field in device info, and the code
  did not handle chained entries properly. The code assumed that the
  first entry is for graphics and the second for CE, which is not always
  true.

  Improve the code to go through all entries of device_info, and
  preserve values across entries until we reach the last entry. Only the
  last entry triggers a write to the fifo engine info. There can also be
  multiple engines with the same type, so accumulate interrupts and
  reset ids from all of them.

  As the code got fixed, it now reads the engine enum correctly from
  hardware. We used to compare that against CE0, but we should compare
  against CE2.

  gk20a_fifo_reset_engine() uses the wrong constants: it is passed an
  internal numbering of engines, but it compares them against the
  hardware engine enum.

  Change-Id: Ia59273921c602d2a090f7a5b1404afb0fca2532c
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1147746
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>

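  A simplified, hedged sketch of the parsing scheme described above; the
  entry layout and helper fields are placeholders, not the real
  top_device_info_* definitions.

    #include <linux/bitops.h>
    #include <linux/types.h>

    struct dev_info_entry_sketch {
            u32 intr_id;
            u32 reset_id;
            bool chained;   /* true if a later entry continues this engine */
    };

    static void parse_device_info_sketch(const struct dev_info_entry_sketch *e,
                                         int n)
    {
            u32 intr_mask = 0, reset_mask = 0;
            int i;

            for (i = 0; i < n; i++) {
                    /* accumulate across chained entries and across multiple
                     * engines of the same type */
                    intr_mask |= BIT(e[i].intr_id);
                    reset_mask |= BIT(e[i].reset_id);

                    if (!e[i].chained) {
                            /* last entry of a chain: the real code writes the
                             * fifo engine info here */
                            intr_mask = 0;
                            reset_mask = 0;
                    }
            }
    }
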
* gpu: nvgpu: Fix CWD floorsweep programming (Terje Bergstrom, 2016-05-16)
  Program CWD TPC and SM registers correctly. The old code did not work
  when there are more than 4 TPCs.

  Refactor init_fs_mask to reduce code duplication.

  Change-Id: Id93c1f8df24f1b7ee60314c3204e288b91951a88
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1143697
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>

* gpu: nvgpu: Do not program max ways evict (Terje Bergstrom, 2016-05-13)
  Setting max_ways_evict reserves some of L2 for CB. In gk20a, CB is in
  dedicated RAM, so we don't need to reserve space for it. The code gets
  invoked only on gk20a.

  Change-Id: Ib8efec8c5e90c135bd0c10bb1eaa3f797ec68698
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1144993

* gpu: nvgpu: refactor gk20a_mem_{wr,rd} for vidmem (Konsta Holtta, 2016-05-13)
  To support vidmem, pass g and mem_desc to the buffer memory accessor
  functions. This allows the functions to select the memory access
  method based on the buffer aperture instead of using the cpu pointer
  directly (like until now). The selection and aperture support will be
  in another patch; this patch only refactors these accessors, but keeps
  the underlying functionality as-is.

  gk20a_mem_{rd,wr}32() work as previously; add also gk20a_mem_{rd,wr}()
  for byte-indexed accesses, gk20a_mem_{rd,wr}_n() for memcpy()-like
  functionality, and gk20a_memset() for filling buffers with a constant.
  The 8 and 16 bit accessor functions are removed.

  vmap()/vunmap() pairs are abstracted to gk20a_mem_{begin,end}() to
  support other types of mappings or conditions where mapping the buffer
  is unnecessary or different.

  Several function arguments that would access these buffers are also
  changed to take a mem_desc instead of a plain cpu pointer. Some
  relevant occasions are changed to use the accessor functions instead
  of cpu pointers without them (e.g., memcpying to and from), but the
  majority of direct accesses will be adjusted later, when the buffers
  are moved to support vidmem.

  JIRA DNVGPU-23
  Change-Id: I3dd22e14290c4ab742d42e2dd327ebeb5cd3f25a
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/1121143
  Reviewed-by: Ken Adams <kadams@nvidia.com>
  Tested-by: Ken Adams <kadams@nvidia.com>

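  A hedged usage sketch of the accessors named above; the prototypes in the
  comment follow the commit description and should be checked against the
  actual headers rather than taken verbatim.

    /* Assumed shapes, based on the description above:
     *   int  gk20a_mem_begin(struct gk20a *g, struct mem_desc *mem);
     *   void gk20a_mem_end(struct gk20a *g, struct mem_desc *mem);
     *   u32  gk20a_mem_rd32(struct gk20a *g, struct mem_desc *mem, u32 word);
     *   void gk20a_mem_wr32(struct gk20a *g, struct mem_desc *mem, u32 word, u32 data);
     *   void gk20a_memset(struct gk20a *g, struct mem_desc *mem, u32 offset, u32 c, u32 size);
     */
    static int touch_buffer_sketch(struct gk20a *g, struct mem_desc *mem)
    {
            u32 v;

            if (gk20a_mem_begin(g, mem))            /* replaces an open-coded vmap() */
                    return -ENOMEM;

            v = gk20a_mem_rd32(g, mem, 0);          /* 32-bit word index, as before */
            gk20a_mem_wr32(g, mem, 0, v | 1);
            gk20a_memset(g, mem, 0x100, 0, 64);     /* zero 64 bytes at byte offset 0x100 */

            gk20a_mem_end(g, mem);                  /* replaces vunmap() */
            return 0;
    }
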
* gpu: nvgpu: add fuse overrides for tpc disabling (Richard Zhao, 2016-05-10)
  - add fuse_override in gops; implement it starting from gm20b
  - set the CWD FS register, so CUDA won't use disabled TPCs

  Bug 1757262
  Bug 200169697
  Change-Id: If7bac58bd3a6bcf2925197ea5b7c2d10a77e0933
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  (cherry picked from commit 66cde7724815e9e5e85ab9b07fc985a78530222f)
  Reviewed-on: http://git-master/r/1132177
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: Adeel Raza <araza@nvidia.com>
  Tested-by: Adeel Raza <araza@nvidia.com>

* gpu: nvgpu: add supported preemptions to gpu characteristics (Deepak Nibade, 2016-05-09)
  Add the below flag fields to gpu characteristics to indicate the
  supported and default preemption modes on the platform for graphics
  and compute:

    __u32 graphics_preemption_mode_flags;
    __u32 compute_preemption_mode_flags;
    __u32 default_graphics_preempt_mode;
    __u32 default_compute_preempt_mode;

  Add struct nvgpu_preemption_modes_rec to struct gr_gk20a to store
  these values locally.

  Use the platform specific get_preemption_mode_flags() to get the
  flags, and define gk20a/gm20b specific get_preemption_mode_flags()
  APIs.

  Bug 1646259
  Change-Id: I80193c0d988dc93bd96585f9aa631fd817f4dfa3
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1133595
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: separate IOCTL to set preemption mode (Deepak Nibade, 2016-05-09)
  Add a separate IOCTL, NVGPU_IOCTL_CHANNEL_SET_PREEMPTION_MODE, to
  allow setting preemption modes from the UMD.

  Define the preemption modes in nvgpu.h and use them everywhere.
  Remove the mode definitions from mm_gk20a.h.

  Also, we currently support setting only one preemption mode on a
  channel, but it is possible to have multiple preemption modes (one for
  graphics and one for compute) set simultaneously. Hence, update struct
  gr_ctx_desc to include two separate preemption modes
  (graphics_preempt_mode and compute_preempt_mode).

  The NVGPU_IOCTL_CHANNEL_SET_PREEMPTION_MODE API also supports setting
  two separate preemption modes, i.e. one for graphics and one for
  compute. Make the necessary changes in the code to support two
  preemption modes.

  Bug 1646259
  Change-Id: Ia1dea19e609ba8cc0de2f39ab6c0c4cd6b0a752c
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1131805
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

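  A userspace-side sketch of the new IOCTL. The IOCTL name comes from the
  commit above; the argument struct layout below is an assumption to be
  checked against the nvgpu UAPI header, not the actual definition.

    #include <sys/ioctl.h>
    #include <linux/nvgpu.h>    /* UAPI header that defines the ioctl number */

    /* Assumed argument layout: one field per engine class, as described above. */
    struct preempt_mode_args_sketch {
            unsigned int graphics_preempt_mode;
            unsigned int compute_preempt_mode;
    };

    static int set_preempt_modes_sketch(int channel_fd,
                                        unsigned int graphics_mode,
                                        unsigned int compute_mode)
    {
            struct preempt_mode_args_sketch args = {
                    .graphics_preempt_mode = graphics_mode,
                    .compute_preempt_mode = compute_mode,
            };

            return ioctl(channel_fd, NVGPU_IOCTL_CHANNEL_SET_PREEMPTION_MODE, &args);
    }
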
* nvgpu: vgpu: create fifo.force_reset_ch in gpu_ops (Haley Teng, 2016-05-09)
  gk20a_fifo_force_reset_ch() does not support vgpu now, so we need to
  create a function pointer in gpu_ops and assign it differently for
  vgpu and non-vgpu.

  Bug 200184349
  Change-Id: I5f8f4f731b4b970c4ff8de65531f25568e7691b6
  Signed-off-by: Haley Teng <hteng@nvidia.com>
  Reviewed-on: http://git-master/r/1130420
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Add trace and debugfs for sched params (Thomas Fleury, 2016-05-05)
  JIRA EVLR-244
  JIRA EVLR-318
  Change-Id: Ie95f42212dadcf2d0c1737eeb28812afb03b712f
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1120603
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Ken Adams <kadams@nvidia.com>

* gpu: nvgpu: Add PCIe device support (Terje Bergstrom, 2016-04-27)
  Add support for probing PCIe graphics cards.

  JIRA DNVGPU-7
  Change-Id: Iad3d31a1dc0ca6575d8a9916857022cac9181948
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1127684

* gpu: nvgpu: Idle GR before calling PMU ZBC save (Terje Bergstrom, 2016-04-26)
  On gk20a, when the PMU is updating ZBC colors it reads them from L2.
  But L2 has one port, and ZBC reads can race with other transactions.
  Idle graphics before sending the PMU the ZBC_UPDATE request.

  Also make pmu_save_zbc a HAL, because the PMU ucode has changes to
  bypass this problem on some chips.

  Bug 1746047
  Change-Id: Id8fcd6850af7ef1d8f0a6aafa0fe6b4f88b5f2d9
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1129017

* gpu: nvgpu: IOCTL to suspend/resume context (Deepak Nibade, 2016-04-19)
  Add the below IOCTL to suspend/resume a context:
  NVGPU_DBG_GPU_IOCTL_SUSPEND_RESUME_CONTEXTS

  Suspend sequence:
  - disable ctxsw
  - loop through list of channels
  - if channel is ctx resident, suspend all SMs
  - otherwise, disable channel/TSG
  - enable ctxsw

  Resume sequence:
  - disable ctxsw
  - loop through list of channels
  - if channel is ctx resident, resume all SMs
  - otherwise, enable channel/TSG
  - enable ctxsw

  Bug 200156699
  Change-Id: Iacf1bf7877b67ddf87cc6891c37c758a4644b014
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1120332
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: support binding multiple channels to a debug session (Deepak Nibade, 2016-04-19)
  We currently bind only one channel to a debug session, but some use
  cases might need multiple channels bound to the same debug session.

  Add this support by adding a list of channels to the debug session.
  The list structure is implemented as struct dbg_session_channel_data.

  The list node dbg_s_list_node is currently defined in struct
  dbg_session_gk20a, but this is inefficient when we need to add a debug
  session to multiple channels. Hence add a new reference structure,
  dbg_session_data, to store the dbg_session pointer and list entry.

  For each NVGPU_DBG_GPU_IOCTL_BIND_CHANNEL call, create the two
  reference structures, dbg_session_channel_data for the channel and
  dbg_session_data for the debug session, and bind them together.

  Define API nvgpu_dbg_gpu_get_session_channel(), which gets the first
  channel in the list of the debug session. Use this API wherever we
  refer to the channel bound to a debug session.

  Remove the dbg_sessions define in struct gk20a since it is not being
  used anywhere.

  Add a new API NVGPU_DBG_GPU_IOCTL_UNBIND_CHANNEL to support unbinding
  of a channel from a debug session.

  Bug 200156699
  Change-Id: I3bfa6f9cd5b90e7254a75c7e64ac893739776b7f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1120331
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

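  A hedged sketch of the two reference structures described above; the real
  definitions live in the dbg_gpu code and carry more fields than shown.

    #include <linux/list.h>

    struct dbg_session_sketch;

    /* One per bound channel: lives on the session's channel list. */
    struct dbg_session_channel_data_sketch {
            int channel_fd;
            struct list_head ch_entry;
    };

    /* One per bound session: lives on the channel's session list. */
    struct dbg_session_data_sketch {
            struct dbg_session_sketch *dbg_s;
            struct list_head dbg_s_entry;
    };

    struct dbg_session_sketch {
            struct list_head ch_list;       /* all channels bound to this session */
    };

    /* In this model, nvgpu_dbg_gpu_get_session_channel() simply returns the
     * first dbg_session_channel_data on ch_list, or NULL if the list is empty. */
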
* gpu: nvgpu: IOCTL to write/clear SM error states (Deepak Nibade, 2016-04-19)
  Add the below IOCTLs to write/clear SM error states:
  NVGPU_DBG_GPU_IOCTL_CLEAR_SINGLE_SM_ERROR_STATE
  NVGPU_DBG_GPU_IOCTL_WRITE_SINGLE_SM_ERROR_STATE

  Bug 200156699
  Change-Id: I89e3ec51c33b8e131a67d28807d5acf57b3a48fd
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1120330
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: support storing/reading single SM error state (Deepak Nibade, 2016-04-19)
  Add support to store the error state of a single SM before
  preprocessing the SM exception.

  The error state is stored as:

    struct nvgpu_dbg_gpu_sm_error_state_record {
            u32 hww_global_esr;
            u32 hww_warp_esr;
            u64 hww_warp_esr_pc;
            u32 hww_global_esr_report_mask;
            u32 hww_warp_esr_report_mask;
    };

  Note that we can safely append new fields to the above structure in
  the future if required.

  Also, add IOCTL NVGPU_DBG_GPU_IOCTL_READ_SINGLE_SM_ERROR_STATE to
  support reading an SM's error state from user space.

  Bug 200156699
  Change-Id: I9a62cb01e8a35c720b52d5d202986347706c7308
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1120329
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Wait for BAR1 bind (Terje Bergstrom, 2016-04-15)
  Wait for the BAR1 bind to complete before continuing. The register to
  wait on exists from Maxwell onwards.

  Change-Id: Ie3736033fdb748c5da8d7a6085ad6d63acaf41f5
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1123941

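  A hedged sketch of the wait: poll a bind-status register until the pending
  bit clears or a timeout expires. The accessor and bit below are
  placeholders for the Maxwell bus bind-status definitions.

    #include <linux/bitops.h>
    #include <linux/delay.h>
    #include <linux/errno.h>
    #include <linux/types.h>

    struct gk20a;

    u32 bus_bind_status_read_sketch(struct gk20a *g);   /* placeholder register read */
    #define BAR1_BIND_PENDING_SKETCH BIT(0)              /* placeholder status bit */

    static int wait_for_bar1_bind_sketch(struct gk20a *g)
    {
            unsigned int retries = 1000;

            while (retries--) {
                    if (!(bus_bind_status_read_sketch(g) & BAR1_BIND_PENDING_SKETCH))
                            return 0;       /* bind completed */
                    udelay(5);
            }
            return -ETIMEDOUT;
    }
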
* gpu: nvgpu: Add litter values HAL (Terje Bergstrom, 2016-04-15)
  Move per-chip constants to be returned by a chip specific function.
  Implement get_litter_value() for each chip.

  Change-Id: I2a2730fce14010924d2507f6fa15cc2ea0795113
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1121383

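  A hedged sketch of the get_litter_value() idea: common code asks for a
  key, each chip returns its own constant. Keys and values here are purely
  illustrative, not the real proj_* constants.

    enum litter_key_sketch {
            LITTER_NUM_FBPS_SKETCH,
            LITTER_NUM_PES_PER_GPC_SKETCH,
    };

    static int gk20a_get_litter_value_sketch(enum litter_key_sketch key)
    {
            switch (key) {
            case LITTER_NUM_FBPS_SKETCH:
                    return 1;       /* illustrative value only */
            case LITTER_NUM_PES_PER_GPC_SKETCH:
                    return 1;       /* illustrative value only */
            default:
                    return -1;      /* unknown key */
            }
    }

    /* Each chip installs its own version through the HAL, e.g.
     *      gops->get_litter_value = gk20a_get_litter_value_sketch;
     * so common code never hard-codes per-chip constants. */
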
* gpu: nvgpu: make tpc_fs_mask work on production board (Richard Zhao, 2016-04-14)
  On production fused boards, the driver uses gr_fe_tpc_fs_r() to mask
  TPCs, rather than fuses.

  Bug 1734150
  Change-Id: I7b4eb428f1ad0cf841a57214e0c8c1e8f17b2c5a
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1111630
  (cherry picked from commit 869ea54967812e03d9f1e69775ca56fd6459216c)
  Reviewed-on: http://git-master/r/1122121
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* debugfs: Pass bool pointer to debugfs_create_bool() (Alex Van Brunt, 2016-04-14)
  Port the change 621a5f7ad9cd1ce7933f1d302067cbd58354173c from
  kernel.org to the nvgpu driver.

  bug 200187033
  Change-Id: I7d742f614161d9d4ed59c4216d7c730d57ef4116
  Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1118397
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>

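  After the port, callers hand debugfs_create_bool() a pointer to a bool
  instead of a u32. A minimal example; the node name is illustrative, not a
  node the driver necessarily creates.

    #include <linux/debugfs.h>

    static bool timeouts_enabled = true;

    static void create_debug_nodes_sketch(struct dentry *parent)
    {
            /* The last argument is now a bool *, matching upstream commit
             * 621a5f7ad9cd ("debugfs: Pass bool pointer to debugfs_create_bool()"). */
            debugfs_create_bool("timeouts_enabled", 0644, parent, &timeouts_enabled);
    }
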
* gpu: nvgpu: Support GPUs with only one interrupt line (Terje Bergstrom, 2016-04-13)
  Not all GPUs have both stalling and non-stalling interrupts. Support
  the ones with just one interrupt line.

  Change-Id: I0f1e8faa5b353b8d1b10691375bd853152379a3a
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1120470
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Alex Waterman <alexw@nvidia.com>

* gpu: nvgpu: vgpu: add fecs trace support (Richard Zhao, 2016-04-11)
  Bug 1648908
  Change-Id: I7901e7bce5f7aa124a188101dd0736241d87bd53
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1031861
  Reviewed-on: http://git-master/r/1121261
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: vgpu: virtualized SMPC/HWPM ctx switch (Peter Daifuku, 2016-04-08)
  Add support for SMPC and HWPM context switching when virtualized.

  Bug 1648200
  JIRASW EVLR-219
  JIRASW EVLR-253
  Change-Id: I80a1613eaad87d8510f00d9aef001400d642ecdf
  Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
  Reviewed-on: http://git-master/r/1122034
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Use device instead of platform_device (Terje Bergstrom, 2016-04-08)
  Use struct device instead of struct platform_device wherever possible.
  This allows adding other bus types later.

  Change-Id: I1657287a68d85a542cdbdd8a00d1902c3d6e00ed
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1120466

* gpu: nvgpu: Add Fuse prints on PMU Halt (Supriya, 2016-04-06)
  Print fuse values when the PMU halts with an error and the mailbox
  reads 0xDEADDEAD.

  Bug 1737044
  Change-Id: I59f5fcf4a69bdd2a2eea81a69dd99bb9c4c21e1d
  Signed-off-by: Supriya <ssharatkumar@nvidia.com>
  Reviewed-on: http://git-master/r/1113464
  (cherry picked from commit d0320eed72c5070c4fcc7564c02fa38599984751)
  Reviewed-on: http://git-master/r/1120429
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Add HAL for GPU characteristics (Sami Kiminki, 2016-04-06)
  Add function pointer for chip specific GPU characteristics init.

  Bug 1637486
  Change-Id: I6ce5eea124d8057393dec6e86e72412cc87e1cfa
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Signed-off-by: Adeel Raza <araza@nvidia.com>
  Reviewed-on: http://git-master/r/780535
  (cherry picked from commit f5c240d6ed19b5b9eedff05767c885ad5812c71e)
  Reviewed-on: http://git-master/r/1120428
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: split address space for fixed allocs (Alex Waterman, 2016-03-25)
  Allow a special address space node to be split out from the user
  address space for fixed allocations. A debugfs node,

    /d/<gpu>/separate_fixed_allocs

  controls this feature. To enable it:

    # echo <SPLIT_ADDR> > /d/<gpu>/separate_fixed_allocs

  where <SPLIT_ADDR> is the address to do the split on in the GVA
  address range. This will cause the split to be made in all subsequent
  address space ranges that get created, until it is turned off. To turn
  this off, just echo 0x0 into the same debugfs node.

  Change-Id: I21a3f051c635a90a6bfa8deae53a54db400876f9
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1030303
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Add support for FECS ctxsw tracing (Anton Vorontsov, 2016-03-23)
  bug 1648908

  This commit adds support for FECS ctxsw tracing. Code is compiled
  conditionally under CONFIG_GK20_CTXSW_TRACE.

  This feature requires an updated FECS ucode that writes one record to
  a ring buffer on each context switch. On the RM/Kernel side, the GPU
  driver reads records from the master ring buffer and generates trace
  entries into a user-facing VM ring buffer. For each record in the
  master ring buffer, RM/Kernel has to retrieve the vmid+pid of the user
  process that submitted the related work.

  Features currently implemented:
  - master ring buffer allocation
  - debugfs to dump master ring buffer
  - FECS record per context switch (with both current and new contexts)
  - dedicated device for ctxsw tracing (access to VM ring buffer)
  - SOF generation (and access to PTIMER)
  - VM ring buffer allocation, and reconfiguration
  - enable/disable tracing at user level
  - event-based trace filtering
  - context_ptr to vmid+pid mapping
  - read system call for ctxsw dev
  - mmap system call for ctxsw dev (direct access to VM ring buffer)
  - poll system call for ctxsw dev
  - save/restore register on ELPG/CG6
  - separate user ring from FECS ring handling

  Features requiring ucode changes:
  - enable/disable tracing at FECS level
  - actual busy time on engine (bug 1642354)
  - master ring buffer threshold interrupt (P1)
  - API for GPU to CPU timestamp conversion (P1)
  - vmid/pid/uid based filtering (P1)

  Change-Id: I8e39c648221ee0fa09d5df8524b03dca83fe24f3
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1022737
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

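  A hedged sketch of the consumer side described above: drain records from
  the FECS (master) ring and republish them to the user-facing VM ring. The
  record layout and the lookup/filter steps are placeholders, not the actual
  FECS record format.

    #include <linux/types.h>

    struct fecs_record_sketch {
            u64 timestamp;
            u32 context_ptr;        /* resolved to vmid+pid by the driver */
            u32 new_context_ptr;
    };

    static void drain_fecs_ring_sketch(const struct fecs_record_sketch *ring,
                                       u32 ring_size, u32 *get, u32 put)
    {
            while (*get != put) {
                    const struct fecs_record_sketch *r = &ring[*get];

                    /* Real code: map r->context_ptr to vmid+pid, apply the
                     * event filter, then copy an entry into the VM ring
                     * buffer that userspace mmap()s. */
                    (void)r;

                    *get = (*get + 1) % ring_size;
            }
    }
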
* gpu: nvgpu: add support to set channel timeslice (Aingara Paramakuru, 2016-03-22)
  As part of improving GPU scheduling, userspace can now set a channel's
  timeslice, within reasonable limits imposed by the kernel driver.

  JIRA VFND-1312
  Bug 1729664
  Change-Id: I4c3430c43437889b8685f12988d4b967bb7877bb
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/1020917
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: improve channel interleave support (Aingara Paramakuru, 2016-03-15)
  Previously, only "high" priority bare channels were interleaved
  between all other bare channels and TSGs. This patch decouples
  priority from interleaving and introduces 3 levels for interleaving a
  bare channel or TSG: high, medium, and low. The levels define the
  number of times a channel or TSG will appear on a runlist (see
  nvgpu.h for details).

  By default, all bare channels and TSGs are set to interleave level
  low. Userspace can then request the interleave level to be increased
  via the CHANNEL_SET_RUNLIST_INTERLEAVE ioctl (TSG-specific ioctl will
  be added later).

  As timeslice settings will soon be coming from userspace, the default
  timeslice for "high" priority channels has been restored.

  JIRA VFND-1302
  Bug 1729664
  Change-Id: I178bc1cecda23f5002fec6d791e6dcaedfa05c0c
  Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-on: http://git-master/r/1014962
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

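  A hedged sketch of the core idea: the interleave level only decides how
  many times an entity is repeated when the runlist is built. The repeat
  counts below are illustrative; the real values are defined by the driver
  and documented in nvgpu.h.

    enum runlist_interleave_sketch {
            INTERLEAVE_LOW_SKETCH,
            INTERLEAVE_MEDIUM_SKETCH,
            INTERLEAVE_HIGH_SKETCH,
    };

    static unsigned int runlist_appearances_sketch(enum runlist_interleave_sketch level)
    {
            switch (level) {
            case INTERLEAVE_HIGH_SKETCH:
                    return 8;       /* appears most often in the runlist */
            case INTERLEAVE_MEDIUM_SKETCH:
                    return 4;
            case INTERLEAVE_LOW_SKETCH:
            default:
                    return 1;       /* the default for all bare channels and TSGs */
            }
    }
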
* gpu: nvgpu: update slcg/blcg prod settings (Seshendra Gadagottu, 2016-03-14)
  Add the following missing prod settings:
  - blcg bus
  - blcg ce
  - slcg ce2
  - slcg chiplet
  - slcg gr

  Bug 1689806
  Change-Id: Ic7c9afdb1fc47ad71ca326384f5d2a4528121abe
  Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: http://git-master/r/1030987
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: LRF, TEX, LTC, DRAM override (Supriya, 2016-02-26)
  Add support for FECS mem overrides.

  Bug 1699676
  Change-Id: I6c9ddcd98d57b29059513ee508c6f92b194c4fc7
  Signed-off-by: Supriya <ssharatkumar@nvidia.com>
  Reviewed-on: http://git-master/r/921253
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: add create_gr_sysfs() function pointer (Adeel Raza, 2016-02-19)
  Add create_gr_sysfs() function pointer for creating gr specific sysfs
  nodes.

  Bug 1699676
  Change-Id: I0a14d3676ebfcd5adebce673e46bdaad8d6aecf7
  Signed-off-by: Adeel Raza <araza@nvidia.com>
  Reviewed-on: http://git-master/r/1008658
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: SM/TEX exception handling support (Adeel Raza, 2016-01-29)
  Add TEX exception handling support. Also make the SM exception handler
  a function pointer, which should allow different chips to implement
  their own SM exception handling routine.

  Bug 1635727
  Bug 1637486
  Change-Id: I429905726c1840c11e83780843d82729495dc6a5
  Signed-off-by: Adeel Raza <araza@nvidia.com>
  Reviewed-on: http://git-master/r/935329

* gpu: nvgpu: fix race condition with poweroff (Seshendra Gadagottu, 2016-01-29)
  When gpu rail-gating is enabled, it is possible that both the rail
  gating code and a system shutdown can start executing
  gk20a_pm_prepare_poweroff() in parallel. To synchronize this
  execution, protect gk20a_pm_prepare_poweroff() with a mutex lock.

  Bug 200168805
  Change-Id: I19536a43ed20c3e82b32c316922dc3e19e3f59bb
  Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: http://git-master/r/999548
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

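  A minimal sketch of the fix, assuming a file-scope lock (the real lock may
  live in the platform data): both callers funnel through the same mutex, so
  only one poweroff sequence runs at a time.

    #include <linux/device.h>
    #include <linux/mutex.h>

    static DEFINE_MUTEX(poweroff_lock_sketch);

    static int pm_prepare_poweroff_sketch(struct device *dev)
    {
            int ret = 0;

            mutex_lock(&poweroff_lock_sketch);
            /* Real code: quiesce engines, save state, gate clocks. */
            mutex_unlock(&poweroff_lock_sketch);

            return ret;
    }
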
* gpu: nvgpu: add support for therm gate ctrl (Seshendra Gadagottu, 2016-01-27)
  During gpu init, therm gate control is required to add delay cycles
  before clock gating.

  Bug 1717152
  Change-Id: Ifabc428cf7b49e49964dc994eba2c38af4aa1a91
  Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: http://git-master/r/936443
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: vgpu: add channel_set_priority support (Richard Zhao, 2016-01-25)
  - add gops.fifo.channel_set_priority and move current code as the
    native callback
  - implement the callback for vgpu

  Bug 1701079
  Change-Id: If1cd13ea4478d11d578da2f682598e0c4522bcaf
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/932829
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: support masking hww_warp_esr (Deepak Nibade, 2016-01-13)
  Add the below API pointer to support masking of hww_warp_esr after the
  hardware read of the register and before using it further:

    u32 (*mask_hww_warp_esr)(u32 hww_warp_esr)

  If needed, this API will mask the value of hww_warp_esr appropriately
  and return it.

  Bug 200156699
  Change-Id: I1afb1347e650fab607009c1ee55691484653a4c1
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/927133
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: support preprocessing of SM exceptions (Deepak Nibade, 2016-01-13)
  Support preprocessing of SM exceptions if the API pointer
  pre_process_sm_exception() is defined. Also, expose some common APIs.

  Bug 200156699
  Change-Id: I1303642c1c4403c520b62efb6fd83e95eaeb519b
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/925883
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: add high priority channel interleave (Peter Pipkorn, 2016-01-11)
  Interleave all high priority channels between all other channels. This
  reduces the latency for high priority work when there is a lot of
  lower priority work present, imposing an upper bound on the latency.

  Change the default high priority timeslice from 5.2 ms to 3.0 ms in
  the process, to prevent long running high priority apps from hogging
  the GPU too much.

  Introduce a new debugfs node to enable/disable high priority channel
  interleaving. It is currently enabled by default.

  Adds a new runlist length max register, used for allocating a suitably
  sized runlist. Limit the number of interleaved channels to 32.

  This change reduces the maximum time a lower priority job is running
  (one timeslice) before we check that high priority jobs are running.

  Tested with gles2_context_priority (still passes). Basic sanity
  testing is done with graphics_submit (one app is high priority). Also
  more functional testing using lots of parallel runs with:

    NVRM_GPU_CHANNEL_PRIORITY=3 ./gles2_expensive_draw --drawsperframe 20000 --triangles 50 --runtime 30 --finish

  plus multiple:

    NVRM_GPU_CHANNEL_PRIORITY=2 ./gles2_expensive_draw --drawsperframe 20000 --triangles 50 --runtime 30 -finish

  Previous to this change, the relative performance between high
  priority work and normal priority work comes down to the timeslice
  value. This means that when there are many low priority channels, the
  high priority work will still drop quite a lot. But with this change,
  the high priority work will roughly get about half the entire GPU
  time, meaning that after the initial lower performance, it is less
  likely to get lower in performance due to more apps running on the
  system.

  This change makes a large step towards real priority levels. It is not
  perfect and there are no guarantees on anything, but it is a step
  forwards without any additional CPU overhead or other complications.
  It will also serve as a baseline to judge other algorithms against.

  Support for priorities with TSG is future work. Support for
  interleaving mid + high priority channels, instead of just high, is
  also future work.

  Bug 1419900
  Change-Id: I0f7d0ce83b6598fe86000577d72e14d312fdad98
  Signed-off-by: Peter Pipkorn <ppipkorn@nvidia.com>
  Reviewed-on: http://git-master/r/805961
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: abstract set sm debug mode (Richard Zhao, 2016-01-10)
  Add a new operation g->ops.gr.set_sm_debug_mode and move the native
  implementation to gr_gk20a.c. This prepares for adding a vgpu
  set_sm_debug_mode hook.

  JIRA VFND-1006
  Bug 1594604
  Change-Id: Ia5ca06a86085a690e70bfa9c62f57ec3830ea933
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/923232
  (cherry picked from commit 032552b54c570952d1e36c08191e9f70b9c59447)
  Reviewed-on: http://git-master/r/835614
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>

* gpu: nvgpu: Control comptagline assignment from kernel (Terje Bergstrom, 2016-01-05)
  On Maxwell, comptaglines are assigned per 128k, but the preferred big
  page size for graphics is 64k. Bit 16 of the GPU VA is used for
  determining which half of the comptagline is used.

  This creates problems if user space wants to map a page multiple times
  and to arbitrary GPU VAs. In one mapping the page might be mapped to
  the lower half of a 128k comptagline, and in another mapping the page
  might be mapped to the upper half.

  Turn on the mode where the MSB of the comptagline in the PTE is used
  instead of bit 16 for determining the comptagline lower/upper half
  selection.

  Bug 1704834
  Change-Id: If87e8f6ac0fc9c5624e80fa1ba2ceeb02781355b
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/924322
  Reviewed-by: Alex Waterman <alexw@nvidia.com>

* gpu: nvgpu: Make access map chip specific (Terje Bergstrom, 2015-12-07)
  Bug 1692373
  Change-Id: Ie3fc3e02fa7b0636da464d6ee1c28da7a4543ec2
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/812353