path: root/drivers/gpu/nvgpu/gk20a/gk20a.h
* gpu: nvgpu: Debugfs support for Railgating stats (Deepak Goyal, 2016-07-18)
  This patch calculates:
  - total time spent by the GPU with rails gated
  - total time spent by the GPU with rails ungated
  - total railgating cycles
  and dumps this information in a debugfs file. This feature requires CONFIG_DEBUG_FS to be enabled.
  Bug 200195100
  Change-Id: I1379f11237ce4900076947e18524caaa3304c7cb
  Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
  Reviewed-on: http://git-master/r/1178308
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
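  For illustration, a minimal sketch of how such gated/ungated time accounting can be accumulated at the rail state transitions; the type, field, and function names below are placeholders, not the driver's actual implementation:

    #include <linux/ktime.h>

    struct railgate_stats {
        ktime_t last_change;      /* time of the last rail state change */
        s64 total_gated_us;       /* total time spent with rails gated */
        s64 total_ungated_us;     /* total time spent with rails ungated */
        unsigned long gating_cnt; /* number of railgating cycles */
    };

    /* Called when the rails are gated: close the "ungated" interval. */
    static void stats_on_railgate(struct railgate_stats *s)
    {
        ktime_t now = ktime_get();

        s->total_ungated_us += ktime_us_delta(now, s->last_change);
        s->last_change = now;
        s->gating_cnt++;
    }

    /* Called when the rails are ungated again: close the "gated" interval. */
    static void stats_on_unrailgate(struct railgate_stats *s)
    {
        ktime_t now = ktime_get();

        s->total_gated_us += ktime_us_delta(now, s->last_change);
        s->last_change = now;
    }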
* gpu: nvgpu: simplify power management (Deepak Nibade, 2016-07-08)
  We currently initialize both the runtime PM and pm_domains frameworks and use pm_domain to control runtime power management of NvGPU. But since the GPU has a separate rail, using pm_domain is not strictly required. Hence remove pm_domain support and use runtime PM only for all power management. This also simplifies the code a lot.
  Initialization in gk20a_pm_init():
  - if railgate_delay is set, set the autosuspend delay of runtime PM
  - try enabling runtime PM
  - if runtime PM is now enabled, keep the GPU railgated
  - if runtime PM is not enabled, keep the GPU unrailgated
  - if can_railgate = false, disable runtime PM and keep the GPU unrailgated
  Set gk20a_pm_ops with the below callbacks for runtime PM:
    static const struct dev_pm_ops gk20a_pm_ops = {
        .runtime_resume = gk20a_pm_runtime_resume,
        .runtime_suspend = gk20a_pm_runtime_suspend,
        .resume = gk20a_pm_resume,
        .suspend = gk20a_pm_suspend,
    };
  Move gk20a_busy() to use runtime checks of pm_runtime_enabled() instead of compile-time checks on CONFIG_PM. Clean up some pm_domain related code. Remove the use of gk20a_pm_enable/disable_clk() since this should already be done in the platform-specific unrailgate()/railgate() APIs. Fix the "railgate_delay" and "railgate_enable" sysfs nodes to use runtime PM calls.
  For VGPU, disable runtime PM during vgpu_pm_init(). With this, we will initialize the vgpu with vgpu_pm_finalize_poweron() upon the first call to gk20a_busy().
  Jira DNVGPU-57
  Change-Id: I6013e33ae9bd28f35c25271af1239942a4fa0919
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1163216
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
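  A condensed sketch of the runtime-PM-only initialization flow described above, using the standard pm_runtime API; the function and parameter names are illustrative, not the actual gk20a_pm_init():

    #include <linux/pm_runtime.h>

    static void example_pm_init(struct device *dev, int railgate_delay,
                                bool can_railgate)
    {
        if (!can_railgate) {
            /* No railgating: disable runtime PM, keep the GPU unrailgated. */
            pm_runtime_disable(dev);
            return;
        }

        if (railgate_delay) {
            /* Map railgate_delay onto the runtime PM autosuspend delay. */
            pm_runtime_set_autosuspend_delay(dev, railgate_delay);
            pm_runtime_use_autosuspend(dev);
        }

        pm_runtime_enable(dev);
        /* Callers such as gk20a_busy() can now test pm_runtime_enabled(dev)
         * at runtime instead of relying on CONFIG_PM at compile time. */
    }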
* gpu: nvgpu: support in-kernel vidmem mappings (Konsta Holtta, 2016-07-06)
  Propagate the buffer aperture flag in gk20a_locked_gmmu_map up so that buffers represented as a mem_desc and present in vidmem can be mapped to gpu.
  JIRA DNVGPU-18
  JIRA DNVGPU-76
  Change-Id: I46cf87e27229123016727339b9349d5e2c835b3e
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/1169308
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: initial support for vidmem apertures (Konsta Holtta, 2016-07-05)
  Add gk20a_aperture_mask() for memory target selection now that buffers can actually be allocated from vidmem, and use it in all cases that have a mem_desc available.
  Jira DNVGPU-76
  Change-Id: I4353cdc6e1e79488f0875581cfaf2a5cfb8c976a
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/1169306
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
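  A minimal sketch of what an aperture-mask helper boils down to: picking the sysmem or vidmem variant of a register field based on where the buffer lives. The types below are simplified stand-ins for mem_desc, not the real gk20a_aperture_mask():

    #include <linux/types.h>

    enum example_aperture { APERTURE_SYSMEM, APERTURE_VIDMEM };

    struct example_mem_desc {
        enum example_aperture aperture; /* where the buffer is allocated */
    };

    static u32 example_aperture_mask(const struct example_mem_desc *mem,
                                     u32 sysmem_mask, u32 vidmem_mask)
    {
        /* Select the HW field value matching the buffer's memory target. */
        return mem->aperture == APERTURE_VIDMEM ? vidmem_mask : sysmem_mask;
    }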
* gpu: nvgpu: Revamp semaphore support (Alex Waterman, 2016-06-28)
  Revamp the support the nvgpu driver has for semaphores.
  The original problem with nvgpu's semaphore support is that it required a SW-based wait for every semaphore release. This was because for every fence that gk20a_channel_semaphore_wait_fd() waited on, a new semaphore was created. This semaphore would then get released by SW when the fence signaled. This meant that for every release there was necessarily a sync_fence_wait_async() call which could block. The latency of this SW wait was enough to cause massive degradation in performance.
  To fix this a fast path was implemented. When a fence that is backed by a GPU semaphore is passed to gk20a_channel_semaphore_wait_fd(), a semaphore acquire is directly used to block the GPU. No longer is a sync_fence_wait_async() performed, nor is an extra semaphore created.
  To implement this fast path the semaphore memory had to be shared between channels. Previously, since a new semaphore was created every time through gk20a_channel_semaphore_wait_fd(), the address space a semaphore was mapped into was irrelevant. However, when using the fast path a semaphore may be released in one address space but acquired in another.
  Sharing the semaphore memory was done by making a fixed GPU mapping in all channels. This mapping points to the semaphore memory (the so-called semaphore sea). This global fixed mapping is read-only to make sure no semaphores can be incremented (i.e. released) by a malicious channel. Each channel then gets a RW mapping of its own semaphore. This way a channel may only acquire other channels' semaphores but may both acquire and release its own semaphore.
  The gk20a fence code was updated to allow introspection of the GPU-backed fences. This allows detection of when the fast path can be taken. If the fast path cannot be used (for example when a fence is sync-pt backed) the original slow path is still present. This gets used when the GPU needs to wait on an event from something which only understands how to use sync-pts.
  Bug 1732449
  JIRA DNVGPU-12
  Change-Id: Ic0fea74994da5819a771deac726bb0d47a33c2de
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1133792
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
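  A highly simplified sketch of the fast-path decision described above; every type and helper here is an illustrative stand-in rather than the driver's real interface:

    #include <linux/types.h>

    struct example_fence {
        bool gpu_semaphore_backed; /* learned via fence introspection */
    };

    /* Stand-ins for pushing a HW semaphore acquire / arming a SW wait. */
    static int example_push_sema_acquire(struct example_fence *f) { return 0; }
    static int example_sw_wait_async(struct example_fence *f) { return 0; }

    static int example_wait_fence(struct example_fence *f)
    {
        if (f->gpu_semaphore_backed)
            /* Fast path: block the GPU with a semaphore acquire;
             * no sync_fence_wait_async() and no extra semaphore. */
            return example_push_sema_acquire(f);

        /* Slow path: e.g. sync-pt backed fences still need a SW wait. */
        return example_sw_wait_async(f);
    }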
* gpu: nvgpu: add init_preemption_state gr method (Thomas Fleury, 2016-06-28)
  This method is called when setting up gr hardware. It is meant to adjust preemption parameters.
  Bug 1593548
  Jira VFND-1894
  Change-Id: I0f5aa3212bec3058a0493366bed6fe2a365c9542
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1162625
  (cherry picked from commit c2e6d12570af28b3aae087401d7f670df40d40bd)
  Reviewed-on: http://git-master/r/1166987
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add QoS notifier for common clk framework (Deepak Nibade, 2016-06-27)
  Define a specific QoS notifier for the common clk framework and protect it with CONFIG_COMMON_CLK. This new API will first get min/max requirements from pm_qos and set min/max freq values in devfreq. A call to update_devfreq() will then ensure that the new estimated frequency is clipped appropriately between the min and max values. This also ensures that the frequency is set along with all the book-keeping.
  Add the below platform-specific notifier callback and use it with pm_qos_add_notifier():
    int (*qos_notify)()
  Register the callback only if qos_notify is set. We currently support only one qos_id, which is treated as the notifier for min frequency. Remove the dependency on qos_id, and use appropriate QoS APIs like pm_qos_read_min/max_bound(). Store devfreq's min/max frequency in struct gk20a for reference.
  Bug 1772462
  Change-Id: I63d6d17451d19c9d376b67df7db775b38929287d
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1161161
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
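  The clipping step amounts to a simple clamp between the QoS bounds; a sketch with placeholder names (the real book-keeping is done by update_devfreq() itself):

    static unsigned long example_clip_freq(unsigned long requested,
                                           unsigned long qos_min,
                                           unsigned long qos_max)
    {
        /* Keep the estimated frequency within the PM QoS min/max bounds. */
        if (requested < qos_min)
            return qos_min;
        if (requested > qos_max)
            return qos_max;
        return requested;
    }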
* gpu: nvgpu: vgpu: Add CE engine to engine list (Terje Bergstrom, 2016-06-24)
  Add CE engine to vgpu engine list. CE engine is defined differently for different GPUs, so we also add HAL for initializing the engine info.
  Bug 1780185
  Change-Id: I5ae265551feac08d0c4d45402dd3277514e62b2d
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1169720
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Tested-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Lakshmanan M <lm@nvidia.com>
* gpu: nvgpu: update get_netlist_name ops declaration (Mahantesh Kumbar, 2016-06-21)
  - Update the get_netlist_name ops declaration to support loading GPU FW based on the GPU architecture.
  - The "GAxxx" string is used to get the size for "gm204/" or "gm206/", which will be added to the NETIMAGE path, e.g. "gm204/NETC_img.bin".
  Change-Id: I5bfa13df014533a885c4328d3c767e51c29f9255
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: http://git-master/r/1166783
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add read_ptimer to gops (Richard Zhao, 2016-06-16)
  Move all places that read the ptimer to use the callback. This prepares for adding a vgpu implementation of read_ptimer.
  Bug 1395833
  Change-Id: Ia339f2f08d75ca4969a443fffc9a61cff1d3d2b7
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1159587
  (cherry picked from commit a01f804684f875c9cffc31eb2c1038f2f29ec66f)
  Reviewed-on: http://git-master/r/1158449
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Fix gk20a_busy() in debug dump (Terje Bergstrom, 2016-06-14)
  When debug dump is called from an interrupt thread, we do not want to call gk20a_busy(), because it causes a race in case railgating is being engaged at the same time. It has to be called from all debugfs paths instead.
  Bug 200198908
  Bug 1770522
  Change-Id: I7eda7d029b0a59cce0320ecc1b750dc2f4d7ccf0
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1163440
  GVS: Gerrit_Virtual_Submit
  Tested-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
* Revert "gpu: nvgpu: take power refcount in ISR" (Terje Bergstrom, 2016-06-09)
  This reverts commit 2219f38727ffa17291e15c1898bd3e65f43d09fd. It leaves the GPU in the on state for some tests that require powering down the GPU.
  Change-Id: I79d44fed729e98692021c57bbeff6a0ef2e8c983
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1161846
* gpu: nvgpu: detect vidmem configuration from HW (Konsta Holtta, 2016-06-08)
  Read video memory size from hardware during initialization for devices that support it.
  JIRA DNVGPU-14
  Change-Id: If190f2d89f7148520ee274ca674f972987c8056d
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/1157215
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: take power refcount in ISR (Deepak Nibade, 2016-06-08)
  We sometimes see race conditions where the power refcount is zero during the ISR or bottom half. If the bottom half calls gk20a_busy(), it will lead to boot-up of the GPU, but it is also possible that we are already trying to power off the GPU since the power refcount is zero.
  Fix this by taking a power refcount with gk20a_busy_noresume() in the ISR and then dropping this refcount at the end of the bottom half. Add a new API gk20a_idle_nosuspend() to drop a refcount without initiating suspend.
  Bug 200198908
  Bug 1770522
  Change-Id: Iec3d4dc8d468f49b71919d2bbc327da48b97bcab
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1160035
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
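  The refcount handling can be sketched with the stock runtime PM helpers as below; gk20a_busy_noresume()/gk20a_idle_nosuspend() wrap more than this, so treat it only as an illustration of the pattern:

    #include <linux/interrupt.h>
    #include <linux/pm_runtime.h>

    /* Top half: pin the power state without triggering a resume. */
    static irqreturn_t example_isr(int irq, void *dev_id)
    {
        struct device *dev = dev_id;

        pm_runtime_get_noresume(dev);
        return IRQ_WAKE_THREAD;
    }

    /* Threaded bottom half: handle the interrupt, then drop the pin
     * without initiating a suspend. */
    static irqreturn_t example_isr_thread(int irq, void *dev_id)
    {
        struct device *dev = dev_id;

        /* ... service stalling/non-stalling interrupts here ... */

        pm_runtime_put_noidle(dev);
        return IRQ_HANDLED;
    }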
* gpu: nvgpu: Add multiple engine and runlist support (Lakshmanan M, 2016-06-07)
  This CL covers the following modifications:
  1) Added multiple engine_info support
  2) Added multiple runlist_info support
  3) Initial changes for ASYNC CE support
  4) Added ASYNC CE interrupt handling support for the gm206 GPU family
  5) Added a generic mechanism to identify the CE engine pri_base address for gm206 (CE0, CE1 and CE2)
  6) Removed the hard-coded engine_id logic and made it generic
  7) Code cleanup for readability
  JIRA DNVGPU-26
  Change-Id: I2c3846c40bcc8d10c2dfb225caa4105fc9123b65
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: http://git-master/r/1155963
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: WPR & PMU interface update (Mahantesh Kumbar, 2016-06-04)
  Update WPR interface & PMU interface to support latest ACR/PMU ucode versions.
  Change-Id: I4d1bd7a5c43751e96c1db58832cd316006d56954
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: http://git-master/r/1158070
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: update HAL of ACR BL (Mahantesh Kumbar, 2016-06-01)
  Update the HAL of the ACR BL so it can support gm204/gm206, and move the DMATRFBASE method to global code.
  JIRA DNVGPU-10
  Change-Id: I56fc7ce040dadb6473f6f375ee6ce90783a046ad
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: http://git-master/r/1154954
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: add tsg set timeslice support (Richard Zhao, 2016-05-31)
  Bug 1702773
  JIRA VFND-1496
  Change-Id: Ice570df78d974fa59f2a932caf0e6249b13493a1
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1144929
  (cherry picked from commit 8b6ec996f3773e497a040a8fe4148e01e8dc35fa)
  Reviewed-on: http://git-master/r/1150705
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add tsg support for vgpu (Richard Zhao, 2016-05-31)
  - make tsg_gk20a.c call the HAL for enabling/disabling channels
  - add preempt_tsg HAL callbacks
  - add tsg bind/unbind channel HAL callbacks
  - add the corresponding tsg callbacks for vgpu
  Bug 1702773
  JIRA VFND-1003
  Change-Id: I2cba74b3ebd3920ef09219a168e6433d9574dbe8
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1144932
  (cherry picked from commit c3787de7d38651d46969348f5acae2ba86b31ec7)
  Reviewed-on: http://git-master/r/1126942
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add device_info_data support (Lakshmanan M, 2016-05-27)
  Added device_info_data parsing support for the maxwell GPU series.
  JIRA DNVGPU-26
  Change-Id: I06dbec6056d4c26501e607c2c3d67ef468d206f4
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: http://git-master/r/1151602
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: move & rename acr_gm20b to acr_desc (Mahantesh Kumbar, 2016-05-26)
  acr_gm20b is renamed to acr_desc to support multiple gpu chips.
  JIRA DNVGPU-10
  Change-Id: Ib3b38d5845043f026ddc365a682b7bb454463326
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: http://git-master/r/1152401
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: secure boot HAL update (Mahantesh Kumbar, 2016-05-26)
  Updated/added the secure boot HAL with methods required to support multiple GPU chips.
  JIRA DNVGPU-10
  Change-Id: I343b289f2236fd6a6b0ecf9115367ce19990e7d5
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: http://git-master/r/1151784
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Move PCI devnodes to own directory (Terje Bergstrom, 2016-05-25)
  To be able to scan, PCI devnodes need to be in a directory with read permission. By default /dev is read protected by SELinux policy. Move the devnodes to their own directory so that reading this one directory can be allowed. At the same time rename the nodes to start with the string "card-".
  JIRA DNVGPU-54
  Change-Id: I0df4ced08afd1f3a468e983d07395ffcb8050365
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1152745
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Add support for gm204 and gm206 (Terje Bergstrom, 2016-05-23)
  Add support for chips gm204 and gm206. Also adds support for reading VBIOS and booting devinit and pre-os images on PMU.
  Change-Id: I4824b44245611e5379ace62793cc37158048f432
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1120467
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Ken Adams <kadams@nvidia.com>
* gpu: nvgpu: update trace for sched params (Thomas Fleury, 2016-05-21)
  Use an inline function instead of a macro to "expand" all channel parameters.
  Jira EVLR-244
  Jira EVLR-318
  Change-Id: I4e8c5ee6bc9da36564af171be809f50dd2dfd439
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1150050
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add HAL op for PMU reset (Terje Bergstrom, 2016-05-20)
  The sequence to reset PMU is different for iGPU and dGPU. Specialize and implement the iGPU version.
  Change-Id: I5b9ff2c018a736bc9e27b90d0942c52706b12a12
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1150540
* gpu: nvgpu: hwpm broadcast register support (Peter Daifuku, 2016-05-19)
  Add support for hwpm broadcast registers (ltc and lts).
  In gr_gk20a_find_priv_offset_in_buffer, replace the "Unknown address type" error with an informational message: gr_gk20a_exec_ctx_ops calls gk20a_get_ctx_buffer_offsets and, if that fails, calls gr_gk20a_get_pm_ctx_buffer_offsets; HWPM registers will fail the first call, so an error or warning is overkill.
  Bug 1648200
  Change-Id: I197b82579e9894652add4ff254418f818981415a
  Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
  Reviewed-on: http://git-master/r/1131365
  (cherry picked from commit 9f30a92c5d87f6dadd34cc37396a6b10e3a72751)
  Reviewed-on: http://git-master/r/1133628
  (cherry picked from commit 7eb7cfd998852ba7f7c4c40d3db286f66e83ab3a)
  Reviewed-on: http://git-master/r/1127749
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Read all fields of device_info (Terje Bergstrom, 2016-05-18)
  We were not using the engine_type field in device info, and the code did not handle chained entries properly. The code assumed that the first entry is for graphics and the second for CE, which is not always true.
  Improve the code to go through all entries of device_info, and preserve values across entries until we reach the last entry. Only the last entry triggers a write to fifo engine info. There can also be multiple engines with the same type, so accumulate interrupts and reset ids from all of them.
  As the code got fixed, it now reads the engine enum correctly from hardware. We used to compare that against CE0, but we should compare against CE2. gk20a_fifo_reset_engine() uses wrong constants: it is passed an internal numbering of engines, but it compares them against the hardware engine enum.
  Change-Id: Ia59273921c602d2a090f7a5b1404afb0fca2532c
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1147746
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
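  The accumulate-until-last-entry idea can be sketched as below; the entry layout is a simplified stand-in for the real device_info registers, not the actual parsing code:

    #include <linux/types.h>

    struct example_dev_info_entry {
        bool last;          /* set on the final entry of a chain */
        u32 engine_type;
        u32 intr_mask;
        u32 reset_mask;
    };

    /* Stand-in for writing the accumulated values to fifo engine info. */
    static void example_commit_engine(u32 type, u32 intr, u32 reset) { }

    static void example_parse_device_info(const struct example_dev_info_entry *e,
                                          int n)
    {
        u32 intr = 0, reset = 0;
        int i;

        for (i = 0; i < n; i++) {
            /* Preserve/accumulate fields across chained entries... */
            intr |= e[i].intr_mask;
            reset |= e[i].reset_mask;

            /* ...and only the last entry triggers the write. */
            if (e[i].last) {
                example_commit_engine(e[i].engine_type, intr, reset);
                intr = 0;
                reset = 0;
            }
        }
    }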
* gpu: nvgpu: Fix CWD floorsweep programming (Terje Bergstrom, 2016-05-16)
  Program the CWD TPC and SM registers correctly. The old code did not work when there are more than 4 TPCs. Refactor init_fs_mask to reduce code duplication.
  Change-Id: Id93c1f8df24f1b7ee60314c3204e288b91951a88
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1143697
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
* gpu: nvgpu: Do not program max ways evict (Terje Bergstrom, 2016-05-13)
  Setting max_ways_evict reserves some of L2 for CB. In gk20a the CB is in dedicated RAM, so we don't need to reserve space for it. The code gets invoked only on gk20a.
  Change-Id: Ib8efec8c5e90c135bd0c10bb1eaa3f797ec68698
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1144993
* gpu: nvgpu: refactor gk20a_mem_{wr,rd} for vidmem (Konsta Holtta, 2016-05-13)
  To support vidmem, pass g and mem_desc to the buffer memory accessor functions. This allows the functions to select the memory access method based on the buffer aperture instead of using the cpu pointer directly (like until now). The selection and aperture support will be in another patch; this patch only refactors these accessors, but keeps the underlying functionality as-is.
  gk20a_mem_{rd,wr}32() work as previously; add also gk20a_mem_{rd,wr}() for byte-indexed accesses, gk20a_mem_{rd,wr}_n() for memcpy()-like functionality, and gk20a_memset() for filling buffers with a constant. The 8 and 16 bit accessor functions are removed. vmap()/vunmap() pairs are abstracted to gk20a_mem_{begin,end}() to support other types of mappings or conditions where mapping the buffer is unnecessary or different.
  Several function arguments that would access these buffers are also changed to take a mem_desc instead of a plain cpu pointer. Some relevant occasions are changed to use the accessor functions instead of cpu pointers without them (e.g., memcpying to and from), but the majority of direct accesses will be adjusted later, when the buffers are moved to support vidmem.
  JIRA DNVGPU-23
  Change-Id: I3dd22e14290c4ab742d42e2dd327ebeb5cd3f25a
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/1121143
  Reviewed-by: Ken Adams <kadams@nvidia.com>
  Tested-by: Ken Adams <kadams@nvidia.com>
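  In spirit, the accessors now dispatch on the buffer's aperture instead of assuming a CPU pointer. A simplified sketch, where the memory layout and the vidmem path are assumptions rather than the real mem_desc and gk20a_mem_rd32():

    #include <linux/types.h>

    struct example_mem {
        bool in_vidmem;   /* aperture flag */
        u32 *cpu_va;      /* valid only for CPU-mapped sysmem buffers */
    };

    /* Stand-in for a vidmem access path (e.g. via a BAR/PRAMIN window). */
    static u32 example_vidmem_rd32(struct example_mem *mem, u32 w) { return 0; }

    static u32 example_mem_rd32(struct example_mem *mem, u32 w)
    {
        if (mem->in_vidmem)
            return example_vidmem_rd32(mem, w);

        /* Sysmem: plain CPU access through the kernel mapping. */
        return mem->cpu_va[w];
    }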
* gpu: nvgpu: add fuse overrides for tpc disabling (Richard Zhao, 2016-05-10)
  - add fuse_override in gops; implement it starting from gm20b
  - set the cwd fs register, so cuda won't use disabled TPCs
  Bug 1757262
  Bug 200169697
  Change-Id: If7bac58bd3a6bcf2925197ea5b7c2d10a77e0933
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  (cherry picked from commit 66cde7724815e9e5e85ab9b07fc985a78530222f)
  Reviewed-on: http://git-master/r/1132177
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: Adeel Raza <araza@nvidia.com>
  Tested-by: Adeel Raza <araza@nvidia.com>
* gpu: nvgpu: add supported preemptions to gpu characteristics (Deepak Nibade, 2016-05-09)
  Add the below flag fields to gpu characteristics to indicate supported and default preemption modes on the platform for graphics and compute:
    __u32 graphics_preemption_mode_flags;
    __u32 compute_preemption_mode_flags;
    __u32 default_graphics_preempt_mode;
    __u32 default_compute_preempt_mode;
  Add struct nvgpu_preemption_modes_rec to struct gr_gk20a to store these values locally. Use the platform-specific get_preemption_mode_flags() to get the flags and define gk20a/gm20b specific get_preemption_mode_flags() APIs.
  Bug 1646259
  Change-Id: I80193c0d988dc93bd96585f9aa631fd817f4dfa3
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1133595
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: separate IOCTL to set preemption mode (Deepak Nibade, 2016-05-09)
  Add a separate IOCTL NVGPU_IOCTL_CHANNEL_SET_PREEMPTION_MODE to allow setting preemption modes from the UMD. Define the preemption modes in nvgpu.h and use them everywhere. Remove the mode definitions from mm_gk20a.h.
  Also, we currently support setting only one preemption mode in a channel. But it is possible to have multiple preemption modes (one from graphics and one from compute) set simultaneously. Hence, update struct gr_ctx_desc to include two separate preemption modes (graphics_preempt_mode and compute_preempt_mode). The API NVGPU_IOCTL_CHANNEL_SET_PREEMPTION_MODE also supports setting two separate preemption modes, i.e. one for graphics and one for compute. Make the necessary changes in code to support two preemption modes.
  Bug 1646259
  Change-Id: Ia1dea19e609ba8cc0de2f39ab6c0c4cd6b0a752c
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1131805
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* nvgpu: vgpu: create fifo.force_reset_ch in gpu_ops (Haley Teng, 2016-05-09)
  gk20a_fifo_force_reset_ch() does not support vgpu now, so we need to create a function pointer in gpu_ops and assign it differently for vgpu and non-vgpu.
  Bug 200184349
  Change-Id: I5f8f4f731b4b970c4ff8de65531f25568e7691b6
  Signed-off-by: Haley Teng <hteng@nvidia.com>
  Reviewed-on: http://git-master/r/1130420
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add trace and debugfs for sched params (Thomas Fleury, 2016-05-05)
  JIRA EVLR-244
  JIRA EVLR-318
  Change-Id: Ie95f42212dadcf2d0c1737eeb28812afb03b712f
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1120603
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Ken Adams <kadams@nvidia.com>
* gpu: nvgpu: Add PCIe device support (Terje Bergstrom, 2016-04-27)
  Add support for probing PCIe graphics cards.
  JIRA DNVGPU-7
  Change-Id: Iad3d31a1dc0ca6575d8a9916857022cac9181948
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1127684
* gpu: nvgpu: Idle GR before calling PMU ZBC save (Terje Bergstrom, 2016-04-26)
  On gk20a, when PMU is updating ZBC colors it is reading them from L2. But L2 has one port, and ZBC reads can race with other transactions. Idle graphics before sending PMU the ZBC_UPDATE request.
  Also makes pmu_save_zbc a HAL, because PMU ucode has changes to bypass this problem on some chips.
  Bug 1746047
  Change-Id: Id8fcd6850af7ef1d8f0a6aafa0fe6b4f88b5f2d9
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1129017
* gpu: nvgpu: IOCTL to suspend/resume context (Deepak Nibade, 2016-04-19)
  Add the below IOCTL to suspend/resume a context:
  NVGPU_DBG_GPU_IOCTL_SUSPEND_RESUME_CONTEXTS
  Suspend sequence:
  - disable ctxsw
  - loop through the list of channels
    - if the channel is ctx resident, suspend all SMs
    - otherwise, disable the channel/TSG
  - enable ctxsw
  Resume sequence:
  - disable ctxsw
  - loop through the list of channels
    - if the channel is ctx resident, resume all SMs
    - otherwise, enable the channel/TSG
  - enable ctxsw
  Bug 200156699
  Change-Id: Iacf1bf7877b67ddf87cc6891c37c758a4644b014
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1120332
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
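  The suspend half of the sequence in the commit above, sketched with stand-in types and helpers (the real code operates on the driver's channel and TSG structures):

    #include <linux/list.h>

    struct example_channel {
        struct list_head node;
        bool ctx_resident;
    };

    /* Stand-ins for the ctxsw/SM/channel controls used by the IOCTL. */
    static int example_disable_ctxsw(void) { return 0; }
    static int example_enable_ctxsw(void) { return 0; }
    static void example_suspend_all_sms(struct example_channel *ch) { }
    static void example_disable_channel(struct example_channel *ch) { }

    static int example_suspend_contexts(struct list_head *channels)
    {
        struct example_channel *ch;
        int err = example_disable_ctxsw();

        if (err)
            return err;

        list_for_each_entry(ch, channels, node) {
            if (ch->ctx_resident)
                example_suspend_all_sms(ch);
            else
                example_disable_channel(ch);
        }

        return example_enable_ctxsw();
    }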
* gpu: nvgpu: support binding multiple channels to a debug session (Deepak Nibade, 2016-04-19)
  We currently bind only one channel to a debug session, but some use cases might need multiple channels bound to the same debug session. Add this support by adding a list of channels to the debug session. The list structure is implemented as struct dbg_session_channel_data.
  The list node dbg_s_list_node is currently defined in struct dbg_session_gk20a. But this is inefficient when we need to add a debug session to multiple channels. Hence add a new reference structure dbg_session_data to store the dbg_session pointer and list entry.
  For each NVGPU_DBG_GPU_IOCTL_BIND_CHANNEL call, create two reference structures: dbg_session_channel_data for the channel and dbg_session_data for the debug session, and bind them together.
  Define API nvgpu_dbg_gpu_get_session_channel() which will get the first channel in the list of a debug session. Use this API wherever we refer to the channel bound to a debug session.
  Remove the dbg_sessions define in struct gk20a since it is not being used anywhere. Add a new API NVGPU_DBG_GPU_IOCTL_UNBIND_CHANNEL to support unbinding of a channel from a debug session.
  Bug 200156699
  Change-Id: I3bfa6f9cd5b90e7254a75c7e64ac893739776b7f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1120331
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: IOCTL to write/clear SM error states (Deepak Nibade, 2016-04-19)
  Add the below IOCTLs to write/clear SM error states:
  NVGPU_DBG_GPU_IOCTL_CLEAR_SINGLE_SM_ERROR_STATE
  NVGPU_DBG_GPU_IOCTL_WRITE_SINGLE_SM_ERROR_STATE
  Bug 200156699
  Change-Id: I89e3ec51c33b8e131a67d28807d5acf57b3a48fd
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1120330
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: support storing/reading single SM error state (Deepak Nibade, 2016-04-19)
  Add support to store the error state of a single SM before preprocessing the SM exception. The error state is stored as:
    struct nvgpu_dbg_gpu_sm_error_state_record {
        u32 hww_global_esr;
        u32 hww_warp_esr;
        u64 hww_warp_esr_pc;
        u32 hww_global_esr_report_mask;
        u32 hww_warp_esr_report_mask;
    };
  Note that we can safely append new fields to the above structure in the future if required. Also, add IOCTL NVGPU_DBG_GPU_IOCTL_READ_SINGLE_SM_ERROR_STATE to support reading an SM's error state by user space.
  Bug 200156699
  Change-Id: I9a62cb01e8a35c720b52d5d202986347706c7308
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1120329
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Wait for BAR1 bind (Terje Bergstrom, 2016-04-15)
  Wait for the BAR1 bind to complete before continuing. The register to wait on exists from Maxwell onwards.
  Change-Id: Ie3736033fdb748c5da8d7a6085ad6d63acaf41f5
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1123941
* gpu: nvgpu: Add litter values HAL (Terje Bergstrom, 2016-04-15)
  Move per-chip constants to be returned by a chip-specific function. Implement get_litter_value() for each chip.
  Change-Id: I2a2730fce14010924d2507f6fa15cc2ea0795113
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1121383
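  The shape of such a HAL is simply a per-chip lookup; a sketch with placeholder keys and values, not the real litter constants:

    enum example_litter {
        EXAMPLE_LITTER_NUM_GPCS,
        EXAMPLE_LITTER_NUM_PES_PER_GPC,
    };

    /* One implementation per chip; the values below are placeholders. */
    static int example_gm20b_get_litter_value(enum example_litter key)
    {
        switch (key) {
        case EXAMPLE_LITTER_NUM_GPCS:
            return 1;
        case EXAMPLE_LITTER_NUM_PES_PER_GPC:
            return 1;
        default:
            return -1; /* unknown key */
        }
    }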
* gpu: nvgpu: make tpc_fs_mask work on production board (Richard Zhao, 2016-04-14)
  On production fused boards, use gr_fe_tpc_fs_r() to mask TPCs rather than fuses.
  Bug 1734150
  Change-Id: I7b4eb428f1ad0cf841a57214e0c8c1e8f17b2c5a
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1111630
  (cherry picked from commit 869ea54967812e03d9f1e69775ca56fd6459216c)
  Reviewed-on: http://git-master/r/1122121
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* debugfs: Pass bool pointer to debugfs_create_bool() (Alex Van Brunt, 2016-04-14)
  Port the change 621a5f7ad9cd1ce7933f1d302067cbd58354173c from kernel.org to the nvgpu driver.
  Bug 200187033
  Change-Id: I7d742f614161d9d4ed59c4216d7c730d57ef4116
  Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1118397
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
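  Usage after the change looks roughly like the sketch below (the node name and flag are made up for the example); the point is that the value argument is a bool pointer rather than the old u32 pointer:

    #include <linux/debugfs.h>

    static bool example_enable;

    static void example_create_debugfs_node(struct dentry *parent)
    {
        /* With the ported change, debugfs_create_bool() takes a bool *. */
        debugfs_create_bool("example_enable", 0644, parent, &example_enable);
    }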
* gpu: nvgpu: Support GPUs with only one interrupt line (Terje Bergstrom, 2016-04-13)
  Not all GPUs have both stalling and non-stalling interrupts. Support ones with just one interrupt line.
  Change-Id: I0f1e8faa5b353b8d1b10691375bd853152379a3a
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1120470
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Alex Waterman <alexw@nvidia.com>
* gpu: nvgpu: vgpu: add fecs trace support (Richard Zhao, 2016-04-11)
  Bug 1648908
  Change-Id: I7901e7bce5f7aa124a188101dd0736241d87bd53
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1031861
  Reviewed-on: http://git-master/r/1121261
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: virtualized SMPC/HWPM ctx switch (Peter Daifuku, 2016-04-08)
  Add support for SMPC and HWPM context switching when virtualized.
  Bug 1648200
  JIRASW EVLR-219
  JIRASW EVLR-253
  Change-Id: I80a1613eaad87d8510f00d9aef001400d642ecdf
  Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
  Reviewed-on: http://git-master/r/1122034
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Use device instead of platform_device (Terje Bergstrom, 2016-04-08)
  Use struct device instead of struct platform_device wherever possible. This allows adding other bus types later.
  Change-Id: I1657287a68d85a542cdbdd8a00d1902c3d6e00ed
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1120466