path: root/drivers/gpu/nvgpu/gk20a
Commit message (Author, Age)
* gpu: sysfs mode for allowing access to registers (sujeet baranwal, 2015-03-18)
  Through this sysfs entry, the register space becomes accessible. It is
  accessible by root only.

  Bug 1523403

  Change-Id: Ia46f130a0cfd8324c5b675d19e7cbfba9dcb17ca
  Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
  Reviewed-on: http://git-master/r/454198
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

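The shape of such a root-only node is roughly the following. This is a minimal sketch, not the commit's code; the attribute name, handlers, and flag are hypothetical.

```c
#include <linux/device.h>
#include <linux/sysfs.h>
#include <linux/kernel.h>

/* Hypothetical flag standing in for the driver state toggled by the node. */
static bool allow_register_access;

static ssize_t allow_all_show(struct device *dev,
			      struct device_attribute *attr, char *buf)
{
	return sprintf(buf, "%d\n", allow_register_access ? 1 : 0);
}

static ssize_t allow_all_store(struct device *dev,
			       struct device_attribute *attr,
			       const char *buf, size_t count)
{
	unsigned long val;

	if (kstrtoul(buf, 10, &val))
		return -EINVAL;
	allow_register_access = !!val;
	return count;
}

/* 0600: readable/writable by root only, matching the commit's intent. */
static DEVICE_ATTR(allow_all, 0600, allow_all_show, allow_all_store);

/* Called from the driver's probe path (device pointer is assumed). */
static int create_allow_all_node(struct device *dev)
{
	return device_create_file(dev, &dev_attr_allow_all);
}
```
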
* gpu: nvgpu: cde: Allow large surfaces (Arto Merilainen, 2015-03-18)
  Currently the cde swizzling application enforces an upper limit on the
  surface size. The limitation is artificial (i.e. nothing prevents the
  shader from handling larger surfaces). Therefore, make the error
  condition a warning.

  Change-Id: I8f0cda7f2e9e9ecc90589e5a4b4091abcb513482
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/454591
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: gk20a: cde: Add base_post_divide param (Arto Merilainen, 2015-03-18)
  This patch adds a parameter to communicate the compression bit backing
  store address we write to the hardware.

  Change-Id: Ibc0e3d8304e893ddf15b4e03b405c7d85a73e95b
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/454510
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: cde: Allow passing shader parameters (Arto Merilainen, 2015-03-18)
  This patch adds support to pass shader parameters through debugfs.
  These parameters are required to change the shader behaviour without
  reloading the firmware image.

  Change-Id: Ib0ff773d9425aa9fcc58655717cccafcfbaf7bfd
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/453462
  Reviewed-by: Jussi Rasanen <jrasanen@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Jussi Rasanen <jrasanen@nvidia.com>

* gpu: nvgpu: Dynamic compbit store size in asim (Terje Bergstrom, 2015-03-18)
  We have hardcoded the compbit backing store to cover 1MB of memory in
  ASIM. Remove that hard coding and use the total memory size instead.

  Change-Id: Ibb5c6ae88015960fa360ddd5f7bba05949d4da7b
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/450313

* gpu: nvgpu: Query ctx image size only once (Terje Bergstrom, 2015-03-18)
  Newer netlist does not require image size queries to boot. Save 2ms
  from GPU boot time by skipping it if we know the sizes.

  Bug 1435870

  Change-Id: Ie1b13c8a6e420adf06e635bde8b469385e1d5c60
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/419873

* gpu: nvgpu: Add boost once GPU is initialized (Terje Bergstrom, 2015-03-18)
  Workaround for GPU hang if boost turns GPU on before it is initialized.

  Bug 1435870

  Change-Id: I07d0617049612344ca7c494da8cb8d75789984e5
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/453375

* gpu: nvgpu: remove redundant lock (Deepak Nibade, 2015-03-18)
  "isr_enable_lock" was used to protect the pmu's isr_enabled flag and
  the pmu enable/disable calls. Instead of this extra lock, we can reuse
  "isr_mutex" for this purpose.

  Bug 200014542
  Bug 200014887

  Change-Id: Ifbb7d6108effc132266a20517820e470d52a7110
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/453348
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: add channel enable/disable ioctls (Deepak Nibade, 2015-03-18)
  Add below ioctls for channels:
  1. NVHOST_IOCTL_CHANNEL_ENABLE
     To enable the channel
  2. NVHOST_IOCTL_CHANNEL_DISABLE
     To disable the channel
  3. NVHOST_IOCTL_CHANNEL_PREEMPT
     To preempt the channel (Not supported for a channel in TSG)

  Bug 1514064

  Change-Id: Ie9315f9742bb27efb22f993799c51a1ecda91756
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/449229
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

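A minimal sketch of how per-channel ioctls like the ones above are typically dispatched. The command numbers, struct fields, and helper bodies are hypothetical, not the nvhost uapi definitions.

```c
#include <linux/fs.h>
#include <linux/ioctl.h>
#include <linux/errno.h>
#include <linux/types.h>

/* Hypothetical command numbers; the real NVHOST_IOCTL_CHANNEL_* values
 * live in the nvhost uapi header and are not reproduced here. */
#define CH_IOCTL_MAGIC		'H'
#define CH_IOCTL_ENABLE		_IO(CH_IOCTL_MAGIC, 0x30)
#define CH_IOCTL_DISABLE	_IO(CH_IOCTL_MAGIC, 0x31)
#define CH_IOCTL_PREEMPT	_IO(CH_IOCTL_MAGIC, 0x32)

struct channel {
	bool in_tsg;	/* a channel bound to a TSG cannot be preempted here */
};

static int channel_set_enabled(struct channel *ch, bool enable)
{
	/* the real implementation toggles the channel's enable bit in HW */
	return 0;
}

static int channel_preempt(struct channel *ch)
{
	if (ch->in_tsg)
		return -EINVAL;
	/* the real implementation triggers a preempt and waits for it */
	return 0;
}

static long channel_ioctl(struct file *filp, unsigned int cmd,
			  unsigned long arg)
{
	struct channel *ch = filp->private_data;

	switch (cmd) {
	case CH_IOCTL_ENABLE:
		return channel_set_enabled(ch, true);
	case CH_IOCTL_DISABLE:
		return channel_set_enabled(ch, false);
	case CH_IOCTL_PREEMPT:
		return channel_preempt(ch);
	default:
		return -ENOTTY;
	}
}
```
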
* gpu: nvgpu: Fix semaphore refcounting (Arto Merilainen, 2015-03-18)
  This patch fixes a refcounting issue in semaphore handling.

  Change-Id: I03327c60ed6923a90663f0b845566e81af4b94d4
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/453056
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: verify runnable channel count in TSG (Deepak Nibade, 2015-03-18)
  In the runlist we first write the channel count into the TSG entry and
  then follow it with that many channel entries. If the number of
  channel entries does not match the count, it is considered an error.

  To detect this, add a counter while adding channel entries and warn if
  the channel count does not match this counter.

  Bug 1470692

  Change-Id: I4bbfd9b696fbfafa25dffb27979373f057a7f35a
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/449228
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

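A minimal sketch of that counting check, with assumed structure and field names; the real runlist construction emits hardware-format entries where the comment sits.

```c
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/bug.h>

/* Simplified stand-ins for the runlist construction state (assumed names). */
struct tsg {
	int num_runnable_channels;	/* count written into the TSG header */
	struct list_head ch_list;	/* channels bound to this TSG */
};

struct tsg_channel {
	struct list_head ch_entry;
	bool runnable;
};

/* Returns the number of runnable-channel entries emitted, warning when it
 * does not match the count already written into the TSG runlist header. */
static int append_tsg_channel_entries(struct tsg *tsg)
{
	struct tsg_channel *ch;
	int emitted = 0;

	list_for_each_entry(ch, &tsg->ch_list, ch_entry) {
		if (!ch->runnable)
			continue;
		/* ... write one runlist entry for this channel here ... */
		emitted++;
	}

	WARN_ON(emitted != tsg->num_runnable_channels);
	return emitted;
}
```
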
* gpu: nvgpu: do not touch runlist during recovery (Deepak Nibade, 2015-03-18)
  Currently we clear the runlist and re-create it in scheduled work
  during the fifo recovery process. But we can postpone this runlist
  re-generation to a later time, i.e. when the channel is closed.

  Hence, remove the runlist locks and re-generation from the
  handle_mmu_fault() methods. Instead, disable gr fifo access at the
  start of recovery and re-enable it at the end of the recovery process.
  Also, delete the scheduled work to re-create the runlist.
  Re-enable ELPG and fifo access in finish_mmu_fault_handling() itself.

  Bug 1470692

  Change-Id: I705a6a5236734c7207a01d9a9fa9eca22bdbe7eb
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/449225
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: rework TSG's channel list (Deepak Nibade, 2015-03-18)
  Change the TSG's channel list to "ch_list", holding all channels,
  instead of "ch_runnable_list", which held only runnable channels.
  We can traverse this list and check the runnable status of each
  channel in active_channels to get the runnable channels.

  Remove the below APIs as they are no longer required:
  gk20a_bind_runnable_channel_to_tsg()
  gk20a_unbind_channel_from_tsg()

  While closing the channel, call gk20a_tsg_unbind_channel() to unbind
  the channel from the TSG.

  Bug 1470692

  Change-Id: I0178fa74b3e8bb4e5c0b3e3b2b2f031491761ba7
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/449227
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Add sw shadow for load value (Arto Merilainen, 2015-03-18)
  Reading the load value may increase CPU power consumption temporarily.
  In most cases we are ok with a value that was read a moment earlier.

  This patch introduces a software shadow for gpu load. The shadow is
  updated before starting scaling and all scaling code paths use the sw
  shadow.

  Change-Id: I53d2ccb8e7f83147f411a14d3104d890dd9af9a3
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/453347
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

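A minimal sketch of the shadow pattern, with hypothetical names standing in for the driver's scaling state: one expensive read per scaling cycle, cheap cached reads everywhere else.

```c
#include <linux/mutex.h>
#include <linux/types.h>

/* Hypothetical per-device scaling state; names are illustrative only. */
struct scale_state {
	struct mutex lock;
	u32 load_shadow;	/* last value read from the busy counter */
};

/* Stand-in for the expensive hardware read the commit wants to avoid. */
static u32 read_load_counter_hw(void)
{
	return 0;
}

/* Refresh the shadow once, at the start of a scaling cycle. */
static void scale_update_load_shadow(struct scale_state *s)
{
	mutex_lock(&s->lock);
	s->load_shadow = read_load_counter_hw();
	mutex_unlock(&s->lock);
}

/* Every other scaling code path uses the cheap cached value. */
static u32 scale_get_load(struct scale_state *s)
{
	return s->load_shadow;
}
```
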
* gpu: nvgpu: Add compression state IOCTLs (Lauri Peltonen, 2015-03-18)
  Bug 1409151

  Change-Id: I29a325d7c2b481764fc82d945795d50bcb841961
  Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>

* gpu: nvgpu: Add support for FECS errors (Terje Bergstrom, 2015-03-18)
  Add retrieval of the error code for FECS errors.

  Change-Id: I7d9dfc4723376272edb2e5b2ef06f71de1a06889
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/450351
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Chris Dragan <kdragan@nvidia.com>
  Tested-by: Chris Dragan <kdragan@nvidia.com>

* nvgpu: Added PROD settings for ELPG sequencing (Mahantesh Kumbar, 2015-03-18)
  Added PROD settings for ELPG sequencing registers.

  Bug 200023161

  Change-Id: Id313f9bc800d3a57f45aff0f0f609887565971be
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>

* arm: tegra: Register tegra-throttle cdev as driver (Arun Kumar Swain, 2015-03-18)
  1. Register the tegra-throttle cooling device as a platform driver.
  2. Obtain all the platform data (throttle table info) for all
     instances of the balanced-throttled cdev from the device tree and
     register them.

  Change-Id: Ie92685eea3eb5cb18068b195adc9ab5f83762399
  Signed-off-by: Arun Kumar Swain <arswain@nvidia.com>
  Reviewed-on: http://git-master/r/449104
  Reviewed-by: Diwakar Tundlam <dtundlam@nvidia.com>
  Tested-by: Diwakar Tundlam <dtundlam@nvidia.com>

* gpu: nvgpu: fix compbit_store page allocation (Edgardo Handal, 2015-03-18)
  Allocate enough pages in the case that compbit_backing_size is not a
  power of two.

  Change-Id: Iaa2da66a3d1bd86ac746ed619a7f37e9379904db
  Signed-off-by: Edgardo Handal <ehandal@nvidia.com>
  Reviewed-on: http://git-master/r/449460
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

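The usual shape of such a fix is to compute the page count with a round-up division rather than a plain shift, so a size that is not a page multiple still gets its partial last page. A sketch under assumed names, not the commit's exact code:

```c
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/types.h>

/* DIV_ROUND_UP guards against sizes that are not a multiple of PAGE_SIZE;
 * a plain right shift would silently drop the partial last page. */
static unsigned long compbit_backing_pages(size_t compbit_backing_size)
{
	return DIV_ROUND_UP(compbit_backing_size, PAGE_SIZE);
}
```
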
* gpu: nvgpu: sysfs node to enable/disable aelpg (Mahantesh Kumbar, 2015-03-18)
  Added "aelpg_enable" sysfs node to enable/disable aelpg.

  Bug 1464737

  Change-Id: Ia0eadbea59e2f9373ab5f413fa6e28780aff3c3c
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>

* gpu: nvgpu: return error from mutex_acquire() (Deepak Nibade, 2015-03-18)
  Return error from pmu_mutex_acquire() and release() if
  pmu->initialized is not set.

  Bug 1533644

  Change-Id: I341a5831bc5beeccb4587668f61c954ce7576226
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>

* gpu: nvgpu: fix error handling for mutex_acquire() (Deepak Nibade, 2015-03-18)
  Currently, if pmu_mutex_acquire() fails, we disable ELPG and move
  ahead. But it is not clear why it is required to disable ELPG in the
  case where we fail to acquire the mutex.

  Hence, skip disabling ELPG if mutex_acquire() fails.

  Bug 1533644

  Change-Id: I7e8e99a701d0ba071eb31ac17582b04072ee55eb
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/448131
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Fix compbit base calculation (Arto Merilainen, 2015-03-18)
  The compression bit base was calculated incorrectly in cases where the
  number of LTCs was not 1. This patch fixes the code.

  Change-Id: I25e3fa7446b238202d93ce8a72ed919d11fb6e30
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/449281
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  Tested-by: Jussi Rasanen <jrasanen@nvidia.com>
  GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: Use noinline_for_stack to avoid GCOV build break (Tuomas Tynkkynen, 2015-03-18)
  If code coverage is enabled on GCC 4.7, the kernel build fails in
  gk20a_init_kind_attr(), since GCC decides to inline almost everything
  in this file into it, leading to a massive stack frame with over a
  kilobyte's worth of temporary variables generated by gcov, leading to
  this error:

  kind_gk20a.c: In function 'gk20a_init_kind_attr':
  kind_gk20a.c:424:1: error: the frame size of 1232 bytes is larger than 1024 bytes [-Werror=frame-larger-than=]

  (Just removing the inline keyword doesn't work, as GCC still decides
  to inline it, so noinline_for_stack is actually required.)

  Change-Id: I819fd2a5b20581f0ac60e1ee490899c977379151
  Signed-off-by: Tuomas Tynkkynen <ttynkkynen@nvidia.com>
  Reviewed-on: http://git-master/r/448914
  Reviewed-by: Juha Tukkinen <jtukkinen@nvidia.com>
  Tested-by: Juha Tukkinen <jtukkinen@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

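noinline_for_stack is a stock kernel annotation (an alias for noinline, named to document that the point is stack usage). A self-contained usage sketch with made-up locals standing in for the gcov-inflated temporaries:

```c
#include <linux/compiler.h>
#include <linux/string.h>

struct big_attr_tmp {
	char buf[512];		/* stands in for the gcov-inflated locals */
};

/* noinline_for_stack keeps these temporaries in their own stack frame
 * instead of letting GCC fold them into the caller's frame. */
static noinline_for_stack void init_one_kind_attr(int idx)
{
	struct big_attr_tmp tmp;

	memset(&tmp, 0, sizeof(tmp));
	/* ... fill in one attribute using tmp ... */
}

static void init_kind_attrs_sketch(void)
{
	int i;

	/* Without the annotation, inlining every call would stack all the
	 * temporaries into this one frame. */
	for (i = 0; i < 4; i++)
		init_one_kind_attr(i);
}
```
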
* gpu: nvgpu: CDE support (Arto Merilainen, 2015-03-18)
  This patch adds support for executing a precompiled GPU program to
  allow exporting GPU buffers to other graphics units that have color
  decompression engine (CDE) support.

  Bug 1409151

  Change-Id: Id0c930923f2449b85a6555de71d7ec93eed238ae
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/360418
  Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Attach compression state to dma-buf (Lauri Peltonen, 2015-03-18)
  Bug 1509620

  Change-Id: I694fe43ef5d1f4f329d997a3d60e006785374cc3
  Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
  Reviewed-on: http://git-master/r/439849
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Add gk20a_fence type (Lauri Peltonen, 2015-03-18)
  When moving compression state tracking and compbit management ops to
  the kernel, we need to attach a fence to dma-buf metadata, along with
  the compbit state.

  To make in-kernel fence management easier, introduce a new gk20a_fence
  abstraction. A gk20a_fence may be backed by a semaphore or a syncpoint
  (id, value) pair. If the kernel is configured with CONFIG_SYNC, it
  will also contain a sync_fence. The gk20a_fence can easily be
  converted back to a syncpoint (id, value) pair or sync FD when we need
  to return it to user space.

  Change gk20a_submit_channel_gpfifo to return a gk20a_fence instead of
  nvhost_fence. This is to facilitate work submission initiated from the
  kernel.

  Bug 1509620

  Change-Id: I6154764a279dba83f5e91ba9e0cb5e227ca08e1b
  Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
  Reviewed-on: http://git-master/r/439846
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

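A rough sketch of the abstraction as described above; the struct name, field names, and layout are illustrative, not the actual gk20a_fence definition.

```c
#include <linux/kref.h>
#include <linux/types.h>

struct gk20a_semaphore;		/* opaque; assumed */
struct sync_fence;		/* from CONFIG_SYNC; assumed */

/* A fence backed either by a semaphore or by a syncpoint (id, value) pair,
 * optionally carrying a sync_fence when CONFIG_SYNC is enabled. */
struct gk20a_fence_sketch {
	struct kref ref;			/* shared ownership in-kernel */

	struct gk20a_semaphore *semaphore;	/* semaphore-backed fences */

	u32 syncpt_id;				/* syncpoint-backed fences */
	u32 syncpt_value;

	struct sync_fence *sync_fence;		/* only with CONFIG_SYNC */
};
```
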
* gpu: nvgpu: Remove unused code in allocator (Terje Bergstrom, 2015-03-18)
  Remove functions that are not used in gk20a allocator.

  Bug 1523403

  Change-Id: I36b2b236258d61602cb3283b59c43b40f237d514
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/432174

* gpu: nvgpu: gk20a: add address check in allocator (Kevin Huang, 2015-03-18)
  Check the address range before allocation to avoid illegal address
  range.

  Bug 1523403

  Change-Id: Iff171399a980b69f9b1a18eea5bc37eff4c5d749
  Signed-off-by: Kevin Huang <kevinh@nvidia.com>
  Reviewed-on: http://git-master/r/437871
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: gm20b: Store LTC configuration (Arto Merilainen, 2015-03-18)
  Change-Id: Ia780e6a7cb3579f0d6ed2dca9949a349799535fd
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/448115
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: skip WFI for KEPLER_C channels (Deepak Nibade, 2015-03-18)
  In channel_finish(), we submit a WFI for all channels, including
  channels with the KEPLER_C class. Since there is no need to submit a
  WFI for channels with the KEPLER_C class, we can optimize by skipping
  the WFI submission and directly waiting on the last submit fence.

  Bug 1534272

  Change-Id: I3838416cf22122728e7f1008e01d77b14a35deba
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>

* gpu: nvgpu: poweron host1x explicitly (Deepak Nibade, 2015-03-18)
  Currently gk20a gets a reference to host1x via a phandle in the Device
  Tree. But runtime PM does not seem to handle power dependencies well
  in this case, and hence host1x is sometimes off when we need it.

  To fix this, explicitly power on host1x while powering the gpu up.
  Do this via "busy" and "idle" callbacks from gk20a_platform.

  Bug 1534272
  Bug 200022536

  Change-Id: Ia562ee19722cfc8edc5626a5a058ab8edfe3d206
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>

* nvgpu: new gpmu ucode compatibility (Supriya, 2015-03-18)
  For LS PMU, the new ucode needs to be used. The ucode has interface
  header file changes too. This patch also has fixes for a pmu dmem copy
  failure.

  Bug 1509680

  Change-Id: I8c7018f889a82104dea590751e650e53e5524a54
  Signed-off-by: Supriya <ssharatkumar@nvidia.com>
  Reviewed-on: http://git-master/r/441734
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Use GPU device name in clock get operation (Alex Frid, 2015-03-18)
  Used the GPU device name in the clock get operation (instead of a
  fixed name) to make the operation common for GK20A and GM20B. Updated
  clock ids in the tegra clock framework accordingly.

  Bug 1450787

  Change-Id: Ifd5b9c3a6fd8db5b06e6dcd989285e8410794803
  Signed-off-by: Alex Frid <afrid@nvidia.com>
  Reviewed-on: http://git-master/r/441711
  Reviewed-by: Bo Yan <byan@nvidia.com>
  Tested-by: Bo Yan <byan@nvidia.com>

* gpu: nvgpu: Make clock operations static (Alex Frid, 2015-03-18)
  Made GK20A and GM20B clock operations static, since they are invoked
  only via HAL interfaces.

  Bug 1450787

  Change-Id: Ia30218ad4244bd8790b5ef96d1963678d0ba39e1
  Signed-off-by: Alex Frid <afrid@nvidia.com>
  Reviewed-on: http://git-master/r/441710
  Reviewed-by: Bo Yan <byan@nvidia.com>
  Tested-by: Bo Yan <byan@nvidia.com>

* Revert "gpu: nvgpu: return error from mutex_acquire() if pmu not initialized"Deepak Nibade2015-03-18
| | | | | | | | | | | | | | | | | | | | | | | This reverts commit 50497d4031103df1067f14ce4c1e14b15713efb9. Simply returning error from mutex_acquire() causes the code to call disable_elpg() which decreases elpg refcount But we already have a race condition between pmu initialization where we initialize elpg and runlist update where we call this mutex_acquire and decrease the refcount As a result of this race and returned error we might mess up with the elpg refcount and cause abnormal behaviour Hence revert this change for now until we have clean fix considering this race as well Bug 200024116 Change-Id: Ie64ca36f70aba6b15c2acc235a5d36d13c9025aa Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: http://git-master/r/441793 Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
* gpu: nvgpu: Fork GM20B clock from GK20A clock (Hoang Pham, 2015-03-18)
  Bug 1450787

  Change-Id: Id7fb699d9129a272286d6bc93e0e95844440a628
  Signed-off-by: Hoang Pham <hopham@nvidia.com>
  Reviewed-on: http://git-master/r/440536
  Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>

* gpu: nvgpu: Init clock debugfs after clock support (Alex Frid, 2015-03-18)
  Initialized GK20A clock debugfs after clock support hardware and
  software are ready.

  Bug 1450787

  Change-Id: I8ec2ef303a84b9151b7ce209a1864f1729382a44
  Signed-off-by: Alex Frid <afrid@nvidia.com>
  Reviewed-on: http://git-master/r/440973
  Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>

* gpu: nvgpu: flush write before unlocking (Sang-Hun Lee, 2015-03-18)
  - gk20a_enable reads the clock back after unlocking the spinlock to
    flush any previous write
  - This could lead to a race if any later write assumes the earlier
    write has already been completed
  - Read the clock back before unlocking to ensure all previous writes
    have completed before letting any other thread use gk20a

  Bug 200007520

  Change-Id: I737fbbe825c68b25ca256c4a8ee2b99aa8baf0f5
  Signed-off-by: Sang-Hun Lee <sanlee@nvidia.com>
  Reviewed-on: http://git-master/r/418485
  (cherry picked from commit 2aed542a719caa69620766bf2dceefe50626c189)
  Reviewed-on: http://git-master/r/437842
  Reviewed-by: Mitch Luban <mluban@nvidia.com>
  Tested-by: Mitch Luban <mluban@nvidia.com>

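The pattern the fix describes, reading the register back while still holding the lock so the posted write is flushed before anyone else can run, looks roughly like this; the function and register names are illustrative.

```c
#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Posted MMIO writes are only guaranteed to have reached the device after a
 * read from the same device; doing that read inside the critical section
 * closes the window described in the commit message. */
static void clk_enable_flush(spinlock_t *lock, void __iomem *clk_reg, u32 val)
{
	unsigned long flags;

	spin_lock_irqsave(lock, flags);
	writel(val, clk_reg);
	(void)readl(clk_reg);	/* flush the write before unlocking */
	spin_unlock_irqrestore(lock, flags);
}
```
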
* gpu: nvgpu: Double syncpoint increments (Arto Merilainen, 2015-03-18)
  gm20b/gm20x requires incrementing syncpoints twice to ensure that the
  data has reached memory in all cases. This patch modifies the
  increment push buffer to account for this requirement.

  Bug 1491360

  Change-Id: I5c2899b26ce0e1cdf9408bb9aaa576fc3054480f
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/437675
  Reviewed-by: Automatic_Commit_Validation_User

* gpu: nvgpu: Add helpers for backing store access (Arto Merilainen, 2015-03-18)
  This patch adds mm helpers to access compression backing store from
  in-kernel shader.

  Bug 1409151

  Change-Id: Icb4f6dc0b5a35fdb97bc4221ab3657866f775fae
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/440263
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com>
  GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: Allow reloading the golden context (Arto Merilainen, 2015-03-18)
  In cases where a kernel channel dies, we can reload the context by
  just reloading the golden context buffer. This patch makes necessary
  infrastructural changes to support this behaviour.

  Bug 1409151

  Change-Id: Ibe6a88bf7acea2d3aced2b86a7a687279075c386
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/440262
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com>
  GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: gk20a: Allow in-kernel channel alloc (Arto Merilainen, 2015-03-18)
  This patch modifies channel interfaces to allow allocating the channel
  for kernel use. This is needed if we want to run a shader from kernel
  space.

  Bug 1409151

  Change-Id: I3544186bb1541120f85e01a19de106ef011c1b11
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/440261
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com>
  GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: return error from mutex_acquire() if pmu not initialized (Deepak Nibade, 2015-03-18)
  In pmu_mutex_acquire(), we return zero (success) if pmu->initialized
  is not set. Since mutex_acquire() was successful, we then call
  pmu_mutex_release(). If pmu->initialized now gets set in some other
  thread, we proceed to validate the mutex owner and end up causing the
  warning below:

  pmu_mutex_release: requester 0x00000000 NOT match owner 0x00000008

  Hence, to fix this, return an error from mutex_acquire() and
  mutex_release() if pmu->initialized is not yet set; in that case we
  proceed to call elpg enable/disable.

  Bug 1533644

  Change-Id: Ifbb9e6a8e13f6478a13e3f9d98ced11792cc881f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/439333
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Naveen Kumar S <nkumars@nvidia.com>
  Tested-by: Naveen Kumar S <nkumars@nvidia.com>
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>

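A sketch of the symmetric early-return the commit describes; the struct and field names are assumed. Failing loudly on both acquire and release means the caller never releases a mutex it was never granted, which is what triggered the "requester ... NOT match owner" warning.

```c
#include <linux/errno.h>
#include <linux/types.h>

/* Hypothetical PMU state; the real structure carries much more. */
struct pmu_state {
	bool initialized;
};

static int pmu_mutex_acquire_sketch(struct pmu_state *pmu, u32 id, u32 *token)
{
	if (!pmu->initialized)
		return -EINVAL;	/* caller falls back to elpg enable/disable */
	/* ... program the HW mutex and fill *token on success ... */
	return 0;
}

static int pmu_mutex_release_sketch(struct pmu_state *pmu, u32 id, u32 *token)
{
	if (!pmu->initialized)
		return -EINVAL;	/* mirror acquire: nothing to release */
	/* ... verify *token against the HW owner, then release ... */
	return 0;
}
```
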
* gpu: nvgpu: Remove unused GK20A cooling device (Alex Frid, 2015-03-18)
  Removed unused, obsolete GK20A cooling device.

  Bug 1450787

  Change-Id: I5b02546d0405dd518ec841d903e650a8d38db8f2
  Signed-off-by: Alex Frid <afrid@nvidia.com>
  Reviewed-on: http://git-master/r/437942
  Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>

* gpu: Split clk_ops for GK20A and GM20B (Hoang Pham, 2015-03-18)
  Split clk_ops for GK20A and GM20B into different files.

  Bug 1450787

  Change-Id: I34d16c54ac40c70854e80588475434c9e50b51a5
  Signed-off-by: Hoang Pham <hopham@nvidia.com>
  Reviewed-on: http://git-master/r/437771
  Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>

* gpu: nvgpu: Use 1kHz resolution for GPCPLL programming (Alex Frid, 2015-03-18)
  Used 1kHz resolution (instead of 1 MHz) for GPCPLL programming: limits
  specifications, calculating GPCPLL settings, storing target frequency
  values, and providing output from the debug monitor. Updated comments
  in the clock header to properly reflect the frequency units.

  Bug 1450787

  Change-Id: Ica58f794b82522288f2883c40626d82dbd794902
  Signed-off-by: Alex Frid <afrid@nvidia.com>
  Reviewed-on: http://git-master/r/437943
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>

* gpu: nvgpu: force idle if railgate not supported (Deepak Nibade, 2015-03-18)
  Add a way to force idle and reset the GPU in the case where GPU rail
  gating is not supported (i.e. platform->can_railgate = false).

  In this case, we follow the sequence below:
  - once the GPU is idle, get a runtime reference, which enables the
    clocks
  - call prepare_poweroff() to save the state explicitly
  - perform explicit reset assert/deassert
  - call finalize_poweron() to restore the state
  - drop the runtime reference taken earlier

  Bug 1525284

  Change-Id: Id5f3ec152093acd585631dfbf785d8e0561f9048
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/435620
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
  Tested-by: Arto Merilainen <amerilainen@nvidia.com>

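A sketch of that sequence for the !can_railgate case, using hypothetical helper stubs in place of the driver's own save/restore and reset routines; only the runtime-PM calls are real kernel APIs.

```c
#include <linux/pm_runtime.h>
#include <linux/delay.h>
#include <linux/device.h>

/* Hypothetical stand-ins for the driver's save/restore and reset helpers. */
static int gpu_prepare_poweroff(void)	{ return 0; }
static int gpu_finalize_poweron(void)	{ return 0; }
static void gpu_reset_assert(void)	{ }
static void gpu_reset_deassert(void)	{ }

/* Hold a runtime-PM reference so clocks stay on, save state, pulse reset,
 * restore state, then drop the reference, mirroring the list above. */
static int gpu_force_reset(struct device *dev)
{
	int err;

	pm_runtime_get_sync(dev);

	err = gpu_prepare_poweroff();
	if (err)
		goto out;

	gpu_reset_assert();
	udelay(10);
	gpu_reset_deassert();

	err = gpu_finalize_poweron();
out:
	pm_runtime_put(dev);
	return err;
}
```
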
* gpu: nvgpu: increase delays in do_idle() (Deepak Nibade, 2015-03-18)
  Increase the wait delays in do_idle() to 2000 ms and use msleep
  instead of mdelay.

  Also, to check whether the GPU is rail gated or not, add a do-while()
  loop which keeps checking the status and bails out as soon as the GPU
  is rail gated.

  This increase in delays is required to allow the GPU sufficient time
  to complete its work and get rail gated. These delays are especially
  needed during stress testing, where it is possible that a large amount
  of GPU work is blocked during do_idle() and it might then take more
  time to complete while the next do_idle() is waiting for it.

  Also, remove waiting on the API gk20a_wait_channel_idle() for each
  channel, since it is sufficient to wait for the refcount to be 1.

  Bug 1529160

  Change-Id: Ie541485fbdda76d79ae4a75dda928da240fc5d8f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/434192
  (cherry picked from commit 5a621bf2aaf3355e1330a662dc98e943d68ef86d)
  Reviewed-on: http://git-master/r/435133
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>

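The wait described above is essentially a sleep-and-poll loop with a deadline. A sketch with an assumed status query; for the 2000 ms wait in the commit the call would be wait_for_railgate(2000).

```c
#include <linux/delay.h>
#include <linux/jiffies.h>
#include <linux/types.h>

/* Hypothetical status query; the real check reads the power-gate status. */
static bool gpu_is_railgated(void)
{
	return false;
}

/* Sleep in small steps and bail out as soon as the GPU reports that it is
 * rail gated, instead of one long busy mdelay(). */
static bool wait_for_railgate(unsigned int timeout_ms)
{
	unsigned long end = jiffies + msecs_to_jiffies(timeout_ms);

	do {
		if (gpu_is_railgated())
			return true;
		msleep(10);
	} while (time_before(jiffies, end));

	return gpu_is_railgated();	/* one final check after the deadline */
}
```
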
* gpu: nvgpu: remove redundant busy()/idle() calls (Deepak Nibade, 2015-03-18)
  The gk20a_busy() call in channel_syncpt_incr() and the corresponding
  gk20a_idle() call in channel_update() are redundant, since they are
  already encapsulated inside another pair of busy/idle calls. This
  busy/idle pair is called only from submit_gpfifo(), and
  submit_gpfifo() already has its own busy/idle which it holds for the
  whole path, hence the redundant pair can be removed.

  Also, this prevents a deadlock scenario while do_idle() is in
  progress, as follows:
  - in submit_gpfifo() we call the first gk20a_busy(), which acquires
    the busy read semaphore
  - in do_idle() we acquire the busy write semaphore and wait for
    current jobs to finish
  - now submit_gpfifo() encounters the second gk20a_busy() and requests
    the busy read semaphore again
  - this results in a deadlock where do_idle() is waiting for
    submit_gpfifo() to complete and submit_gpfifo() is waiting for the
    busy lock held by do_idle(), and hence it cannot complete

  Bug 1529160

  Change-Id: I96e4368352f693e93524f0f61689b4447e5331ea
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/434191
  (cherry picked from commit c4315c6caa42bab72ba6017c7ded25f4e9363dec)
  Reviewed-on: http://git-master/r/435132
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>
  Tested-by: Sachin Nikam <snikam@nvidia.com>

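The deadlock is the classic recursive-read hazard on an rw-semaphore: a second down_read() on the same lock by the same thread can block behind a writer that started waiting in between, because new readers queue behind a pending writer. A sketch with an illustrative lock name, not the driver's actual symbols:

```c
#include <linux/rwsem.h>

static DECLARE_RWSEM(busy_lock);

/* The broken shape: nested read acquisition. If do_idle() calls
 * down_write(&busy_lock) on another CPU between the two down_read()
 * calls, the inner down_read() blocks behind the writer and the writer
 * blocks on the outer read hold: neither side can make progress. */
static void submit_path_broken(void)
{
	down_read(&busy_lock);		/* outer gk20a_busy() */
	down_read(&busy_lock);		/* inner gk20a_busy(): may deadlock */
	up_read(&busy_lock);
	up_read(&busy_lock);
}

/* The fix keeps a single read-side critical section per submit. */
static void submit_path_fixed(void)
{
	down_read(&busy_lock);		/* one gk20a_busy() for the whole path */
	/* ... submit work, update channel ... */
	up_read(&busy_lock);
}
```
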