path: root/drivers/gpu/nvgpu/gk20a
* gpu: nvgpu: Attach compression state to dma-buf (Lauri Peltonen, 2015-03-18)
  Bug 1509620
  Change-Id: I694fe43ef5d1f4f329d997a3d60e006785374cc3
  Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
  Reviewed-on: http://git-master/r/439849
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Add gk20a_fence type (Lauri Peltonen, 2015-03-18)
  When moving compression state tracking and compbit management ops to the
  kernel, we need to attach a fence to the dma-buf metadata, along with the
  compbit state. To make in-kernel fence management easier, introduce a new
  gk20a_fence abstraction. A gk20a_fence may be backed by a semaphore or a
  syncpoint (id, value) pair. If the kernel is configured with CONFIG_SYNC, it
  will also contain a sync_fence. The gk20a_fence can easily be converted back
  to a syncpoint (id, value) pair or a sync FD when we need to return it to
  user space.
  Change gk20a_submit_channel_gpfifo to return a gk20a_fence instead of
  nvhost_fence. This facilitates work submission initiated from the kernel.
  Bug 1509620
  Change-Id: I6154764a279dba83f5e91ba9e0cb5e227ca08e1b
  Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
  Reviewed-on: http://git-master/r/439846
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
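  A minimal sketch of what such a dual-backed fence object could look like; the
  field names, the gk20a_semaphore type and the allocator here are assumptions
  for illustration, not the driver's actual definitions:

    #include <linux/kref.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    /* Illustrative only: a fence backed by either a semaphore or a
     * syncpoint (id, value) pair, as the commit describes. */
    struct gk20a_fence_sketch {
            struct kref ref;

            /* Backed by a semaphore ... */
            struct gk20a_semaphore *semaphore;      /* hypothetical type */

            /* ... or by a syncpoint (id, value) pair. */
            u32 syncpt_id;
            u32 syncpt_value;

    #ifdef CONFIG_SYNC
            /* Optional sync framework wrapper, convertible to a sync FD. */
            struct sync_fence *sync_fence;
    #endif
    };

    static struct gk20a_fence_sketch *gk20a_fence_sketch_alloc(void)
    {
            return kzalloc(sizeof(struct gk20a_fence_sketch), GFP_KERNEL);
    }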
* gpu: nvgpu: Remove unused code in allocator (Terje Bergstrom, 2015-03-18)
  Remove functions that are not used in the gk20a allocator.
  Bug 1523403
  Change-Id: I36b2b236258d61602cb3283b59c43b40f237d514
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/432174
* gpu: nvgpu: gk20a: add address check in allocator (Kevin Huang, 2015-03-18)
  Check the address range before allocation to avoid handing out an illegal
  address range.
  Bug 1523403
  Change-Id: Iff171399a980b69f9b1a18eea5bc37eff4c5d749
  Signed-off-by: Kevin Huang <kevinh@nvidia.com>
  Reviewed-on: http://git-master/r/437871
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: gm20b: Store LTC configuration (Arto Merilainen, 2015-03-18)
  Change-Id: Ia780e6a7cb3579f0d6ed2dca9949a349799535fd
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/448115
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: skip WFI for KEPLER_C channels (Deepak Nibade, 2015-03-18)
  In channel_finish(), we submit a WFI for all channels, including channels
  with the KEPLER_C class. Since there is no need to submit a WFI for KEPLER_C
  channels, we can skip the WFI submission and directly wait on the last
  submit fence.
  Bug 1534272
  Change-Id: I3838416cf22122728e7f1008e01d77b14a35deba
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
* gpu: nvgpu: poweron host1x explicitly (Deepak Nibade, 2015-03-18)
  Currently gk20a gets a reference to host1x via a phandle in the Device Tree.
  But runtime PM does not seem to handle power dependencies well in this case,
  and hence host1x is sometimes off when we need it.
  To fix this, explicitly power on host1x while powering the GPU up. Do this
  via the "busy" and "idle" callbacks from gk20a_platform.
  Bug 1534272
  Bug 200022536
  Change-Id: Ia562ee19722cfc8edc5626a5a058ab8edfe3d206
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
* nvgpu: new gpmu ucode compatibility (Supriya, 2015-03-18)
  The LS PMU needs to use a new ucode, which also brings interface header file
  changes. This patch also fixes a PMU DMEM copy failure.
  Bug 1509680
  Change-Id: I8c7018f889a82104dea590751e650e53e5524a54
  Signed-off-by: Supriya <ssharatkumar@nvidia.com>
  Reviewed-on: http://git-master/r/441734
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Use GPU device name in clock get operation (Alex Frid, 2015-03-18)
  Used the GPU device name in the clock get operation (instead of a fixed
  name) to make the operation common for GK20A and GM20B. Updated the clock
  ids in the Tegra clock framework accordingly.
  Bug 1450787
  Change-Id: Ifd5b9c3a6fd8db5b06e6dcd989285e8410794803
  Signed-off-by: Alex Frid <afrid@nvidia.com>
  Reviewed-on: http://git-master/r/441711
  Reviewed-by: Bo Yan <byan@nvidia.com>
  Tested-by: Bo Yan <byan@nvidia.com>
* gpu: nvgpu: Make clock operations static (Alex Frid, 2015-03-18)
  Made GK20A and GM20B clock operations static, since they are invoked only
  via HAL interfaces.
  Bug 1450787
  Change-Id: Ia30218ad4244bd8790b5ef96d1963678d0ba39e1
  Signed-off-by: Alex Frid <afrid@nvidia.com>
  Reviewed-on: http://git-master/r/441710
  Reviewed-by: Bo Yan <byan@nvidia.com>
  Tested-by: Bo Yan <byan@nvidia.com>
* Revert "gpu: nvgpu: return error from mutex_acquire() if pmu not initialized"Deepak Nibade2015-03-18
| | | | | | | | | | | | | | | | | | | | | | | This reverts commit 50497d4031103df1067f14ce4c1e14b15713efb9. Simply returning error from mutex_acquire() causes the code to call disable_elpg() which decreases elpg refcount But we already have a race condition between pmu initialization where we initialize elpg and runlist update where we call this mutex_acquire and decrease the refcount As a result of this race and returned error we might mess up with the elpg refcount and cause abnormal behaviour Hence revert this change for now until we have clean fix considering this race as well Bug 200024116 Change-Id: Ie64ca36f70aba6b15c2acc235a5d36d13c9025aa Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: http://git-master/r/441793 Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
* gpu: nvgpu: Fork GM20B clock from GK20A clock (Hoang Pham, 2015-03-18)
  Bug 1450787
  Change-Id: Id7fb699d9129a272286d6bc93e0e95844440a628
  Signed-off-by: Hoang Pham <hopham@nvidia.com>
  Reviewed-on: http://git-master/r/440536
  Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
* gpu: nvgpu: Init clock debugfs after clock support (Alex Frid, 2015-03-18)
  Initialized GK20A clock debugfs after clock support hardware and software
  are ready.
  Bug 1450787
  Change-Id: I8ec2ef303a84b9151b7ce209a1864f1729382a44
  Signed-off-by: Alex Frid <afrid@nvidia.com>
  Reviewed-on: http://git-master/r/440973
  Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
* gpu: nvgpu: flush write before unlocking (Sang-Hun Lee, 2015-03-18)
  - gk20a_enable reads the clock after unlocking the spinlock in order to
    flush any previous write
  - this could lead to a race if a later write assumes the earlier write has
    already completed
  - read the clock before unlocking to ensure all previous writes have
    completed before letting any other thread use gk20a
  Bug 200007520
  Change-Id: I737fbbe825c68b25ca256c4a8ee2b99aa8baf0f5
  Signed-off-by: Sang-Hun Lee <sanlee@nvidia.com>
  Reviewed-on: http://git-master/r/418485
  (cherry picked from commit 2aed542a719caa69620766bf2dceefe50626c189)
  Reviewed-on: http://git-master/r/437842
  Reviewed-by: Mitch Luban <mluban@nvidia.com>
  Tested-by: Mitch Luban <mluban@nvidia.com>
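  A minimal sketch of the posted-write flush pattern described above: read the
  register back before dropping the lock. The device context struct and the
  register offset are hypothetical, not the driver's actual definitions:

    #include <linux/io.h>
    #include <linux/spinlock.h>
    #include <linux/types.h>

    /* Hypothetical device context for illustration. */
    struct dev_ctx_sketch {
            spinlock_t lock;
            void __iomem *regs;
    };

    #define CTRL_REG_OFFSET 0x0     /* placeholder offset */

    static void enable_device_sketch(struct dev_ctx_sketch *ctx, u32 val)
    {
            spin_lock(&ctx->lock);
            writel(val, ctx->regs + CTRL_REG_OFFSET);
            /*
             * Read the register back *before* unlocking so the posted write
             * is known to have reached the device before any other thread
             * can assume the hardware is enabled.
             */
            (void)readl(ctx->regs + CTRL_REG_OFFSET);
            spin_unlock(&ctx->lock);
    }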
* gpu: nvgpu: Double syncpoint increments (Arto Merilainen, 2015-03-18)
  gm20b/gm20x requires incrementing syncpoints twice to ensure that the data
  has reached memory in all cases. This patch modifies the increment push
  buffer to account for this requirement.
  Bug 1491360
  Change-Id: I5c2899b26ce0e1cdf9408bb9aaa576fc3054480f
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/437675
  Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: Add helpers for backing store access (Arto Merilainen, 2015-03-18)
  This patch adds mm helpers to access the compression backing store from an
  in-kernel shader.
  Bug 1409151
  Change-Id: Icb4f6dc0b5a35fdb97bc4221ab3657866f775fae
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/440263
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com>
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: Allow reloading the golden context (Arto Merilainen, 2015-03-18)
  In cases where a kernel channel dies, we can reload the context by simply
  reloading the golden context buffer. This patch makes the necessary
  infrastructural changes to support this behaviour.
  Bug 1409151
  Change-Id: Ibe6a88bf7acea2d3aced2b86a7a687279075c386
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/440262
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com>
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: gk20a: Allow in-kernel channel alloc (Arto Merilainen, 2015-03-18)
  This patch modifies channel interfaces to allow allocating the channel for
  kernel use. This is needed if we want to run a shader from kernel space.
  Bug 1409151
  Change-Id: I3544186bb1541120f85e01a19de106ef011c1b11
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/440261
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com>
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: return error from mutex_acquire() if pmu not initialized (Deepak Nibade, 2015-03-18)
  In pmu_mutex_acquire(), we return zero (success) if pmu->initialized is not
  set. Since mutex_acquire() was successful, we then call pmu_mutex_release().
  If pmu->initialized is now set by some other thread, we proceed to validate
  the mutex owner and end up causing the warning below:
  pmu_mutex_release: requester 0x00000000 NOT match owner 0x00000008
  Hence, to fix this, return an error from mutex_acquire() and mutex_release()
  if pmu->initialized is not yet set, and in that case proceed to call ELPG
  enable/disable directly.
  Bug 1533644
  Change-Id: Ifbb9e6a8e13f6478a13e3f9d98ced11792cc881f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/439333
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Naveen Kumar S <nkumars@nvidia.com>
  Tested-by: Naveen Kumar S <nkumars@nvidia.com>
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
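  A sketch of the check this commit adds; the struct layout and the return
  convention are assumptions for illustration only:

    #include <linux/errno.h>
    #include <linux/types.h>

    struct pmu_sketch {
            bool initialized;
            /* ... rest of the PMU state ... */
    };

    static int pmu_mutex_acquire_sketch(struct pmu_sketch *pmu, u32 id,
                                        u32 *token)
    {
            if (!pmu->initialized)
                    return -EINVAL; /* caller falls back to direct ELPG enable/disable */

            /* ... normal hardware mutex acquire path ... */
            return 0;
    }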
* gpu: nvgpu: Remove unused GK20A cooling device (Alex Frid, 2015-03-18)
  Removed unused, obsolete GK20A cooling device.
  Bug 1450787
  Change-Id: I5b02546d0405dd518ec841d903e650a8d38db8f2
  Signed-off-by: Alex Frid <afrid@nvidia.com>
  Reviewed-on: http://git-master/r/437942
  Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
* gpu: Split clk_ops for GK20A and GM20B (Hoang Pham, 2015-03-18)
  Split clk_ops for GK20A and GM20B into different files.
  Bug 1450787
  Change-Id: I34d16c54ac40c70854e80588475434c9e50b51a5
  Signed-off-by: Hoang Pham <hopham@nvidia.com>
  Reviewed-on: http://git-master/r/437771
  Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
* gpu: nvgpu: Use 1 kHz resolution for GPCPLL programming (Alex Frid, 2015-03-18)
  Used 1 kHz resolution (instead of 1 MHz) for GPCPLL programming: limit
  specifications, calculating GPCPLL settings, storing target frequency
  values, and providing output to the debug monitor. Updated comments in the
  clock header to properly reflect frequency units.
  Bug 1450787
  Change-Id: Ica58f794b82522288f2883c40626d82dbd794902
  Signed-off-by: Alex Frid <afrid@nvidia.com>
  Reviewed-on: http://git-master/r/437943
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
* gpu: nvgpu: force idle if railgate not supported (Deepak Nibade, 2015-03-18)
  Add a way to force idle and reset the GPU in the case where GPU rail gating
  is not supported (i.e. platform->can_railgate = false). In this case, we
  follow the sequence below:
  - once the GPU is idle, get a runtime reference, which enables the clocks
  - call prepare_poweroff() to save the state explicitly
  - perform an explicit reset assert/deassert
  - call finalize_poweron() to restore the state
  - drop the runtime reference taken earlier
  Bug 1525284
  Change-Id: Id5f3ec152093acd585631dfbf785d8e0561f9048
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/435620
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
  Tested-by: Arto Merilainen <amerilainen@nvidia.com>
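  A sketch of that sequence, assuming hypothetical driver callbacks for the
  save/restore and reset steps (only pm_runtime_get_sync()/pm_runtime_put()
  are real kernel APIs here):

    #include <linux/device.h>
    #include <linux/pm_runtime.h>

    /* Hypothetical driver callbacks, declared here only for the sketch. */
    void gpu_prepare_poweroff(struct device *dev);
    void gpu_reset_assert(struct device *dev);
    void gpu_reset_deassert(struct device *dev);
    int gpu_finalize_poweron(struct device *dev);

    static int force_reset_sketch(struct device *dev)
    {
            int err;

            /* Once the GPU is idle, take a runtime reference to keep clocks on. */
            pm_runtime_get_sync(dev);

            gpu_prepare_poweroff(dev);      /* save state explicitly */
            gpu_reset_assert(dev);
            gpu_reset_deassert(dev);
            err = gpu_finalize_poweron(dev);        /* restore state */

            /* Drop the runtime reference taken earlier. */
            pm_runtime_put(dev);
            return err;
    }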
* gpu: nvgpu: increase delays in do_idle() (Deepak Nibade, 2015-03-18)
  Increase the wait delays in do_idle() to 2000 ms and use msleep() instead of
  mdelay(). Also, to check whether the GPU is rail gated, add a do-while()
  loop which keeps checking the status and bails out as soon as the GPU is
  rail gated.
  This increase in delays is required to allow the GPU sufficient time to
  complete its work and get rail gated. These delays are especially needed
  during stress testing, where it is possible that a large amount of GPU work
  is blocked during do_idle(), and it might then take more time to complete
  while the next do_idle() is waiting for it.
  Also, remove waiting on gk20a_wait_channel_idle() for each channel, since it
  is sufficient to wait for the refcount to be 1.
  Bug 1529160
  Change-Id: Ie541485fbdda76d79ae4a75dda928da240fc5d8f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/434192
  (cherry picked from commit 5a621bf2aaf3355e1330a662dc98e943d68ef86d)
  Reviewed-on: http://git-master/r/435133
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>
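  A sketch of a poll-until-railgated loop with a 2000 ms budget, in the spirit
  of the change above; gpu_is_railgated() stands in for whatever platform
  callback reports the rail state and is an assumption:

    #include <linux/delay.h>
    #include <linux/jiffies.h>
    #include <linux/types.h>

    bool gpu_is_railgated(void);    /* hypothetical platform callback */

    static bool wait_for_railgate_sketch(void)
    {
            unsigned long end = jiffies + msecs_to_jiffies(2000);

            do {
                    if (gpu_is_railgated())
                            return true;
                    msleep(10);     /* sleep rather than busy-wait with mdelay() */
            } while (time_before(jiffies, end));

            return gpu_is_railgated();
    }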
* gpu: nvgpu: remove redundant busy()/idle() calls (Deepak Nibade, 2015-03-18)
  The gk20a_busy() call in channel_syncpt_incr() and the corresponding
  gk20a_idle() call in channel_update() are redundant, since they are already
  encapsulated inside another pair of busy/idle calls. This busy/idle pair is
  called only from submit_gpfifo(), and submit_gpfifo() already has its own
  busy/idle which it holds for the whole path, so the redundant pair can be
  removed.
  Removing it also prevents a deadlock scenario while do_idle() is in
  progress, as follows:
  - in submit_gpfifo() we first call gk20a_busy(), which acquires the busy
    read semaphore
  - in do_idle() we acquire the busy write semaphore and wait for current jobs
    to finish
  - now submit_gpfifo() encounters the second gk20a_busy() and requests the
    busy read semaphore again
  - this results in a deadlock: do_idle() is waiting for submit_gpfifo() to
    complete, and submit_gpfifo() is waiting for the busy lock held by
    do_idle(), so it cannot complete
  Bug 1529160
  Change-Id: I96e4368352f693e93524f0f61689b4447e5331ea
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/434191
  (cherry picked from commit c4315c6caa42bab72ba6017c7ded25f4e9363dec)
  Reviewed-on: http://git-master/r/435132
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>
  Tested-by: Sachin Nikam <snikam@nvidia.com>
* gpu: nvgpu: fix race between do_idle() and unrailgate() (Deepak Nibade, 2015-03-18)
  While we are executing the do_idle() API, it is possible that unrailgate()
  gets invoked in the midst of idling the GPU, which can make do_idle() fail.
  To prevent simultaneous execution of these methods, add a mutex
  railgate_lock and acquire it during the do_idle() and unrailgate() APIs.
  Also, keep this lock held if do_idle() is successful. On success, the lock
  is released in do_unidle(); otherwise, release the lock before returning.
  Note that this lock should not be held in railgate() itself, since we do not
  want railgate() to be blocked during do_idle().
  Bug 1529160
  Change-Id: I87114b5367eaa217376455a2699c0d21c451c889
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/434190
  (cherry picked from commit 561dc8e0933ff2d72573292968b893a52f5f783a)
  Reviewed-on: http://git-master/r/435131
  Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>
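  A sketch of serializing do_idle() against unrailgate() with a lock that
  stays held on success; the function names and idle_the_gpu() helper are
  illustrative stand-ins, not the driver's actual symbols:

    #include <linux/mutex.h>

    int idle_the_gpu(void);         /* hypothetical: does the actual idling */

    static DEFINE_MUTEX(railgate_lock);

    static int do_idle_sketch(void)
    {
            int err;

            mutex_lock(&railgate_lock);
            err = idle_the_gpu();
            if (err) {
                    /* Failure: let unrailgate() run again. */
                    mutex_unlock(&railgate_lock);
                    return err;
            }
            /* Success: hold the lock until do_unidle() releases it. */
            return 0;
    }

    static void do_unidle_sketch(void)
    {
            mutex_unlock(&railgate_lock);
    }

    static void unrailgate_sketch(void)
    {
            mutex_lock(&railgate_lock);
            /* ... bring the rail back up ... */
            mutex_unlock(&railgate_lock);
    }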
* Revert "gpu: nvgpu: Dump offending push buffer fragment"Arto Merilainen2015-03-18
| | | | | | | | | | | | | | | | | | | Channel and gpfifo allocations are entirely separated from each other, however, the code here assumes that active channel means that the channel also has a gpfifo. This reverts commit a24602f094380539788696d1b1567a4f4d914b17 which added gpfifo dump. Changing debug dumping to be safe requires refactoring the channel release code to use proper locking. Bug 1530226 Change-Id: I2fb02542a17dd56a0a9ce732b327e34b85ade8b9 Signed-off-by: Arto Merilainen <amerilainen@nvidia.com> Reviewed-on: http://git-master/r/434038 Reviewed-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-by: Shridhar Rasal <srasal@nvidia.com> Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
* gpu: nvgpu: Enable Falcon trace prints (Vijayakumar, 2015-03-18)
  Dump the Falcon trace on PMU crash and add the debugfs node falc_trace. This
  requires debug tracing to be enabled in the GPMU binary.
  Change-Id: I093ef196202958e46d8d9636a848bd6733b5e4cc
  Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
  Reviewed-on: http://git-master/r/432732
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
  Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
* gpu: nvgpu: Update generic platform (Arto Merilainen, 2015-03-18)
  This patch adds the .is_railgated() callback for the generic gpu platform.
  Change-Id: Ief13a6fba82b376aafbe861e8f3823a19bb7f679
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/433059
  Reviewed-by: Hiroshi Doyu <hdoyu@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
* gpu: nvgpu: Support probing host1x link from apps (Arto Merilainen, 2015-03-18)
  This patch adds support to check if the host1x link exists and is supported,
  using the gpu characteristics ioctl.
  Bug 1459653
  Change-Id: I832eea217ed7f007e341dfde5769887e0882d6bb
  Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
  Reviewed-on: http://git-master/r/433058
* gpu: nvgpu: sysfs node to update aelpg parameter (Mahantesh Kumbar, 2015-03-18)
  Added a sysfs node to update the AELPG parameters. Pass the parameters in
  the following order:
  SAMPLING_PERIOD_PG_DEFAULT_US, MINIMUM_IDLE_FILTER_DEFAULT_US,
  MINIMUM_TARGET_SAVING_DEFAULT_US, POWER_BREAKEVEN_DEFAULT_US,
  CYCLES_PER_SAMPLE_MAX_DEFAULT
  Bug 1464737
  Change-Id: I46873c463820f30f190c722d7ed038622cb2710f
  Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
  Reviewed-on: http://git-master/r/422702
  Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
  Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
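  A sketch of a sysfs store callback that parses the five values in the order
  listed above; the attribute name, the whitespace-separated input format and
  the apply_aelpg_params() helper are assumptions for illustration:

    #include <linux/device.h>
    #include <linux/errno.h>
    #include <linux/kernel.h>
    #include <linux/stat.h>

    int apply_aelpg_params(const u32 p[5]);     /* hypothetical helper */

    static ssize_t aelpg_param_store(struct device *dev,
                                     struct device_attribute *attr,
                                     const char *buf, size_t count)
    {
            u32 p[5];

            /* Expect the five values in the order listed in the commit. */
            if (sscanf(buf, "%u %u %u %u %u",
                       &p[0], &p[1], &p[2], &p[3], &p[4]) != 5)
                    return -EINVAL;

            if (apply_aelpg_params(p))
                    return -EIO;

            return count;
    }
    static DEVICE_ATTR(aelpg_param, S_IWUSR, NULL, aelpg_param_store);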
* gpu: nvgpu: Remove GPU MMIO access on power/rail gate (Alex Waterman, 2015-03-18)
  This is to weed out accesses to the GPU while it is gated. Otherwise the
  accesses are silently dropped or cause a HW hang (on older chips).
  Bug 1514949
  Change-Id: Ice4cdb9f1f736978ebb3db847f39c7439bf98134
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/416339
  Reviewed-by: Mitch Luban <mluban@nvidia.com>
* gpu: nvgpu: Wait for idle via FIFO registers (Terje Bergstrom, 2015-03-18)
  Wait for engine idle via the FIFO's engine status instead of submitting a
  WFI to the channel. Submitting a WFI and waiting is not robust, and the wait
  might invoke a debug dump, which cannot be done while powering down.
  Bug 1499214
  Change-Id: I4d52e8558e1a862ad4292036594d81ebfbd5f36b
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/432151
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>
  Tested-by: Sachin Nikam <snikam@nvidia.com>
* gpu: nvgpu: Bump unmap retries if not silicon (Terje Bergstrom, 2015-03-18)
  In simulation and emulation 50 ms is not enough to ensure a job is complete.
  Bump it to 5 s when not running on silicon.
  Bug 1510751
  Change-Id: I90883b70ce2a75a8f07344f713d647b3fa0d0c7d
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/432044
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Chris Dragan <kdragan@nvidia.com>
  Tested-by: Chris Dragan <kdragan@nvidia.com>
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Shridhar Rasal <srasal@nvidia.com>
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>
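  A sketch of scaling the timeout for pre-silicon platforms; the 50 ms and 5 s
  values come from the commit text, and tegra_platform_is_silicon() is the
  Tegra helper assumed to be available on this kernel:

    #include <linux/types.h>

    bool tegra_platform_is_silicon(void);   /* provided by the Tegra SoC code */

    /* 50 ms on silicon, 5 s on simulation/emulation. */
    static unsigned int unmap_timeout_ms(void)
    {
            return tegra_platform_is_silicon() ? 50 : 5000;
    }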
* gpu: nvgpu: Clear channel class on open (Terje Bergstrom, 2015-03-18)
  The channel class needs to be cleared when a channel is opened. Otherwise
  the class from the channel's previous use remains, and we can accidentally
  use KEPLER_C methods even if KEPLER_C is not allocated.
  Bug 1487928
  Bug 200000669
  Change-Id: I3e1ae8d5edbdd82fa569b38a89a89dedb69ee773
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/428866
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Shridhar Rasal <srasal@nvidia.com>
  Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>
* gpu: nvgpu: Free allocated gr_ctx (Terje Bergstrom, 2015-03-18)
  gr_ctx is nowadays allocated separately; add a matching kfree() to prevent a
  memory leak.
  Bug 1528275
  Change-Id: I942812a483adad47e82bc75a7bda5942c30c527a
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/428890
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Shridhar Rasal <srasal@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>
* gpu: nvgpu: fix possible PMU isr race (Deepak Nibade, 2015-03-18)
  Possible race description:
  - while the PMU is booting, it sends messages to the kernel, which we
    process in gk20a_pmu_isr()
  - but while the messages are being processed, it is possible that we are on
    the way to rail gate the GPU and have already called pmu_destroy()
  - this could lead to hangs if, while processing messages, GR is already off
  To fix this, introduce another mutex, isr_enable_lock, and a flag to turn
  ISR processing on/off:
  - when we enable the PMU, take the lock and set the flag
  - in pmu_destroy(), take the lock and clear the flag
  - in pmu_isr(), take the lock and check whether the flag is set; if it is
    not set, return, otherwise proceed with the messages
  Bug 200014542
  Bug 200014887
  Change-Id: I0204d8a00e4563859eebc807d4ac7d26161316ea
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/428371
  (cherry picked from commit 9a37528314f2a2504e4530719f817a93db9a5bf0)
  Reviewed-on: http://git-master/r/428352
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Sachin Nikam <snikam@nvidia.com>
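  A sketch of gating the ISR-driven message processing behind a lock and an
  enable flag, as described above; the struct and function names are
  illustrative stand-ins, not the driver's real symbols:

    #include <linux/mutex.h>
    #include <linux/types.h>

    struct pmu_isr_state_sketch {
            struct mutex isr_enable_lock;
            bool isr_enabled;
    };

    static void pmu_enable_isr_sketch(struct pmu_isr_state_sketch *s)
    {
            mutex_lock(&s->isr_enable_lock);
            s->isr_enabled = true;
            mutex_unlock(&s->isr_enable_lock);
    }

    static void pmu_destroy_sketch(struct pmu_isr_state_sketch *s)
    {
            mutex_lock(&s->isr_enable_lock);
            s->isr_enabled = false;
            mutex_unlock(&s->isr_enable_lock);
            /* ... tear down PMU state ... */
    }

    static void pmu_isr_sketch(struct pmu_isr_state_sketch *s)
    {
            mutex_lock(&s->isr_enable_lock);
            if (!s->isr_enabled) {
                    /* PMU is being torn down; drop the interrupt. */
                    mutex_unlock(&s->isr_enable_lock);
                    return;
            }
            /* ... process PMU messages while the GPU is known to be up ... */
            mutex_unlock(&s->isr_enable_lock);
    }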
* gpu: nvgpu: fix powergate disabling order (Vijayakumar, 2015-03-18)
  ELPG has to be disabled before we write to the clock gating registers. If
  ELPG is engaged during a clock gating register write, it will cause an error
  in the ELPG engine.
  Bug 200013495
  Bug 200014542
  Change-Id: I57d1c59fc9311686829d898faddc90149df4cb46
  Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
  Reviewed-on: http://git-master/r/432117
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Supriya Sharatkumar <ssharatkumar@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Mitch Luban <mluban@nvidia.com>
* gpu: nvgpu: Initialize PMU ucode only once (Terje Bergstrom, 2015-03-18)
  Initialize PMU ucode only once, and skip it on the next GPU boot.
  Bug 1528275
  Change-Id: Ifb95edb380518fae48fdc3b90b00b450fe30c439
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/428897
  Reviewed-by: Shridhar Rasal <srasal@nvidia.com>
* gpu: nvgpu: Boot FECS to secure mode (Terje Bergstrom, 2015-03-18)
  Boot FECS to secure mode if ACR is enabled.
  Bug 200006956
  Change-Id: Ifc107704a6456af837b7f6c513c04d152b2f4d3a
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/424251
* gpu: nvgpu: Add PMU state ELPG booting (Terje Bergstrom, 2015-03-18)
  Add the PMU state "ELPG booting". Prevent ISR processing when the PMU is in
  the OFF state.
  Bug 200006956
  Change-Id: Ibcf69a2d81965cc87f520bf864c4425681f04531
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/424769
* gpu: nvgpu: Separate PMU firmware load from init (Terje Bergstrom, 2015-03-18)
  Separate the code to load PMU firmware from the software init. This allows
  folding ACR and non-ACR PMU software initialization sequences.
  Bug 200006956
  Change-Id: I74b289747852167e8ebf1be63036c790ae634da4
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/424768
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
* gpu: nvgpu: do not abort probe if secure page alloc fails (Deepak Nibade, 2015-03-18)
  Do not abort GPU probe if secure page alloc fails. We can just note that
  this allocation failed (using bool secure_alloc_ready) and prevent further
  secure memory allocation if this flag is not set.
  Bug 1525465
  Change-Id: Ie4eb6393951690174013d2de3db507876d7b657f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/427730
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Shridhar Rasal <srasal@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
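  A sketch of tolerating a failed secure-page allocation at probe time; the
  flag name follows the commit text, while the struct and the allocation hook
  are hypothetical:

    #include <linux/device.h>
    #include <linux/types.h>

    struct secure_platform_sketch {
            bool secure_alloc_ready;
            int (*secure_page_alloc)(struct device *dev);   /* hypothetical hook */
    };

    static void probe_secure_pages_sketch(struct secure_platform_sketch *p,
                                          struct device *dev)
    {
            int err = p->secure_page_alloc(dev);

            /* Do not fail the probe; just record whether secure allocs work. */
            p->secure_alloc_ready = (err == 0);
    }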
* gpu: nvgpu: Remove extra g field in pmu_gk20a (Terje Bergstrom, 2015-03-18)
  pmu_gk20a has a pointer to struct gk20a. As pmu_gk20a is part of gk20a,
  there's no need to have the circular dependency.
  Bug 200006956
  Change-Id: I6d5d10a93b2fba4a26a1e28b3c5206506dc6cc04
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/424767
* gpu: nvgpu: add TSG support for engine context (Deepak Nibade, 2015-03-18)
  All channels in a TSG need to share the same engine context, i.e. the
  pointer in the RAMFC of all channels in a TSG must point to the same
  NV_RAMIN_GR_WFI_TARGET.
  To get this, add a pointer to gr_ctx inside the TSG struct so that the TSG
  can maintain its own unique gr_ctx. Also, change the type of gr_ctx in a
  channel to a pointer variable so that if the channel is part of a TSG it can
  point to the TSG's gr_ctx, otherwise it points to its own gr_ctx.
  In gk20a_alloc_obj_ctx(), allocate gr_ctx as below:
  1) If the channel is not part of any TSG
     - allocate its own gr_ctx buffer if it is not already allocated
  2) If the channel is part of a TSG
     - check if the TSG has already allocated a gr_ctx (as part of the TSG)
     - if yes, the channel's gr_ctx will point to the TSG's
     - if not, the channel is the first to be bound to this TSG; in this case
       allocate a new gr_ctx on the TSG first and then make the channel's
       gr_ctx point to it
  The gr_ctx is released as below:
  1) If the channel is not part of a TSG, it is released when the channel is
     closed
  2) Otherwise, it is released when the TSG itself is closed
  Bug 1470692
  Change-Id: Id347217d5b462e0e972cd3d79d17795b37034a50
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/417065
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
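  A sketch of the allocation rules above: one shared gr_ctx per TSG, or a
  private one per standalone channel. All types and field names here are
  illustrative, not the driver's definitions:

    #include <linux/errno.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    struct gr_ctx_sketch {
            u64 gpu_va;     /* stand-in for the real engine context buffer */
    };

    struct tsg_sketch {
            struct gr_ctx_sketch *tsg_gr_ctx;
    };

    struct channel_sketch {
            struct tsg_sketch *tsg;         /* NULL if not in a TSG */
            struct gr_ctx_sketch *gr_ctx;   /* pointer, possibly shared */
    };

    static int alloc_obj_ctx_sketch(struct channel_sketch *ch)
    {
            if (!ch->tsg) {
                    /* Standalone channel: allocate its own context once. */
                    if (!ch->gr_ctx)
                            ch->gr_ctx = kzalloc(sizeof(*ch->gr_ctx),
                                                 GFP_KERNEL);
                    return ch->gr_ctx ? 0 : -ENOMEM;
            }

            /* TSG channel: the first bound channel allocates the shared ctx. */
            if (!ch->tsg->tsg_gr_ctx)
                    ch->tsg->tsg_gr_ctx =
                            kzalloc(sizeof(*ch->tsg->tsg_gr_ctx), GFP_KERNEL);
            if (!ch->tsg->tsg_gr_ctx)
                    return -ENOMEM;

            ch->gr_ctx = ch->tsg->tsg_gr_ctx;
            return 0;
    }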
* gpu: nvgpu: add TSG support to runlists (Deepak Nibade, 2015-03-18)
  - when a TSG channel is made runnable, add it to the TSG's runnable list
  - when a TSG channel is removed from the runlist, remove it from the TSG's
    runnable list
  When we rewrite the entire runlist:
  - first add all the channels which are not part of any TSG
  - then find all active TSGs and add an entry in the runlist for each TSG
    (with the TSG id and the length of the TSG)
  - then write entries for each channel in that TSG
  Bug 1470692
  Change-Id: Ic55a4d5959abc72cd20b8224eb4c31d3ff411861
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/416612
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
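  A sketch of that runlist construction order; the entry layout and the
  encoder helpers are hypothetical stand-ins for the driver's generated
  register accessors:

    #include <linux/types.h>

    struct rl_entry_sketch {
            u32 word0, word1;
    };

    /* Hypothetical encoders for a bare-channel entry and a TSG header entry. */
    struct rl_entry_sketch encode_channel_entry(u32 chid);
    struct rl_entry_sketch encode_tsg_header(u32 tsgid, u32 num_channels);

    struct tsg_desc_sketch {
            u32 tsgid;
            u32 num_channels;
            const u32 *chids;       /* runnable channels of this TSG */
    };

    static u32 build_runlist_sketch(struct rl_entry_sketch *rl,
                                    const u32 *bare_chids, u32 n_bare,
                                    const struct tsg_desc_sketch *tsgs,
                                    u32 n_tsgs)
    {
            u32 n = 0, i, j;

            /* 1) Channels that are not part of any TSG. */
            for (i = 0; i < n_bare; i++)
                    rl[n++] = encode_channel_entry(bare_chids[i]);

            /* 2) For each active TSG: one header entry, then its channels. */
            for (i = 0; i < n_tsgs; i++) {
                    rl[n++] = encode_tsg_header(tsgs[i].tsgid,
                                                tsgs[i].num_channels);
                    for (j = 0; j < tsgs[i].num_channels; j++)
                            rl[n++] = encode_channel_entry(tsgs[i].chids[j]);
            }

            return n;       /* number of entries written */
    }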
* gpu: nvgpu: add accessors for runlist ram entry (Deepak Nibade, 2015-03-18)
  Add accessors to modify the contents of a runlist RAM (RAMRL) entry. Using
  these accessors we can modify a runlist entry to specify it as a regular
  channel entry or a TSG entry.
  Bug 1470692
  Change-Id: If39759941ecb07af11152dbddb6fb5a67c14b26e
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/416611
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Randy Spurlock <rspurlock@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add kernel APIs for TSG support (Deepak Nibade, 2015-03-18)
  Add support to create/destroy TSGs using the node "/dev/nvhost-tsg-gpu".
  Provide the below IOCTLs to bind/unbind channels to/from TSGs:
  NVGPU_TSG_IOCTL_BIND_CHANNEL
  NVGPU_TSG_IOCTL_UNBIND_CHANNEL
  Bug 1470692
  Change-Id: Iaf9f16a522379eb943906624548f8d28fc6d4486
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/416610
* gpu: nvgpu: make pm config as platform data (Seshendra Gadagottu, 2015-03-18)
  Make the GPU power management feature configuration part of the platform
  data. Keep the existing status for gk20a and disable all power features for
  gm20b.
  Bug 1523728
  Change-Id: Ife7786863f18e21b882ac77085c7abc7c84d4cfc
  Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: http://git-master/r/426369
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Supriya Sharatkumar <ssharatkumar@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add explicit wmb() before reg write (Deepak Nibade, 2015-03-18)
  Add an explicit memory barrier wmb() before writing register values. Also
  call writel_relaxed() instead of writel() to skip its internal wmb() call,
  which is conditional on some configs.
  Bug 200012037
  Change-Id: I9c545138314b6e73fec2a4aff2b1956444fac806
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/421463
  Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
  Tested-by: Krishna Reddy <vdumpa@nvidia.com>
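  A minimal sketch of the write-helper pattern described above, an explicit
  wmb() followed by writel_relaxed() instead of relying on writel()'s internal
  barrier; the wrapper name is illustrative:

    #include <linux/io.h>
    #include <linux/types.h>

    static inline void reg_write_sketch(void __iomem *base, u32 offset, u32 val)
    {
            /* Make sure all prior memory writes are visible to the device. */
            wmb();
            writel_relaxed(val, base + offset);
    }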