path: root/drivers
Commit message    Author    Age
* gpu: nvgpu: Add PMU state ELPG booting    Terje Bergstrom    2015-03-18

    Add PMU state ELPG booting. Prevent ISR processing when the PMU is in OFF state.

    Bug 200006956

    Change-Id: Ibcf69a2d81965cc87f520bf864c4425681f04531
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/424769

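The ISR gating mentioned above amounts to an early-out guard. A minimal sketch, assuming a hypothetical state field and enum; the real driver's names may differ:

```c
/* Sketch only: ignore PMU interrupts while the PMU is off.
 * The state field and enum values are illustrative assumptions. */
enum pmu_state_sketch {
	PMU_STATE_OFF_SKETCH,
	PMU_STATE_BOOTING_SKETCH,
	PMU_STATE_READY_SKETCH,
};

struct pmu_sketch {
	enum pmu_state_sketch pmu_state;
};

static void pmu_isr_sketch(struct pmu_sketch *pmu)
{
	if (pmu->pmu_state == PMU_STATE_OFF_SKETCH)
		return;   /* PMU is off: nothing to process */

	/* ... normal PMU interrupt handling would continue here ... */
}
```
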
* gpu: nvgpu: Separate PMU firmware load from init    Terje Bergstrom    2015-03-18

    Separate the code to load PMU firmware from the software init. This allows folding ACR and non-ACR PMU software initialization sequences.

    Bug 200006956

    Change-Id: I74b289747852167e8ebf1be63036c790ae634da4
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/424768
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: do not abort probe if secure page alloc fails    Deepak Nibade    2015-03-18

    Do not abort GPU probe if the secure page alloc fails. We can just note that this allocation failed (using a bool secure_alloc_ready) and prevent further secure memory allocation while this flag is not set.

    Bug 1525465

    Change-Id: Ie4eb6393951690174013d2de3db507876d7b657f
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/427730
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Shridhar Rasal <srasal@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

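A rough sketch of the probe-time behavior this describes; the struct layout and allocator callback are assumptions, not the driver's actual names:

```c
#include <linux/errno.h>
#include <linux/types.h>

/* Sketch only: record whether the secure page allocation worked instead of
 * failing the whole probe. The flag and helpers are illustrative. */
struct secure_state_sketch {
	bool secure_alloc_ready;   /* gates any later secure allocation */
};

static int probe_secure_page_sketch(struct secure_state_sketch *s,
				    int (*alloc_secure_page)(void))
{
	if (alloc_secure_page() != 0) {
		s->secure_alloc_ready = false;   /* note the failure, keep probing */
		return 0;                        /* do not abort the probe */
	}
	s->secure_alloc_ready = true;
	return 0;
}

static int secure_alloc_sketch(struct secure_state_sketch *s)
{
	if (!s->secure_alloc_ready)
		return -ENOSYS;   /* refuse further secure allocations */
	/* ... the actual secure allocation would go here ... */
	return 0;
}
```
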
* gpu: nvgpu: Add timeout to L2 flush    Alex Waterman    2015-03-18

    Add a timeout mechanism to the L2 flushing code for gm20b. Previously the code could spin forever in a loop if an L2 issue caused the flush to fail.

    Change-Id: I742c7671bac92aeb8e9674c43d30c45b2de4a836
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/423842
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

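A minimal sketch of a bounded flush wait; the retry count, delay, and status callback are assumptions rather than the values used in the gm20b code:

```c
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/printk.h>
#include <linux/types.h>

/* Sketch only: poll for flush completion with an upper bound instead of
 * spinning forever. The status callback stands in for the real register read. */
static int l2_flush_wait_sketch(bool (*flush_pending)(void))
{
	int retries = 200;

	while (flush_pending()) {
		if (--retries == 0) {
			pr_warn("l2 flush timed out\n");
			return -EBUSY;   /* give up instead of hanging */
		}
		udelay(5);               /* brief back-off between polls */
	}
	return 0;
}
```
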
* gpu: nvgpu: Remove extra g field in pmu_gk20a    Terje Bergstrom    2015-03-18

    pmu_gk20a holds a struct gk20a * back-pointer. As pmu_gk20a is part of gk20a, there is no need to have the circular dependency.

    Bug 200006956

    Change-Id: I6d5d10a93b2fba4a26a1e28b3c5206506dc6cc04
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/424767

* gpu: nvgpu: add TSG support for engine context    Deepak Nibade    2015-03-18

    All channels in a TSG need to share the same engine context, i.e. the RAMFC of every channel in a TSG must point to the same NV_RAMIN_GR_WFI_TARGET.

    To get this, add a pointer to gr_ctx inside the TSG struct so that a TSG can maintain its own unique gr_ctx. Also, change the gr_ctx member of a channel to a pointer, so that a channel that is part of a TSG can point to the TSG's gr_ctx, while a standalone channel points to its own gr_ctx.

    In gk20a_alloc_obj_ctx(), allocate gr_ctx as below:
    1) If the channel is not part of any TSG, allocate its own gr_ctx buffer if it is not already allocated.
    2) If the channel is part of a TSG, check whether the TSG has already allocated a gr_ctx:
       - If yes, the channel's gr_ctx will point to the TSG's.
       - If not, the channel is the first to be bound to this TSG; allocate a new gr_ctx on the TSG first and then make the channel's gr_ctx point to it.

    gr_ctx is released as below:
    1) If the channel is not part of a TSG, it is released when the channel is closed.
    2) Otherwise, it is released when the TSG itself is closed.

    Bug 1470692

    Change-Id: Id347217d5b462e0e972cd3d79d17795b37034a50
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/417065
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

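The allocation rules above reduce to a small decision in gk20a_alloc_obj_ctx(). A hedged sketch of that logic, with all struct layouts and the allocator helper invented for illustration:

```c
#include <linux/errno.h>

struct gr_ctx_sketch;                      /* stands in for the real gr_ctx buffer */

struct tsg_sketch {
	struct gr_ctx_sketch *tsg_gr_ctx;  /* shared engine context for the TSG */
};

struct channel_sketch {
	struct tsg_sketch *tsg;            /* NULL when not bound to a TSG */
	struct gr_ctx_sketch *gr_ctx;      /* now a pointer, not an embedded struct */
};

static int alloc_obj_ctx_sketch(struct channel_sketch *c,
				struct gr_ctx_sketch *(*alloc_gr_ctx)(void))
{
	if (!c->tsg) {
		/* 1) Standalone channel: allocate its own gr_ctx once. */
		if (!c->gr_ctx)
			c->gr_ctx = alloc_gr_ctx();
	} else {
		/* 2) TSG channel: the first channel bound to the TSG allocates
		 *    the shared gr_ctx; later channels just point at it. */
		if (!c->tsg->tsg_gr_ctx)
			c->tsg->tsg_gr_ctx = alloc_gr_ctx();
		c->gr_ctx = c->tsg->tsg_gr_ctx;
	}

	return c->gr_ctx ? 0 : -ENOMEM;
}
```
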
* gpu: nvgpu: add TSG support to runlists    Deepak Nibade    2015-03-18

    - When a TSG channel is made runnable, add it to the TSG's runnable list.
    - When a TSG channel is removed from the runlist, remove it from the TSG's runnable list.

    When we rewrite the entire runlist:
    - first add all the channels which are not part of any TSG
    - then find all active TSGs and add an entry in the runlist for each TSG (with the TSG id and the length of the TSG)
    - then write entries for each channel in that TSG

    Bug 1470692

    Change-Id: Ic55a4d5959abc72cd20b8224eb4c31d3ff411861
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/416612
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

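A sketch of the runlist rewrite order just described; the fixed-size arrays, field names, and entry encoders are illustrative assumptions (the real driver emits hardware-format RAMRL entries):

```c
#include <linux/types.h>

struct tsg_rl_sketch {
	u32 tsgid;
	u32 num_runnable;
	u32 runnable_chid[8];
};

struct ch_rl_sketch {
	u32 hw_chid;
	bool runnable;
	bool in_tsg;
};

/* Sketch only: the order in which entries are written when the whole
 * runlist is rebuilt. Returns the number of entries produced. */
static u32 build_runlist_sketch(const struct ch_rl_sketch *ch, u32 nch,
				const struct tsg_rl_sketch *tsg, u32 ntsg,
				u32 (*ch_entry)(u32 chid),
				u32 (*tsg_entry)(u32 tsgid, u32 len),
				u32 *out)
{
	u32 n = 0, i, j;

	/* 1. Channels that do not belong to any TSG. */
	for (i = 0; i < nch; i++)
		if (ch[i].runnable && !ch[i].in_tsg)
			out[n++] = ch_entry(ch[i].hw_chid);

	/* 2. Each active TSG: a header entry carrying the TSG id and length,
	 *    followed by one entry per runnable channel in that TSG. */
	for (i = 0; i < ntsg; i++) {
		if (!tsg[i].num_runnable)
			continue;
		out[n++] = tsg_entry(tsg[i].tsgid, tsg[i].num_runnable);
		for (j = 0; j < tsg[i].num_runnable; j++)
			out[n++] = ch_entry(tsg[i].runnable_chid[j]);
	}
	return n;
}
```
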
* gpu: nvgpu: add accessors for runlist ram entry    Deepak Nibade    2015-03-18

    Add accessors to modify the contents of a runlist RAM (RAMRL) entry. Using these accessors we can modify a runlist entry to specify it as a regular channel entry or a TSG entry.

    Bug 1470692

    Change-Id: If39759941ecb07af11152dbddb6fb5a67c14b26e
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/416611
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Randy Spurlock <rspurlock@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: add kernel APIs for TSG support    Deepak Nibade    2015-03-18

    Add support to create/destroy TSGs using node "/dev/nvhost-tsg-gpu".

    Provide below IOCTLs to bind/unbind channels to/from TSGs:
    NVGPU_TSG_IOCTL_BIND_CHANNEL
    NVGPU_TSG_IOCTL_UNBIND_CHANNEL

    Bug 1470692

    Change-Id: Iaf9f16a522379eb943906624548f8d28fc6d4486
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/416610

* gpu: nvgpu: make pm config as platform data    Seshendra Gadagottu    2015-03-18

    Make the GPU power management feature configuration part of the platform data. Keep the existing status for gk20a and disable all power features for gm20b.

    Bug 1523728

    Change-Id: Ife7786863f18e21b882ac77085c7abc7c84d4cfc
    Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: http://git-master/r/426369
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Supriya Sharatkumar <ssharatkumar@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: add explicit wmb() before reg write    Deepak Nibade    2015-03-18

    Add an explicit memory barrier wmb() before writing register values. Also call writel_relaxed() instead of writel() to skip its internal wmb() call, which is conditional on some configs.

    Bug 200012037

    Change-Id: I9c545138314b6e73fec2a4aff2b1956444fac806
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/421463
    Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
    Tested-by: Krishna Reddy <vdumpa@nvidia.com>

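The pattern reads roughly as below; a sketch assuming a simple register-write helper, not the driver's actual gk20a_writel() implementation:

```c
#include <linux/io.h>
#include <linux/types.h>

/* Sketch only: make the ordering explicit with wmb(), then use the relaxed
 * MMIO accessor so we do not rely on writel()'s config-dependent barrier. */
static inline void reg_write_sketch(void __iomem *regs, u32 offset, u32 val)
{
	wmb();                               /* order prior memory writes first */
	writel_relaxed(val, regs + offset);  /* plain write, no implicit barrier */
}
```
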
* nvgpu: Host side changes to support HS mode    Supriya    2015-03-18

    GM20B changes in PMU boot sequence to support booting in HS mode and LS mode.

    Bug 1509680

    Change-Id: I2832eda0efe17dd5e3a8f11dd06e7d4da267be70
    Signed-off-by: Supriya <ssharatkumar@nvidia.com>
    Reviewed-on: http://git-master/r/423140
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Shridhar Rasal <srasal@nvidia.com>
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>

* gpu: nvgpu: Dump offending push buffer fragment    Terje Bergstrom    2015-03-18

    When outputting debug dump, print the contents of the current push buffer segment. Also changes the debug dump to use pr_cont when applicable, and dumps state before recovering in case the channel was not loaded to an engine.

    Bug 1498688

    Change-Id: I5ca12f64bae8f12333d82350278c700645d5007e
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/422198

* gpu: nvgpu: do not idle timed out channels    Deepak Nibade    2015-03-18

    While suspending the device, do not submit WFI on timed out channels. Submitting WFI on timed out channels will cause submit_wfi() to return an error and, as a result of this, rail gating of the device will be prevented.

    Bug 200010416

    Change-Id: Ic097bfdae59dbf9e1f2aea5d8d0431b5f1c3721b
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/422743
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

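A sketch of the suspend-path check this describes; the channel fields and the WFI submit helper are assumptions:

```c
#include <linux/errno.h>
#include <linux/types.h>

struct ch_suspend_sketch {
	bool in_use;
	bool has_timedout;   /* set when the channel previously timed out */
};

/* Sketch only: submit WFI to healthy channels and silently skip timed-out
 * ones, so a failing submit cannot block rail gating of the device. */
static int suspend_channels_sketch(struct ch_suspend_sketch *ch, u32 nch,
				   int (*submit_wfi)(u32 chid))
{
	u32 chid;

	for (chid = 0; chid < nch; chid++) {
		if (!ch[chid].in_use || ch[chid].has_timedout)
			continue;          /* nothing to idle here */
		if (submit_wfi(chid))
			return -EBUSY;     /* genuine failure on a live channel */
	}
	return 0;
}
```
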
* gpu: nvgpu: bail out from poweroff if channel suspend fails    Deepak Nibade    2015-03-18

    During gk20a_pm_prepare_poweroff(), if the call to gk20a_channel_suspend() fails, we proceed to disable other components and then return the error. But when genpd sees the error, it aborts the suspend sequence and keeps the device state as active. Since we have already disabled all the components, the GPU lands in an invalid state.

    Hence, if channel_suspend() fails, do not proceed; return the error immediately.

    Bug 200010416

    Change-Id: I553a2a25832a1be4941bb6b6ce490c950cdbe7fa
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/422248
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: support non-secure boot    Seshendra Gadagottu    2015-03-18

    For non-secure FALCON boot support, bypass the MMU check.

    Bug 1524197

    Change-Id: I735c10a1ea50357c1ea2d5514c73477e76db7e77
    Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: http://git-master/r/424005
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Supriya Sharatkumar <ssharatkumar@nvidia.com>
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: remove unused vpr refetch functions    Deepak Nibade    2015-03-18

    VPR resize is done by forcing the GPU to idle and then updating the VPR size from TLK. There is no need now to call the vpr_resize function from the kernel, and hence these functions can be removed.

    Bug 1487804

    Change-Id: I758a6e0a99a58757866f1138b0a89594e2a33908
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/421703
    (cherry picked from commit 391d9bacf053fe0dacffc76c36768f82912ad1f4)
    Reviewed-on: http://git-master/r/419612
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: allocate secure buffer in probe    Deepak Nibade    2015-03-18

    Allocate a dummy secure buffer of size PAGE_SIZE during gk20a_probe(). This also helps to initiate the first secure memory (VPR) resize call while the GPU is rail gated and in reset. This dummy buffer is released after we allocate some more secure memory buffers in alloc_global_ctx_buffers().

    Bug 1487804

    Change-Id: I61604d9e5ffb585801ee893435c98a0d3e69d666
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/421701
    (cherry picked from commit 4236ab3323ee3c02fac562740d8b80d763589dea)
    Reviewed-on: http://git-master/r/419610
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: add APIs to allocate/free dummy secure buffer    Deepak Nibade    2015-03-18

    Add APIs to allocate and free a dummy secure buffer of size PAGE_SIZE. Also, fix small errors during secure memory alloc/free.

    Bug 1487804

    Change-Id: If078116fb973e81bfcee054b900c09a313e389c6
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/421700
    (cherry picked from commit 5391515dab27cc88b921cf81913085dea98197e0)
    Reviewed-on: http://git-master/r/419609
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Implement L2 flush in fifo recovery    Alex Waterman    2015-03-18

    Implement a full L2 flush (clean and invalidate) for gm20b in the fifo recovery path.

    Bug 1512176

    Change-Id: Ibf89ede9cca65a6868ebff89825869053302a007
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/416435
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Add cache management registers    Alex Waterman    2015-03-18

    Add the necessary cache management registers for doing a full L2 flush in GM20b.

    Bug 1512176

    Change-Id: I7799e5e584238a0af02abbf4f49917d7590d97dc
    Signed-off-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-on: http://git-master/r/417260
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: fix compilation issues with PM disable    Seshendra Gadagottu    2015-03-18

    Fix GPU driver compilation issues when power management and runtime power management are disabled.

    Change-Id: I8e1873871d6f184013b2142dd0cbc32c67774177
    Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: http://git-master/r/417925
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Support semaphore sync when aborting jobs    Lauri Peltonen    2015-03-18

    When aborting jobs on channel error situations, we manually set the channel syncpoint's min == max in gk20a_disable_channel_no_update. Nvhost will notice this manual syncpoint increment, and will call back to gk20a_channel_update, which will clean up the job.

    With semaphore synchronization, we don't have anybody calling back to gk20a_channel_update, so we need to call it ourselves. Release job semaphores (the equivalent of set_min_eq_max) in gk20a_disable_channel_no_update, and if any semaphores were released, call gk20a_channel_update afterwards.

    Because we are actually calling gk20a_channel_update in some situations, gk20a_disable_channel_no_update is no longer an appropriate name for the function. Rename it to gk20a_channel_abort.

    Bug 1450122

    Change-Id: I1267b099a5778041cbc8e91b7184844812145b93
    Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
    Reviewed-on: http://git-master/r/422161
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Improve locking in semaphore_gk20a.c    Lauri Peltonen    2015-03-18

    Fix some possible race conditions when manipulating the mapping list of semaphore pools. Acquire a reference to the vm in gk20a_semaphore_pool_map, and release that reference in gk20a_semaphore_pool_unmap.

    Bug 1450122

    Change-Id: I204e9c3dffd5162538b93e628d016dc06b3a5fb6
    Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
    Reviewed-on: http://git-master/r/422160
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Destroy channel sync before releasing vm    Lauri Peltonen    2015-03-18

    The semaphore backend of gk20a_channel_sync uses the channel vm. We must destroy the channel sync before freeing the channel vm.

    Bug 1450122

    Change-Id: I578567b7500672534d53facc58643df49df8b305
    Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
    Reviewed-on: http://git-master/r/422159
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Update channel on semaphore release    Lauri Peltonen    2015-03-18

    When using semaphore based channel synchronization, a semaphore release may mean that a job has completed. Call gk20a_channel_update from gk20a_channel_semaphore_wakeup to check if there are memory refs to release or sync timelines to signal.

    Bug 1450122

    Change-Id: Ib829c895dab05676c35f974d3f1c3d88c047c9b9
    Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
    Reviewed-on: http://git-master/r/394576
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Add semaphore based gk20a_channel_sync    Lauri Peltonen    2015-03-18

    Add a semaphore implementation of the gk20a_channel_sync interface. Each channel has one semaphore pool, which is mapped as read-write to the channel vm. We allocate one or two semaphores from the pool for each submit.

    The first semaphore is only needed if we need to wait for an opaque sync fd. In that case, we allocate the semaphore and ask the GPU to wait for its value to become 1 (semaphore acquire method). We also queue a kernel work item that waits on the fence fd and subsequently releases the semaphore (sets its value to 1) so that the command buffer can proceed.

    The second semaphore is used on every submit, and is used for work completion tracking. The GPU sets its value to 1 when the command buffer has been processed.

    The channel jobs need to hold references to both semaphores so that their backing semaphore pool slots are not reused while the job is in flight. Therefore gk20a_channel_fence will keep a reference to the semaphore that it represents (channel fences are stored in the job structure). This means that we must diligently close and dup the gk20a_channel_fence objects to avoid leaking semaphores.

    Bug 1450122
    Bug 1445450

    Change-Id: Ib61091a1b7632fa36efe0289011040ef7c4ae8f8
    Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
    Reviewed-on: http://git-master/r/374844
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

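A conceptual sketch of the one-or-two-semaphores-per-submit scheme described above; every name here is illustrative, and the real interface lives behind gk20a_channel_sync:

```c
#include <linux/errno.h>
#include <stddef.h>

struct sema_sketch;        /* one slot in the channel's semaphore pool */

struct submit_sync_sketch {
	struct sema_sketch *wait_sema;   /* only when waiting on an opaque sync fd */
	struct sema_sketch *incr_sema;   /* always: work-completion tracking */
};

/* Sketch only: per-submit semaphore setup. The pool allocator and the
 * deferred fence-wait work item are stand-ins for the real mechanisms. */
static int prepare_submit_sync_sketch(struct submit_sync_sketch *s, int wait_fd,
				      struct sema_sketch *(*pool_alloc)(void),
				      void (*queue_fd_wait_then_release)(int fd,
						struct sema_sketch *sema))
{
	s->wait_sema = NULL;

	if (wait_fd >= 0) {
		/* GPU executes a semaphore acquire: it stalls until the value
		 * becomes 1, which the deferred work sets once the fd signals. */
		s->wait_sema = pool_alloc();
		if (!s->wait_sema)
			return -ENOMEM;
		queue_fd_wait_then_release(wait_fd, s->wait_sema);
	}

	/* GPU releases this one (sets it to 1) when the command buffer is done;
	 * the job keeps a reference so the pool slot is not reused in flight. */
	s->incr_sema = pool_alloc();
	return s->incr_sema ? 0 : -ENOMEM;
}
```
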
* gpu: nvgpu: Turn on scaling when powered    Allen Yu    2015-03-18

    This patch reorders scaling resume to always happen when we power on the GPU, so as to balance the scaling suspend when we power off the GPU.

    Bug 200010911

    Change-Id: I9fde817fbf9fed7d90c48ea06050db4b82e670a8
    Signed-off-by: Allen Yu <alleny@nvidia.com>
    Reviewed-on: http://git-master/r/421541
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>

* gpu: nvgpu: Do not warn about unknown ctxsw region    Terje Bergstrom    2015-03-18

    Do not warn about unknown regions in the ctxsw firmware blob.

    Bug 1435870

    Change-Id: I343d85a09a3cd1d7c1c881836af6868296409f07
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/420670

* gpu: nvgpu: Add rail gating trace events    Terje Bergstrom    2015-03-18

    Change-Id: I661f14b2858fb7bc993157a597d4a278859da837
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/418789
    Reviewed-by: Automatic_Commit_Validation_User

* gpu: nvgpu: print intr code for class error    Deepak Nibade    2015-03-18

    Print interrupt code and channel id for unhandled gr class error.

    Bug 200010403

    Change-Id: Iedceaf4b8b6363b26f1836256875fb9b5c43eded
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/419566
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: add accessor for gr_class_error    Deepak Nibade    2015-03-18

    Add accessors to read the class error code from NV_PGRAPH_CLASS_ERROR.

    Bug 200010403

    Change-Id: Ia99f50e264f9b8aa93f99994e52424418a2e4f74
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/419565
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: fix memory leak of dbg_session    Deepak Nibade    2015-03-18

    In gk20a_dbg_gpu_dev_release() (when nvhost-dbg-gpu is closed), we return from the function if there is no channel bound to the dbg_session, without freeing the dbg_session memory.

    If there is no channel bound, do not call dbg_unbind_channel_gk20a(), but always free the dbg_session memory.

    Bug 200010382

    Change-Id: I90dd2ed3cd72fbc5d429799660daf2a09b974fda
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/419306
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Shridhar Rasal <srasal@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

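A sketch of the corrected release path; the session layout and unbind helper are assumptions:

```c
#include <linux/slab.h>

struct dbg_session_sketch {
	void *bound_channel;   /* NULL when no channel is bound */
};

/* Sketch only: unbind only when a channel is actually bound, but free the
 * session unconditionally so the early-return leak cannot happen. */
static int dbg_release_sketch(struct dbg_session_sketch *dbg_s,
			      void (*unbind_channel)(struct dbg_session_sketch *))
{
	if (dbg_s->bound_channel)
		unbind_channel(dbg_s);

	kfree(dbg_s);          /* always freed, bound channel or not */
	return 0;
}
```
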
* gpu: nvgpu: Rewrite PMU boot-up sequence    Terje Bergstrom    2015-03-18

    Rewrite the PMU boot sequence as a state machine. At PMU power-up, send the initial messages and reset the state machine. At each reply from the PMU, do the next stage of PMU boot and set the state.

    As PMU and FECS boot are now independent, we need to ensure the engine is idle before saving ZBC.

    Change-Id: I1ea747ab794ef08f1784eeabfdae7655d585ff21
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/410205

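The shape of such a state machine can be pictured with a small sketch; the state names and the per-stage callback below are invented placeholders, not the driver's actual boot stages:

```c
/* Sketch only: advance one PMU boot stage per reply from the PMU. */
enum pmu_boot_state_sketch {
	PMU_SKETCH_OFF,
	PMU_SKETCH_INIT_RECEIVED,
	PMU_SKETCH_ELPG_BOOTING,
	PMU_SKETCH_STARTED,
};

struct pmu_boot_sketch {
	enum pmu_boot_state_sketch state;
};

static void pmu_reply_handler_sketch(struct pmu_boot_sketch *pmu,
				     void (*do_stage)(struct pmu_boot_sketch *))
{
	switch (pmu->state) {
	case PMU_SKETCH_OFF:
		pmu->state = PMU_SKETCH_INIT_RECEIVED;   /* first reply after power-up */
		break;
	case PMU_SKETCH_INIT_RECEIVED:
		pmu->state = PMU_SKETCH_ELPG_BOOTING;    /* move to the next boot stage */
		break;
	case PMU_SKETCH_ELPG_BOOTING:
		pmu->state = PMU_SKETCH_STARTED;         /* boot complete */
		break;
	default:
		break;
	}
	do_stage(pmu);   /* perform whatever the new state requires */
}
```
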
* gpu: nvgpu: Set ch error before channel disable    Terje Bergstrom    2015-03-18

    In the error case we first disabled the channel and reset the sync point to max. After this we set the channel error state. This causes a race if the channel is closed between setting the sync point and setting the channel state.

    Rearrange the code so that the error state is set first, and only then is the channel disabled.

    Bug 1519646

    Change-Id: I20550f6a2708f892b6ba4ee714e90bdecdd128ad
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/418948
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: Alex Waterman <alexw@nvidia.com>
    Reviewed-by: Shridhar Rasal <srasal@nvidia.com>

* gpu: nvgpu: Reload ZBC values on rail gate exit    Terje Bergstrom    2015-03-18

    When exiting rail gate, we reloaded the default ZBC values. The correct behavior is to reload the values that were programmed before rail gating.

    Bug 1447255

    Change-Id: I7aad3586dda91a91a3629062a27001af281b955e
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/418346

* gpu: nvgpu: On FECS error, dump ARB status    Terje Bergstrom    2015-03-18

    On FECS arbiter timeout, dump ARB status.

    Change-Id: I4f8c4d38c99e35ce751172a8695e950f0ce594c8
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/417753

* gpu: nvgpu: update gpmu supported versions    Seshendra Gadagottu    2015-03-18

    Update the gpmu ucode versions supported for gm20b.

    Bug 1514021

    Change-Id: If9cbde60449f5cc2b9c39c36ab5c79985d320bf8
    Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: http://git-master/r/418479
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: gm20b: fix compression sharing    Kevin Huang    2015-03-18

    For GM20B alone, the LTC count is already accounted for in the HW logic for the CBC base calculation from the postDivide address. So SW doesn't have to explicitly divide it by the LTC count in the postDivide address calculation.

    Bug 1477079

    Change-Id: I558bbe66bbcfb7edfa21210d0dc22c6170149260
    Signed-off-by: Kevin Huang <kevinh@nvidia.com>
    Reviewed-on: http://git-master/r/414264
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Fault engines on PBDMA error    Terje Bergstrom    2015-03-18

    On a PBDMA error, even though the engine might not be wedged, we need to kick the channel out of the engine. Add that logic. Also, when the channel is not in an engine, we need to remove it from the runlist.

    Bug 1498688

    Change-Id: I5939feb41d0a90635ba313b265c7e3b5d3f48622
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/417682
    Reviewed-by: Automatic_Commit_Validation_User
    Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-by: Kevin Huang (Eng-SW) <kevinh@nvidia.com>
    Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>

* gpu: nvgpu: Remove extraneous FB flush calls    Terje Bergstrom    2015-03-18

    gk20a_mm_fb_flush() invoked G_ELPG_FLUSH and FB_FLUSH. Remove the invocation of G_ELPG_FLUSH. Replace calls to gk20a_mm_fb_flush() with gk20a_mm_l2_flush() when appropriate.

    Bug 1421824

    Change-Id: I02af4bdc3b7bd26d0f6a8d610f70349269775a36
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
    Reviewed-on: http://git-master/r/408210
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit

* gpu: nvgpu: select NETA for gm20b    Seshendra Gadagottu    2015-03-18

    Bug 1514021

    Change-Id: I5bf942245a42881a418eb9e18c148287b6901ca0
    Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
    Reviewed-on: http://git-master/r/415531
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Bo Yan <byan@nvidia.com>

* gpu: nvgpu: Create trace events last    Dan Willemsen    2015-03-18

    Otherwise other trace event headers included may also be created, which leads to duplicate definition issues in the 3.14 kernel.

    Signed-off-by: Dan Willemsen <dwillemsen@nvidia.com>

* gpu: nvgpu: add support to Maxwell sparse texture    Kevin Huang    2015-03-18

    Bug 1442531

    Change-Id: Ie927cca905b2ea9811417e7a1fdfdf9d48f015e2
    Signed-off-by: Kevin Huang <kevinh@nvidia.com>

* gpu: nvgpu: add generic api for sparse memory    Kevin Huang    2015-03-18

    Bug 1442531

    Change-Id: I97408b54e27f5ed6411792e73f079a6f86cbe5f6
    Signed-off-by: Kevin Huang <kevinh@nvidia.com>

* gpu: nvgpu: implement mapping for sparse allocation    Kirill Artamonov    2015-03-18

    Implement support for partial buffer mappings.

    Whitelist gr_pri_bes_crop_hww_esr accessed by fec during sparse texture initialization.

    Bug 1456562
    Bug 1369014
    Bug 1361532

    Change-Id: Ib0d1ec6438257ac14b40c8466b37856b67e7e34d
    Signed-off-by: Kirill Artamonov <kartamonov@nvidia.com>
    Reviewed-on: http://git-master/r/375012
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: Handle PBDMA errors    Terje Bergstrom    2015-03-18

    Add handling for PBDMA errors.

    Bug 1498688

    Change-Id: Iff391110db1c270c05c76e6a14b7c666da8e3751
    Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>

* gpu: nvgpu: add is_railgated() callback    Deepak Nibade    2015-03-18

    Add an is_railgated() platform callback to check the status of the gk20a power rail.

    Bug 1376916
    Bug 1487804

    Change-Id: Ia0d909210dc409ab684eb6f20528b81500aecd5c
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>

* gpu: nvgpu: sysfs to put gpu into idle    Deepak Nibade    2015-03-18

    - Add a sysfs node "force_idle" to forcibly idle the GPU.
    - A read on this sysfs node returns the current status:
      0 : not in idle (running)
      1 : in forced idle state
    - "echo 1 > force_idle" will force the GPU into idle.
    - "echo 0 > force_idle" will cause the GPU to resume.

    Bug 1376916
    Bug 1487804

    Change-Id: I48dfd52e0d14561220bc4baea0776d1bdfaa7ea5
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>

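A sketch of what the store side of such a sysfs attribute can look like; the idle/unidle stubs are placeholders standing in for the driver's actual gk20a_do_idle()/gk20a_do_unidle() paths:

```c
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/kernel.h>   /* kstrtoul() */

/* Stand-ins for the driver's real idle/resume paths. */
static int do_idle_sketch(struct device *dev)   { (void)dev; return 0; }
static int do_unidle_sketch(struct device *dev) { (void)dev; return 0; }

/* Sketch only: "echo 1" forces idle, "echo 0" resumes. Errors from the
 * idle path are propagated back to the writer. */
static ssize_t force_idle_store_sketch(struct device *dev,
				       struct device_attribute *attr,
				       const char *buf, size_t count)
{
	unsigned long val;
	int err;

	if (kstrtoul(buf, 10, &val))
		return -EINVAL;

	err = val ? do_idle_sketch(dev) : do_unidle_sketch(dev);
	return err ? err : count;
}
```
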
* gpu: nvgpu: add railgate check in do_idle()    Deepak Nibade    2015-03-18

    In gk20a_do_idle(), check the gk20a rail status before returning. Return success only if the rail is off; otherwise return failure.

    Bug 1376916
    Bug 1487804

    Change-Id: I6280bf06c686b8baa4d6f49e90f47148411c3e02
    Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
    Reviewed-on: http://git-master/r/415281
    Reviewed-by: Automatic_Commit_Validation_User
    GVS: Gerrit_Virtual_Submit
    Reviewed-by: Shridhar Rasal <srasal@nvidia.com>
    Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>

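Combined with the is_railgated() callback added earlier in this series, the final check amounts to the sketch below; the platform struct shape and callback signature are assumptions:

```c
#include <linux/errno.h>
#include <linux/types.h>

struct idle_platform_sketch {
	bool (*is_railgated)(void);   /* platform callback from the entry above */
};

/* Sketch only: the tail of a forced-idle routine. Success is reported only
 * when the rail is actually off; otherwise the caller sees a failure. */
static int do_idle_tail_sketch(struct idle_platform_sketch *p)
{
	if (p->is_railgated && p->is_railgated())
		return 0;         /* rail is off: forced idle succeeded */

	return -EBUSY;            /* still powered: report failure */
}
```
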