path: root/drivers/gpu/nvgpu/vgpu
...
* gpu: nvgpu: use nvgpu list for profiler objects (Deepak Nibade, 2017-04-10)
  Use nvgpu list APIs instead of linux list APIs to store profiler objects.
  Jira NVGPU-13
  Change-Id: I2a2715b3a86c6e526bbdbb040c283a3ddd7b24ba
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1454691
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
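  A minimal sketch of the kind of conversion this entry describes, assuming the nvgpu list API mirrors the kernel's intrusive list with nvgpu_-prefixed names; the struct fields and nvgpu_list_add() call below are assumptions for illustration, not code from the patch:

    /* The object embeds an nvgpu list node instead of a Linux list_head. */
    struct dbg_profiler_object_data {
        u64 prof_handle;
        struct nvgpu_list_node prof_obj_entry;   /* was: struct list_head */
    };

    static void add_profiler_object(struct gk20a *g,
                                    struct dbg_profiler_object_data *obj)
    {
        /* was: list_add(&obj->prof_obj_entry, &g->profiler_objects); */
        nvgpu_list_add(&obj->prof_obj_entry, &g->profiler_objects);
    }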
* gpu: nvgpu: vgpu: merge tegra_vgpu_t18x.h to tegra_vgpu.h (Richard Zhao, 2017-04-07)
  No need to keep two vgpu headers anymore.
  Jira VFND-3796
  Change-Id: I400cbfa5b2c0e62963eff247adcd9483be975379
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1457480
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: init vars in gk20a vgpu missed (Richard Zhao, 2017-04-07)
  pending_sema_waits and a few other variables in struct gk20a are used in the gk20a shutdown path, but we didn't initialize them in vgpu. Added a function vgpu_init_vars() dedicated to initializing variables in struct gk20a; the native code has a similar function, nvgpu_init_vars(). This is a quick fix. Ultimately the common probe code should be moved into a function shared between vgpu and native gpu.
  Bug 200293437
  Jira EVLR-1152
  Change-Id: I55f0d179d7adba556e0cb404766e14405b3e27e5
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1330229
  (cherry picked from commit 7691902fec8abdd621ee17561607efeef615499f)
  Reviewed-on: http://git-master/r/1331604
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Rename nvgpu DMA APIs (Alex Waterman, 2017-04-06)
  Rename the nvgpu DMA APIs from gk20a_gmmu_alloc* to nvgpu_dma_alloc*. This better reflects the purpose of the APIs (to allocate DMA suitable memory) and avoids confusion with GMMU related code.
  JIRA NVGPU-12
  Change-Id: I673d607db56dd6e44f02008dc7b5293209ef67bf
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1325548
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
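  A hedged sketch of a call site after the rename, assuming nvgpu_dma_alloc()/nvgpu_dma_free() keep the old gk20a_gmmu_alloc() shape of filling in a memory descriptor; the helper below is illustrative only:

    static int alloc_scratch_buffer(struct gk20a *g, size_t size,
                                    struct nvgpu_mem *mem)
    {
        int err;

        /* was: err = gk20a_gmmu_alloc(g, size, mem); */
        err = nvgpu_dma_alloc(g, size, mem);
        if (err)
            return err;

        /* ... use the DMA-able buffer described by *mem ... */

        nvgpu_dma_free(g, mem);
        return 0;
    }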
* gpu: nvgpu: Move DMA API to dma.h (Alex Waterman, 2017-04-06)
  Make an nvgpu DMA API include file so that the intricacies of the Linux DMA API can be hidden from the calling code. Also document the nvgpu DMA API.
  JIRA NVGPU-12
  Change-Id: I7578e4c726ad46344b7921179d95861858e9a27e
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1323326
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: rename mem_desc to nvgpu_mem (Alex Waterman, 2017-04-06)
  Renaming was done with the following command:
    $ find -type f | \
        xargs sed -i 's/struct mem_desc/struct nvgpu_mem/g'
  Also rename mem_desc.[ch] to nvgpu_mem.[ch].
  JIRA NVGPU-12
  Change-Id: I69395758c22a56aa01e3dffbcded70a729bf559a
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1325547
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Rename gk20a_mem_* functions (Alex Waterman, 2017-04-06)
  Rename the functions used for mem_desc access to nvgpu_mem_*.
  JIRA NVGPU-12
  Change-Id: Ibfdc1112d43f0a125e4487c250e3f977ffd2cd75
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1323325
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use nvgpu rbtree to store mapped buffers (Deepak Nibade, 2017-04-06)
  Use nvgpu rbtree instead of linux rbtree to store mapped buffers for each VM. Move to using "struct nvgpu_rbtree_node" instead of "struct rb_node", and similarly use rbtree APIs from <nvgpu/rbtree.h> instead of linux APIs.
  Jira NVGPU-13
  Change-Id: Id96ba76e20fa9ecad016cd5d5a6a7d40579a70f2
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1453043
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
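  A sketch of the pattern this entry describes, with the API shape assumed from the <nvgpu/rbtree.h> reference above: the mapped buffer embeds a struct nvgpu_rbtree_node, and a small helper recovers the buffer from a node returned by a tree search (the struct and field names are illustrative):

    struct mapped_buffer_node {
        u64 addr;                          /* GPU VA used as the tree key */
        struct nvgpu_rbtree_node node;     /* was: struct rb_node */
    };

    /* Convert an embedded rbtree node back to its containing buffer. */
    static struct mapped_buffer_node *
    mapped_buffer_from_rbtree_node(struct nvgpu_rbtree_node *node)
    {
        return (struct mapped_buffer_node *)
            ((uintptr_t)node - offsetof(struct mapped_buffer_node, node));
    }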
* gpu: nvgpu: Remove last Linux kmem usage (Alex Waterman, 2017-04-04)
  Replace the last of the Linux kmem API usage with nvgpu kmem calls instead. Several places are left alone - allocating the struct gk20a in particular. Also one function was updated in the clk code to take a struct gk20a as an argument so that it could use nvgpu_kmalloc().
  Bug 1799159
  Bug 1823380
  Change-Id: I84fc3f8e19c63d6265bac6098dc727d93e3ff613
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1331702
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use nvgpu list for VA lists (Deepak Nibade, 2017-04-03)
  Use nvgpu list APIs instead of linux list APIs for the reserved VA list and buffer VA list.
  Jira NVGPU-13
  Change-Id: I83c02345d54bca03b00270563567227510cfce6b
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1454013
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: use new List APIs to free channels (Deepak Nibade, 2017-03-31)
  Use new APIs from <nvgpu/list.h> to access the free channel list. Define channel_gk20a_from_free_chs() to convert a list node to a struct channel_gk20a.
  Jira NVGPU-13
  Change-Id: Idaf58f04be1c7fc553bea7c8de45951bf82bb340
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1303025
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
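  The converter mentioned above is most likely a container_of-style cast from the embedded free-list node back to the channel; a hedged sketch (the free_chs field name is inferred from the function name, everything else is assumed):

    static inline struct channel_gk20a *
    channel_gk20a_from_free_chs(struct nvgpu_list_node *node)
    {
        /* The list node is embedded in the channel, so subtracting its
         * offset yields the channel that contains it. */
        return (struct channel_gk20a *)
            ((uintptr_t)node - offsetof(struct channel_gk20a, free_chs));
    }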
* gpu: nvgpu: Move programming of host registers to fifo (Terje Bergstrom, 2017-03-28)
  Move code that touches host registers and the instance block to the fifo HAL. This involves adding HAL ops for the fifo HAL functions that get called from outside fifo. This narrows channel's responsibility to managing channels in software and push buffers. channel had the member ramfc defined, but it was not used, so remove it. pbdma_acquire_val consisted of both channel logic and hardware programming; the channel logic was moved to the caller and only the hardware programming was moved to fifo.
  Change-Id: Id005787f6cc91276b767e8e86325caf966913de9
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1322423
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Use new kmem API functions (vgpu/*) (Alex Waterman, 2017-03-28)
  Use the new kmem API functions in vgpu/*. Also reshuffle the order of some allocs in the vgpu init code to allow usage of the nvgpu kmem APIs.
  Bug 1799159
  Bug 1823380
  Change-Id: I6c6dcff03b406a260dffbf89a59b368d31a4cb2c
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1318318
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: refactor teardown to support unbind (David Nieto, 2017-03-25)
  This change refactors the teardown in remove to ensure that it is possible to unload the driver while leaving fds open. This is achieved by making sure that the SW state is kept alive until all fds are closed, and by checking that subsequent calls to ioctls after the teardown fail. Normally this would be achieved by calls into gk20a_busy(), but in kickoff we don't call into that to reduce latency, so we need to check the driver status directly. The same check is needed in some of the functions, as we need to make sure the ioctl does not dereference the device or platform struct.
  bug 200277762
  JIRA: EVLR-1023
  Change-Id: I163e47a08c29d4d5b3ab79f0eb531ef234f40bde
  Signed-off-by: David Nieto <dmartineznie@nvidia.com>
  Reviewed-on: http://git-master/r/1320219
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Shreshtha Sahu <ssahu@nvidia.com>
  Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
* gpu: nvgpu: vgpu: profiler reservation support (Peter Daifuku, 2017-03-24)
  Support for hwpm reservations in the virtual case:
  - Add session ops for checking and setting global and context reservations, and releasing reservations.
  - In the native case, these just update reservation counts and flags.
  - In the vgpu case, when the reservation count is 0, check with the RM server that a reservation is possible: for global reservations, no other guest can have a reservation; for context reservations, no other guest can have a global reservation.
  - In the vgpu case, when the reservation count is decremented to 0, notify the RM server that the guest no longer has any reservations.
  Bug 1775465
  JIRA VFND-3428
  Change-Id: Idf115b730e465e35d0745c96a8f8ab6b645c7cae
  Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
  Reviewed-on: http://git-master/r/1323375
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add bus HAL (Terje Bergstrom, 2017-03-23)
  Add bus HAL and move all bus related hardware sequencing to that file: BAR1 binding, timer access, and interrupt handling.
  Change-Id: Ibc5f5797dc338de10749b446a7bdbcae600fecb4
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1323353
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: profiler create/free, hwpm reserve (Peter Daifuku, 2017-03-22)
  Add support for creating/freeing profiler objects and hwpm reservations.
  Bug 1775465
  JIRA EVLR-680
  JIRA EVLR-682
  Change-Id: I4db83d00e4b0b552b05b9aae96dc553dd1257d88
  Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
  Reviewed-on: http://git-master/r/1294401
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add refcounting to driver fds (David Nieto, 2017-03-20)
  The main driver structure is not refcounted properly, so when the driver unloads, file descriptors associated with the driver are kept open with dangling references to the main object. This change adds reference counting to the gk20a structure.
  bug 200277762
  JIRA: EVLR-1023
  Change-Id: Id892e9e1677a344789e99bf649088c076f0bf8de
  Signed-off-by: David Nieto <dmartineznie@nvidia.com>
  Reviewed-on: http://git-master/r/1317420
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Move all FB programming to FB HAL (Terje Bergstrom, 2017-03-17)
  Move all programming of FB to fb_*.c files, and remove the inclusion of FB hardware headers from other files. The TLB invalidate function previously took a pointer to a VM, but the new API takes only a PDB mem_desc, because FB does not need to know about the higher level VM. GPC MMU is programmed from the same function as FB MMU, so a dependency on the GR hardware header was added to FB. GP106 ACR was also triggering a VPR fetch, but that's not applicable to dGPU, so that call was removed.
  Change-Id: I4eb69377ac3745da205907626cf60948b7c5392a
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1321516
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: force gpu preemption policy (Aparna Das, 2017-03-14)
  Query the RM server to retrieve the gpu preemption policy of the guest based on pct configuration. If the guest is not allowed to request wfi preemption mode, then set the context with either gfxp or cta preemption mode only.
  Jira VFND-3079
  Jira VFND-3081
  Change-Id: I60cbf121d6f0e2373568cf40b3dfdb4df76fe02d
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: http://git-master/r/1280903
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Sachit Kadle <skadle@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
* gpu: nvgpu: vgpu: add clear single SM error state (Thomas Fleury, 2017-03-09)
  Add support for clearing single SM error state for CUDA debugger. In addition to clearing the local copy of SM error state, vgpu_gr_clear_sm_error_state now sends a command to the RM server (TEGRA_VGPU_CMD_CLEAR_SM_ERROR_STATE) to clear global ESR and warp ESR.
  Bug 1791111
  Change-Id: I3a1f0644787fd900ec59a0e7974037d46a603487
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1296311
  (cherry picked from commit fd07e03c3d086f396e4d65575c576a4dd68c920a)
  Reviewed-on: http://git-master/r/1299060
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Cory Perry <cperry@nvidia.com>
  Tested-by: Cory Perry <cperry@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: suspend/resume contexts (Thomas Fleury, 2017-03-09)
  Add ability to suspend/resume contexts for a debug session (NVGPU_DBG_GPU_IOCTL_SUSPEND_RESUME_CONTEXTS) in the virtualized case:
  - added hal function to resume contexts.
  - added vgpu support for suspend contexts, i.e. build a list of channel ids, and send TEGRA_VGPU_CMD_SUSPEND_CONTEXTS.
  - added vgpu support for resume contexts, i.e. build a list of channel ids, and send TEGRA_VGPU_CMD_RESUME_CONTEXTS.
  Bug 1791111
  Change-Id: Icc1c00d94a94dab6384ac263fb811c00fa4b07bf
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1294761
  (cherry picked from commit d17a38eda312ffa92ce92e5bafc30727a8b76c4e)
  Reviewed-on: http://git-master/r/1299059
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Cory Perry <cperry@nvidia.com>
  Tested-by: Cory Perry <cperry@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: add NVGPU_GPU_FLAGS_SUPPORT_MAP_COMPBITS (Richard Zhao, 2017-03-08)
  The native gpu driver supports map compbits but vgpu does not.
  Bug 1778448
  Bug 200275051
  JIRA VFND-3513
  Change-Id: I433a6f8631b495875ba899af9609203ab36187ef
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1314065
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: remove cycle stats from characteristics (Richard Zhao, 2017-03-08)
  vgpu does not support cycle stats.
  Bug 1781434
  Bug 200275051
  JIRA VFND-3513
  Change-Id: Id1d566027913632fc8a4f0285609be5f56b26288
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1314064
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: kmem abstraction and tracking (Alex Waterman, 2017-03-03)
  Implement kmem abstraction and tracking in nvgpu. The abstraction helps move nvgpu's core code away from being Linux dependent and allows kmem allocation tracking to be done for Linux and any other OS supported by nvgpu.
  Bug 1799159
  Bug 1823380
  Change-Id: Ieaae4ca1bbd1d4db4a1546616ab8b9fc53a4079d
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1283828
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
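  A conceptual sketch of the abstraction described above, not the actual implementation: callers use an nvgpu_kmalloc() wrapper, and the Linux backend can record each allocation (with its caller address) so leaks can be tracked per GPU. The kmem_track_alloc() hook is hypothetical.

    static void *nvgpu_kmalloc_impl(struct gk20a *g, size_t size,
                                    unsigned long ip)
    {
        void *p = kmalloc(size, GFP_KERNEL);

        if (p)
            kmem_track_alloc(g, p, size, ip);   /* hypothetical tracking hook */
        return p;
    }

    /* Record the caller's instruction pointer for the tracking report. */
    #define nvgpu_kmalloc(g, size) nvgpu_kmalloc_impl(g, size, _THIS_IP_)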
* gpu: nvgpu: Remove nvgpu_gpuid_t18x.h (Alex Waterman, 2017-03-02)
  Remove nvgpu_gpuid_t18x.h since this file is now visible. Migrate the relevant definitions and defines into their expected places and make the code use the real defines. No longer is hiding t18x specific stuff necessary.
  Bug 1799159
  Change-Id: I47fa2392e46fdb7aacc70aeb0cc8c3f5ca0dc22f
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1300976
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add and move FECS trace flag (Vishnu Reddy, 2017-03-02)
  Added the fecs trace characteristics flag to vgpu so that platform support can be determined. Also moved the fecs flag from ctxsw init to fecs init for native linux; this makes flag init uniform across both platforms.
  JIRA EVLR-993
  Change-Id: Id62f31954cb5d7028ef01a199548f5f908b51eb0
  Signed-off-by: Vishnu Reddy Mandalapu <vmandalapu@nvidia.com>
  Reviewed-on: http://git-master/r/1305045
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add worker for watchdog and job cleanup (Konsta Holtta, 2017-03-02)
  Implement a worker thread to replace the delayed works in channel watchdog and job cleanups. Watchdog runs by polling the channel states periodically, and job cleanup is performed on channels that are appended on a work queue consumed by the worker thread. Handling both of these two in the same thread makes it impossible for them to cause a deadlock, as has previously happened.
  The watchdog takes references to channels during checking and possibly recovering channels. Jobs in the cleanup queue have an additional reference taken which is released after the channel is processed. The worker is woken up from periodic sleep when channels are added to the queue. Currently, the queue is only used for job cleanups, but it is extendable for other per-channel works too. The worker can also process other periodic actions dependent on channels.
  Neither the semantics of timeout handling nor of job cleanups are yet significantly changed - this patch only serializes them into one background thread. Each job that needs cleanup is tracked and holds a reference to its channel and a power reference, and timeouts can only be processed on channels that are tracked, so the thread will always be idle if the system is going to be suspended; there is currently no need to explicitly suspend or stop it.
  Bug 1848834
  Bug 1851689
  Bug 1814773
  Bug 200270332
  Jira NVGPU-21
  Change-Id: I355101802f50841ea9bd8042a017f91c931d2dc7
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/1297183
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
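  A conceptual sketch of the worker described above (all names hypothetical): a single thread sleeps until either a channel is queued for job cleanup or the watchdog poll period expires, then handles both duties in turn, which is what prevents the two from deadlocking against each other.

    static int nvgpu_channel_worker(void *arg)
    {
        struct gk20a *g = arg;

        while (!kthread_should_stop()) {
            wait_event_timeout(g->worker_wq,
                               !list_empty(&g->worker_items) ||
                               kthread_should_stop(),
                               msecs_to_jiffies(WATCHDOG_POLL_MS));

            process_queued_job_cleanups(g);   /* drain the cleanup queue */
            poll_channel_watchdogs(g);        /* periodic timeout check */
        }
        return 0;
    }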
* gpu: nvgpu: ignore set preempt mode to default (Thomas Fleury, 2017-02-24)
  In the native case, attempting to set graphics/compute preempt mode to default is ignored if a preemption mode has already been set for the context. In the virtualized case an error is currently returned. Align behaviour for the native and virtualized cases by ignoring such a request.
  Bug 200186530
  Change-Id: Ieb3a37107bdbd3284804ee9fd392786409224082
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1306506
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: remove use of DEFINE_MUTEX() (Deepak Nibade, 2017-02-22)
  The DEFINE_MUTEX() API is defined in Linux and might not be available in other OSs. Hence remove its usage from nvgpu. Declare and explicitly initialize the following mutexes for both nvgpu and vgpu:
    g->mm.priv_lock
    g->mm.tlb_lock
  Jira NVGPU-13
  Change-Id: If72885a6da0227a1552303206172f1f2b751471d
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1298042
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use common nvgpu mutex/spinlock APIs (Deepak Nibade, 2017-02-22)
  Instead of using Linux APIs for mutexes and spinlocks directly, use the new APIs defined in <nvgpu/lock.h>. Replace Linux specific mutex/spinlock declaration, init, lock, and unlock APIs with the new APIs; e.g. struct mutex is replaced by struct nvgpu_mutex and mutex_lock() is replaced by nvgpu_mutex_acquire(). Also include <nvgpu/lock.h> instead of including <linux/mutex.h> and <linux/spinlock.h>. Add explicit nvgpu/lock.h includes to the files below to fix compilation failures:
    gk20a/platform_gk20a.h
    include/nvgpu/allocator.h
  Jira NVGPU-13
  Change-Id: I81a05d21ecdbd90c2076a9f0aefd0e40b215bd33
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: http://git-master/r/1293187
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
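  A sketch of the substitution, using the names the message itself gives (struct nvgpu_mutex, nvgpu_mutex_acquire()); the release name and the surrounding helper are assumed for illustration:

    /* In struct mm_gk20a: was "struct mutex priv_lock;" */
    struct nvgpu_mutex priv_lock;

    static void tlb_update_locked(struct gk20a *g)
    {
        nvgpu_mutex_acquire(&g->mm.priv_lock);   /* was: mutex_lock()   */
        /* ... critical section ... */
        nvgpu_mutex_release(&g->mm.priv_lock);   /* was: mutex_unlock() */
    }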
* gpu: nvgpu: Add NVGPU_IOCTL_CHANNEL_SET_BOOSTED_CTX (Peter Boonstoppel, 2017-02-14)
  This ioctl can be used on gp10b to set a flag in the context header indicating this context should be run at elevated clock frequency. FECS ctxsw ucode will read this flag as part of the context switch and will request higher GPU clock frequencies from BPMP for the duration of the context execution.
  Bug 1819874
  Change-Id: I84bf580923d95585095716d49cea24e58c9440ed
  Signed-off-by: Peter Boonstoppel <pboonstoppel@nvidia.com>
  Reviewed-on: http://git-master/r/1292746
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: remove call to invalidate tlb (Aparna Das, 2017-02-14)
  The guest doesn't explicitly send a command to the RM server to invalidate the tlb; that is done implicitly when mapping or unmapping a buffer. Remove support for this call.
  Bug 1665111
  Change-Id: Icf2edae7feffa35b1dbf87c227b3e98b506e6519
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: http://git-master/r/1287728
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Organize semaphore_gk20a.[ch] (Alex Waterman, 2017-02-13)
  Move semaphore_gk20a.c to drivers/gpu/nvgpu/common/ since the semaphore code is common to all chips. Move the semaphore_gk20a.h header file to drivers/gpu/nvgpu/include/nvgpu and rename it to semaphore.h. Also update all places where the header is included to use the new path.
  This revealed an odd location for the enum gk20a_mem_rw_flag. This should be in the mm headers. As a result many places that did not need anything semaphore related had to include the semaphore header file. Fixing this oddity allowed the semaphore include to be removed from many C files that did not need it.
  Bug 1799159
  Change-Id: Ie017219acf34c4c481747323b9f3ac33e76e064c
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1284627
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Simplify ref-counting on VMs (Alex Waterman, 2017-02-07)
  Simplify ref-counting on VMs: take a ref when a VM is bound to a channel and drop a ref when a channel is freed. Previously ref-counts were scattered over the driver. Also the CE and CDE code would bind channels with custom-rolled code. This was because the gk20a_vm_bind_channel() function took an as_share as the VM argument (the VM was then inferred from that as_share). However, it is trivial to abstract that bit out and allow a central bind-channel function that just takes a VM and a channel.
  Bug 1846718
  Change-Id: I156aab259f6c7a2fa338408c6c4a3a464cd44a0c
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1261886
  Reviewed-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
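  A conceptual sketch of the simplified scheme (helper names hypothetical): the only two places that touch the VM reference count are channel bind and channel free.

    static int bind_channel_to_vm(struct vm_gk20a *vm, struct channel_gk20a *ch)
    {
        vm_ref_get(vm);            /* take a ref when the channel binds the VM */
        ch->vm = vm;
        return 0;
    }

    static void free_channel_vm_ref(struct channel_gk20a *ch)
    {
        if (ch->vm) {
            vm_ref_put(ch->vm);    /* drop the ref when the channel is freed */
            ch->vm = NULL;
        }
    }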
* gpu: nvgpu: Remove separate fixed address VMA (Alex Waterman, 2017-01-31)
  Remove the special VMA that could be used for allocating fixed addresses. This feature was never used and is not worth maintaining.
  Bug 1396644
  Bug 1729947
  Change-Id: I06f92caa01623535516935acc03ce38dbdb0e318
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1265302
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Unify the small and large page address spaces (Alex Waterman, 2017-01-31)
  The basic structure of this patch is to make the small page allocator and the large page allocator into pointers (where they used to be just structs). Then assign each of those pointers to the same actual allocator since the buddy allocator has supported mixed page sizes since its inception. For the rest of the driver some changes had to be made in order to actually support mixed pages in a single address space.
  1. Unifying the allocation page size determination
     Since the allocation and map operations happen at distinct times both mapping and allocation of GVA space must agree on page size. This is because the allocation has to separate allocations into separate PDEs to avoid the necessity of supporting mixed PDEs. To this end a function __get_pte_size() was introduced which is used both by the balloc code and the core GPU MM code. It determines page size based only on the length of the mapping/allocation.
  2. Fixed address allocation + page size
     Similar to regular mappings/GVA allocations, fixed address mapping page size determination had to be modified. In the past the address of the mapping determined page size since the address space split was by address (low addresses were small pages, high addresses large pages). Since that is no longer the case, the page size field in the reserve memory ioctl is now honored by the mapping code. When, for instance, CUDA makes a memory reservation it specifies small or large pages. When CUDA requests mappings to be made within that address range the page size is then looked up in the reserved memory struct. Fixed address reservations were also modified to now always allocate at a PDE granularity (64M or 128M depending on large page size). This prevents non-fixed allocations from ending up in the same PDE and causing kernel panics or GMMU faults.
  3. The rest
     The rest of the changes are just by-products of the above. Lots of places required minor updates to use a pointer to the GVA allocator struct instead of the struct itself.
  Lastly, this change is not truly complete. More work remains to be done in order to fully remove the notion that there was such a thing as separate address spaces for different page sizes. Basically after this patch what remains is cleanup and proper documentation.
  Bug 1396644
  Bug 1729947
  Change-Id: If51ab396a37ba16c69e434adb47edeef083dce57
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1265300
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
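  A simplified sketch of what __get_pte_size() plausibly looks like given the description above: page size is chosen purely from the mapping length (and whether the VM supports big pages at all). The field and enum names are assumptions, and the real function also handles fixed-address reservations, which carry their own page size.

    static enum gmmu_pgsz_gk20a __get_pte_size(struct vm_gk20a *vm,
                                               u64 base, u64 size)
    {
        if (!vm->big_pages)
            return gmmu_page_size_small;

        /* Large enough mappings get the big page size; everything else
         * stays on small pages. */
        return (size >= vm->big_page_size) ? gmmu_page_size_big
                                           : gmmu_page_size_small;
    }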
* gpu: nvgpu: vgpu: retrieve gpu load (Aparna Das, 2017-01-24)
  Add support to send a command to the RM server to retrieve GPU load.
  Bug 200261903
  Change-Id: Ie3d0ba7ec91317e9a2911f71613ad78d20f9c1fb
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: http://git-master/r/1275045
  (cherry picked from commit 5a6c1de1e6997bfd803b4b95b3e44e282ba32f67)
  Reviewed-on: http://git-master/r/1283279
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: use HAL to set TSG timeslice (Thomas Fleury, 2017-01-16)
  Setting the timeslice in the virtualized case was not effective, because both ioctls NVGPU_TSG_IOCTL_SET_TIMESLICE and NVGPU_SCHED_IOCTL_TSG_SET_TIMESLICE were calling the native function to set the TSG timeslice.
  - Fixed wrapper function to call HAL.
  - Defined HAL function for "native" set TSG timeslice.
  - Also, properly update timeout_us in TSG context in the virtualized case.
  This change also moves the min/max bounds checking for tsg timeslice into the native function implementation. There is no sysfs node for these parameters for vgpu, as the RM server is ultimately responsible for this check.
  Bug 200263575
  Change-Id: Ibceab9427561ad58ec28abfff0c96ca8f592bdb9
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: http://git-master/r/1283180
  Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Move gp10b HW headers (Alex Waterman, 2017-01-11)
  Move the gp10b HW headers to a new directory specially for them:
    include/nvgpu/hw/gp10b
  And change the code to include like so:
    #include <nvgpu/hw/gp10b/hw_fb_gp10b.h>
  This is part of the process to restructure the nvgpu driver.
  Bug 1799159
  Change-Id: Ic80ea5b7f5c280839e502e2178a345181f7a7ef9
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1280326
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: Start re-organizing the HW headers (Alex Waterman, 2017-01-11)
  Reorganize the HW headers of gk20a. The headers are moved to a new directory:
    include/nvgpu/hw/gk20a
  And from the code are included like so:
    #include <nvgpu/hw/gk20a/hw_pwr_gk20a.h>
  This is the first step in reorganizing all of the HW headers for gm20b, gm206, etc. This is part of a larger effort to re-structure and make the driver more readable and scalable.
  Bug 1799159
  Change-Id: Ic151155cbc2e6f75009f2d9d597b364a1bed2c4c
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1244790
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Move allocators to common/mm/ (Alex Waterman, 2017-01-09)
  Move the GPU allocators to common/mm/ since the allocators are common code across all GPUs. Also rename the allocator code to move away from gk20a_ prefixed structs and functions. This caused one issue with the nvgpu_alloc() and nvgpu_free() functions. There was a function for allocating either with kmalloc() or vmalloc() depending on the size of the allocation. Those have now been renamed to nvgpu_kalloc() and nvgpu_kfree().
  Bug 1799159
  Change-Id: Iddda92c013612bcb209847084ec85b8953002fa5
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: http://git-master/r/1274400
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
  Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
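  A sketch of the size-dependent allocator the message refers to (the threshold and flags here are illustrative, not the driver's actual values): small requests come from kmalloc(), large ones fall back to vmalloc(), and the free path dispatches on the address type.

    static void *nvgpu_kalloc(size_t size)
    {
        if (size > PAGE_SIZE)
            return vmalloc(size);
        return kmalloc(size, GFP_KERNEL);
    }

    static void nvgpu_kfree(void *addr)
    {
        if (is_vmalloc_addr(addr))
            vfree(addr);
        else
            kfree(addr);
    }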
* gpu: nvgpu: vgpu: receive event TEGRA_VGPU_EVENT_SM_ESR (Richard Zhao, 2017-01-06)
  - allocate gr.sm_error_state
  - handle event of sm error state
  - add callback of clear sm error state
  JIRA VFND-3291
  Bug 200257899
  Change-Id: I49b9437013e8c65290750b7fe21fc6819ea93b1c
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1278397
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: restructure event handling (Richard Zhao, 2017-01-06)
  Take interrupts as one kind of event message, and make it easier to add new kinds of events.
  JIRA VFND-3291
  Bug 200257899
  Change-Id: I83482293230c0aa10b05caf61e249a042bf6653c
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1278396
  Reviewed-by: Automatic_Commit_Validation_User
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: no support for sparse mapping (Aparna Das, 2016-12-27)
  Currently sparse mapping is not supported for gp10b in a virtualized environment. Modify gpu characteristics to reflect non-implementation of this functionality. Also fix the return value in vgpu_gp10b_locked_gmmu_map() on the error condition.
  Bug 200243373
  Change-Id: Ia367b923b87738a5cad0617cdb074f5a24fb1c81
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: http://git-master/r/1269710
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Reviewed-by: Sachit Kadle <skadle@nvidia.com>
  Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
* gpu: nvgpu: vgpu: fix va leak when calling gk20a_vm_free_va (Richard Zhao, 2016-12-27)
  The page size index needs to be set explicitly when calling gk20a_vm_free_va.
  Bug 200255799
  JIRA VFND-3033
  Change-Id: Ic23ea68905ea423173d1859fd100e7b2c82a1bcc
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1262590
  (cherry picked from commit 918aea147b395f7337db348d2616fb4b195dc53a)
  Reviewed-on: http://git-master/r/1263400
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
* gpu: nvgpu: vgpu: add set_preemption_mode (Richard Zhao, 2016-12-27)
  Implement the HAL callback set_preemption_mode.
  Bug 200238497
  JIRA VFND-2683
  Change-Id: I8fca8e1ba112d8782ce18f0899eca38a1d12b512
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1236976
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: move to use vgpu_get_handle helper function (Richard Zhao, 2016-12-27)
  JIRA VFND-2103
  Change-Id: Ic11cff40e64849cb6abb193bec54d03857433416
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: http://git-master/r/1185205
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
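  The helper is most likely a one-line accessor that hides where the virtualization handle is stored so call sites stop open-coding the lookup; a hedged guess at its shape (the platform-data field is an assumption):

    static inline u64 vgpu_get_handle(struct gk20a *g)
    {
        /* assumed: the RM server handle lives in the per-device platform data */
        return gk20a_get_platform(g->dev)->virt_handle;
    }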
* gpu: nvgpu: gp10x: support in-kernel vidmem mappings (Konsta Holtta, 2016-12-27)
  Propagate the buffer aperture flag in gk20a_locked_gmmu_map up so that buffers represented as a mem_desc and present in vidmem can be mapped to the gpu.
  JIRA DNVGPU-18
  JIRA DNVGPU-76
  Change-Id: Icd675e83e3c28836f0ed8880425748697713bb0a
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: http://git-master/r/1169296
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
* gpu: nvgpu: vgpu: Add CE engine to engine list (Terje Bergstrom, 2016-12-27)
  Initialize the CE engine also for gp10b.
  Change-Id: Ibce2f80b523a09fb1345995c03c5430f3b20844f
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: http://git-master/r/1170453
  Reviewed-by: Automatic_Commit_Validation_User
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  Tested-by: Aingara Paramakuru <aparamakuru@nvidia.com>
  GVS: Gerrit_Virtual_Submit
  Reviewed-by: Lakshmanan M <lm@nvidia.com>