path: root/drivers/gpu/nvgpu/common
Commit message / Author / Age
* gpu: nvgpu: allocate separate client-managed syncpoint for User  (Deepak Nibade, 2018-03-01)
  We currently allocate an nvgpu-managed syncpoint in c->sync and share it with
  user space. To avoid conflicts between user-space and kernel-space increments,
  allocate a separate "client managed" syncpoint for user space in c->user_sync.
  Add a new API, nvgpu_nvhost_get_syncpt_client_managed(), to request a
  client-managed syncpoint from nvhost; note that nvhost/nvgpu do not track the
  MAX/threshold value of this syncpoint. Update gk20a_channel_syncpt_create() to
  take a flag indicating whether a user-space syncpoint is required (see the
  sketch after this entry). Unset NVGPU_SUPPORT_USER_SYNCPOINT for gp10b since we
  do not want to allocate two syncpoints per channel on that platform. For gv11b,
  support for c->sync will be dropped once we move to user-space submits, so we
  keep using only one syncpoint per channel there.

  Bug 200326065
  Jira NVGPU-179
  Change-Id: I78d94de4276db1c897ea2a4fe4c2db8b2a179722
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1665828
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
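For illustration, the flag-based create path described above might branch roughly as follows. This is a minimal sketch: only gk20a_channel_syncpt_create() and nvgpu_nvhost_get_syncpt_client_managed() are named in the commit; the host-managed helper, the parameters, and the struct fields are assumptions rather than the actual nvgpu code.

    /* Sketch only: helper names and signatures other than
     * nvgpu_nvhost_get_syncpt_client_managed() are assumptions. */
    static u32 channel_alloc_syncpt(struct channel_gk20a *c, bool user_managed,
                                    const char *syncpt_name)
    {
            if (user_managed)
                    /* User space owns this syncpoint: nvhost does not track
                     * its MAX/threshold value. */
                    return nvgpu_nvhost_get_syncpt_client_managed(
                                    c->g->nvhost_dev, syncpt_name);

            /* Kernel-managed syncpoint backing c->sync job tracking. */
            return nvgpu_nvhost_get_syncpt_host_managed(
                            c->g->nvhost_dev, c->chid, syncpt_name);
    }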
* Revert "gpu: nvgpu: support user fence updates"Deepak Nibade2018-03-01
| | | | | | | | | | | | | | | | | | | | This reverts commit 0c46f8a5e112c08c172ee2c692832e1753ffbcce. We should not support tracking of MAX/threshold value for syncpoint allocated by user space Hence revert this patch Bug 200326065 Jira NVGPU-179 Change-Id: I2df8f8c13fdac91c0814b11a2b7dee30153409d4 Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1665827 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com> Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com> Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: do not alloc ctx buffers if already allocated  (Seema Khowala, 2018-03-01)
  Bug 200393029
  Change-Id: Ic2946958e34bcb9247179fcf2e8735c822155cce
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1665338
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Rajkumar Kasirajan <rkasirajan@nvidia.com>; Reviewed-by: Shreshtha Sahu <ssahu@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: introduce explicit nvgpu_sgl type  (Konsta Holtta, 2018-03-01)
  The operations in struct nvgpu_sgt_ops take a scatter-gather list (sgl)
  argument which is a void pointer. Change the type signatures to take
  struct nvgpu_sgl *, an opaque marker type that makes it harder to pass the
  wrong argument, since anything goes for a void *. Explicit types also make
  the code self-documenting. As a consequence, implementors of the
  nvgpu_sgt_ops interface now need explicit casts when converting between the
  generic nvgpu_sgl type and their implementation-specific types. This is not
  purely a bad thing, because the casts show clearly where type conversions
  happen (see the sketch after this entry).

  Jira NVGPU-30 / NVGPU-52 / NVGPU-305
  Change-Id: Ic64eed6d2d39ca5786e62b172ddb7133af16817a
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1643555
  GVS: Gerrit_Virtual_Submit; Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
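The opaque-marker pattern described above can be illustrated like this. struct nvgpu_sgt_ops and struct nvgpu_sgl are from the commit; the member names and the Linux-side cast are simplified assumptions, not the actual nvgpu definitions.

    /* Opaque marker type: declared but never defined, so only pointers to it
     * exist and passing an unrelated pointer no longer compiles silently. */
    struct nvgpu_sgl;

    /* After the change, the ops take the explicit type instead of void *. */
    struct nvgpu_sgt_ops {
            struct nvgpu_sgl *(*sgl_next)(struct nvgpu_sgl *sgl);
            u64 (*sgl_phys)(struct nvgpu_sgl *sgl);
    };

    /* An implementor (e.g. a Linux scatterlist backend) casts explicitly at
     * the boundary, which documents where the conversion happens. */
    static struct nvgpu_sgl *nvgpu_mem_linux_sgl_next(struct nvgpu_sgl *sgl)
    {
            struct scatterlist *sg = (struct scatterlist *)sgl;

            return (struct nvgpu_sgl *)sg_next(sg);
    }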
* Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working"Alex Waterman2018-02-28
| | | | | | | | | | | | | | Also revert other changes related to IO coherence. This may be the culprit in a recent dev-kernel lockdown. Bug 2070609 Change-Id: Ida178aef161fadbc6db9512521ea51c702c1564b Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: https://git-master.nvidia.com/r/1665914 Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Srikar Srimath Tirumala <srikars@nvidia.com>
* gpu: nvgpu: Use our own vmap() for coherent DMA buffers  (Alex Waterman, 2018-02-27)
  For some reason the GPU does not like the mappings created by the DMA API for
  coherent sysmem buffers, but a plain vmap() does seem to work. To work around
  this, when we are using coherent sysmem, force the NO_KERNEL_MAPPING flag on
  and then create a vmap() mapping in the nvgpu DMA API wrapper (see the sketch
  after this entry). The rest of the driver will be none the wiser but will
  work as expected. The problem is not understood yet and is being tracked in
  bug 2040115; once it is understood, this WAR should either be confirmed as
  necessary or reverted with an appropriate fix.

  Bug 2040115
  JIRA EVLR-2333
  Change-Id: Idae7a0c92441f0309df572ac18697af49bb6ff2b
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1657568
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
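A rough sketch of the workaround's shape, assuming the wrapper has access to the buffer's page array: the helper name and the page-array parameter are assumptions, while vmap() itself is the standard kernel API.

    #include <linux/vmalloc.h>

    /* Sketch: map the pages of a coherent sysmem allocation ourselves instead
     * of relying on the CPU mapping the DMA API would have created. */
    static void *nvgpu_dma_vmap_sysmem(struct page **pages, unsigned int npages)
    {
            /* The buffer was allocated with NO_KERNEL_MAPPING, so no kernel
             * mapping exists yet; create one with plain vmap(). */
            return vmap(pages, npages, VM_MAP, PAGE_KERNEL);
    }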
* gpu: nvgpu: Use coherent aperture flag  (Alex Waterman, 2018-02-27)
  When using the coherent DMA API we must make sure to program any aperture
  fields with the coherent aperture setting. To do this, nvgpu_aperture_mask()
  was modified to take a third aperture mask argument, a coherent setting, so
  that code can use this function to generate coherent aperture settings.

  The aperture choice is somewhat tricky: the default version of this function
  uses the state of the DMA API to determine what aperture to use for SYSMEM,
  either coherent or non-coherent, internally. Thus a kernel user need only
  pass the normal nvgpu_mem struct and the correct mask is chosen. Because many
  nvgpu_mem structs are not created directly from the DMA API wrapper, it is
  easier to translate SYSMEM to SYSMEM_COH after creation. The GMMU mapping
  code, however, will encounter buffers from user space with different
  coherency attributes than the DMA API, so __nvgpu_aperture_mask() respects
  the aperture setting passed in regardless of the DMA API state (see the
  sketch after this entry). That aperture setting is pulled from
  NVGPU_VM_MAP_IO_COHERENT, since this flag is either passed in from user space
  or set by the kernel when using coherent DMA; the aperture field in attrs is
  upgraded to coherent if the flag is set.

  This change also adds a coherent sysmem mask everywhere it can. A couple of
  places do not have a coherent register field defined yet; these need to be
  defined and added eventually. Lastly, the aperture mask code has been moved
  from the Linux vm.c to the common vm.c since it has no Linux dependencies.

  Note: depends on https://git-master.nvidia.com/r/1664536 for new register
  fields.

  JIRA EVLR-2333
  Change-Id: I4b347911ecb7c511738563fe6c34d0e6aa380d71
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1655220
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
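The three-mask selection described above might look roughly like this. The function name __nvgpu_aperture_mask() is from the commit, but the parameter order, enum values, and surrounding types are assumptions.

    /* Sketch of the three-way mask selection: the caller supplies a mask for
     * each aperture and the buffer's (possibly coherence-upgraded) aperture
     * picks which one is programmed into the register field. */
    static u32 __nvgpu_aperture_mask(struct gk20a *g, enum nvgpu_aperture aperture,
                                     u32 sysmem_mask, u32 sysmem_coh_mask,
                                     u32 vidmem_mask)
    {
            switch (aperture) {
            case APERTURE_SYSMEM_COH:
                    return sysmem_coh_mask;
            case APERTURE_SYSMEM:
                    return sysmem_mask;
            case APERTURE_VIDMEM:
                    return vidmem_mask;
            default:
                    return 0;
            }
    }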
* gpu: nvgpu: FIFO sched fixes  (Alex Waterman, 2018-02-27)
  Miscellaneous fixes for the sched code:

  1. Make sure get_addr() on an SGL respects the use-phys flag, since the
     runlist needs physical addresses when NVLINK is in use.
  2. Ensure the runlist is contiguous. Since the runlist memory is not
     virtually addressed, the buffer must be physically contiguous.
  3. Use all 64 bits of the runlist address in the runlist base address
     register (and related fields).

  JIRA EVLR-2333
  Change-Id: Id4fd5ba4665d3e35ff1d6ca78dea6b58894a9a9a
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1654667
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; Reviewed-by: Thomas Fleury <tfleury@nvidia.com>; Tested-by: Thomas Fleury <tfleury@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Get coherency on gv100 + NVLINK working  (Alex Waterman, 2018-02-27)
  This patch does a couple of things. First it renames NVGPU_DMA_COHERENT to
  NVGPU_USE_COHERENT_SYSMEM, since the former is somewhat ambiguous. The latter
  clearly states what must happen: nvgpu needs to treat sysmem as coherent. The
  flag simply follows the state of the DMA API, but there is no reason to
  expect a casual reader of the code to know that when the DMA API is coherent
  nvgpu must treat sysmem as coherent. One thing to note: when the dGPU is
  using PCIe and the PCIe controller is coherent, it does not actually matter
  what we do. However, we use this flag to decide how to make CPU mappings in
  nvgpu_mem_begin(), so it is still relevant for the CPU side of things.

  Next, this patch adds a check in the core kernel GMMU mapping routine to make
  sure that when NVGPU_USE_COHERENT_SYSMEM is set, the IO-coherent flag is
  passed into the mapping code. This is the primary fix that made NVLINK start
  working.

  Finally, the USE_COHERENT_SYSMEM and NVGPU_SUPPORT_IO_COHERENCE flags are set
  both for PCIe and for iGPUs. The iGPU must also correctly match its CPU
  mappings and GPU mappings for proper operation.

  JIRA EVLR-2333
  Change-Id: Icd5f07167c9f48a0a2e8493e34c9cc6238e56907
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1654519
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: move common files out of linux folder  (Richard Zhao, 2018-02-27)
  Most of the files have been moved out of the linux folder. More code can
  become common as HAL-ification goes on.

  Jira EVLR-2364
  Change-Id: Ia9dbdbc82f45ceefe5c788eac7517000cd455d5e
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649947
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: avoid referring to uapi header when setting powergate mode  (Richard Zhao, 2018-02-27)
  Defined the powergate mode in tegra_vgpu.h.

  Jira EVLR-2364
  Change-Id: Id7dcaeffcf0dd8394b4eac6601152720fa382e8c
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649946
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Nirav Patel <nipatel@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: remove unused uapi header includes  (Richard Zhao, 2018-02-27)
  This is the first step toward avoiding uapi header includes.

  Jira EVLR-2364
  Change-Id: Iea572d64a7076bcd3fd1e7196c197c952c04595b
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649945
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Nirav Patel <nipatel@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: add function vgpu_is_reduced_bar1()  (Richard Zhao, 2018-02-27)
  The implementation is OS-specific for now.

  Jira EVLR-2364
  Change-Id: I8ac390b056aa9ca5b5d4ab2ac4dbc06f6689f4a4
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649944
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Nirav Patel <nipatel@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: split vgpu.c into vgpu.c and vgpu_linux.c  (Richard Zhao, 2018-02-27)
  vgpu.c keeps the common code while vgpu_linux.c is Linux-specific.

  Jira EVLR-2364
  Change-Id: Ice9782fa96c256f1b70320886d3720ab0db26244
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649943
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: avoid using sg_table when mapping bar1  (Richard Zhao, 2018-02-27)
  Move to the OS-agnostic function nvgpu_mem_get_addr().

  Jira EVLR-2364
  Change-Id: I2f38567cae35c5d410f082785213af6052150c27
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649942
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Nirav Patel <nipatel@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: move to nvgpu_err/info from dev_err/info  (Richard Zhao, 2018-02-27)
  This helps make the code more portable.

  Jira EVLR-2364
  Change-Id: I0cc1fa739d7884d3c863975f08b3b592acd34613
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649941
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Nirav Patel <nipatel@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: move to nvgpu_msleep()  (Richard Zhao, 2018-02-27)
  This is part of the effort to unify the vgpu code.

  Jira EVLR-2364
  Change-Id: Ieef95fbf1c4bdcec548e4dc741d771a74ebefb9b
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649940
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Nirav Patel <nipatel@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: remove linux/string.h  (Richard Zhao, 2018-02-27)
  It is already included indirectly.

  Jira EVLR-2364
  Change-Id: I9ad134b82a24ffd84358d957ee9b56e1ce4558ba
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649939
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Nirav Patel <nipatel@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: remove smmu checks  (Richard Zhao, 2018-02-27)
  Currently vgpu always disables the SMMU.

  Jira EVLR-2364
  Change-Id: I54dfa5ff6bfda56975617ec526d80359bf3cf672
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649938
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Nirav Patel <nipatel@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: remove vgpu_locked_gmmu_map()  (Richard Zhao, 2018-02-27)
  The function is not used anymore.

  Change-Id: Iad99811e2d356362d16b961464729f5169c36f28
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649937
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Nirav Patel <nipatel@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: move tegra_vgpu.h to include/nvgpu/vgpu/  (Richard Zhao, 2018-02-27)
  tegra_vgpu.h is OS-agnostic, so move it out of the linux folder.

  Jira EVLR-2364
  Change-Id: Ibbe8923f7af036b3b6730f682f5243ca73810f7b
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649936
  Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Nirav Patel <nipatel@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: add ivm wrappers  (Richard Zhao, 2018-02-27)
  Added vgpu_ivm_*() functions to be used by OS-agnostic code.

  Jira EVLR-2364
  Change-Id: I4a2baebcff9723950c4fba99d0879a0c61e3e3a2
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649935
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: vgpu: remove parameter dn from tegra_hv_mempool_reserve()  (Richard Zhao, 2018-02-27)
  The device tree node was not used by the function, so remove the parameter to
  make the function more common.

  Jira EVLR-2364
  Change-Id: I34b143a10c021030a1e94f019081b352f72a51bf
  Signed-off-by: Richard Zhao <rizhao@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1647032
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Nirav Patel <nipatel@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: Update PMU firmware  (Deepak Goyal, 2018-02-26)
  The PMU ucode is updated to include the engine ID in the PG messages sent
  from the PMU to the GPU driver. Previously we were getting random values from
  the PMU ucode because it used the ELPG message structure without initializing
  it, which in turn caused incorrect values in the ELPG state variables
  maintained in the nvgpu driver.

  PMU ucode update: https://git-master.nvidia.com/r/1661642

  Bug 2046561
  Change-Id: Iec1ba87b8d0c0c7ac7423f782fd5a0333a4b5842
  Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1661653
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: support user fence updates  (Deepak Nibade, 2018-02-26)
  Add support for user fence updates, i.e. increments added by user space
  directly in the pushbuffer.

  Add a submit IOCTL flag, NVGPU_SUBMIT_GPFIFO_FLAGS_USER_FENCE_UPDATE, to
  indicate that the user has added increments in the pushbuffer; if so, the
  number of increments is received in fence.value from user space. If the user
  is adding increments in the pushbuffer, we do not need to do any job tracking
  in the kernel, so fail the submit if need_job_tracking evaluates to true and
  FLAGS_USER_FENCE_UPDATE is set. The user is responsible for ensuring all
  prerequisites for a fast submit and for preventing kernel job tracking. Since
  user space adds the increments in the pushbuffer, the kernel only handles the
  threshold bookkeeping.

  Bug 200326065
  Jira NVGPU-179
  Change-Id: Ic0f0b1aa69e3389a4c3305fb6a559c5113719e0f
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1661854
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>; Reviewed-by: Konsta Holtta <kholtta@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add user API to get a syncpoint  (Deepak Nibade, 2018-02-26)
  Add a new user API, NVGPU_IOCTL_CHANNEL_GET_USER_SYNCPOINT, which exposes the
  per-channel allocated syncpoint to user space and also returns the current
  value of the syncpoint. On supported platforms, this API also returns an RW
  semaphore address (corresponding to the syncpoint shim) to user space.

  Add a new characteristics flag, NVGPU_GPU_FLAGS_SUPPORT_USER_SYNCPOINT, to
  indicate support for this new API, and a new flag NVGPU_SUPPORT_USER_SYNCPOINT
  for core driver use; set it for GV11B and GP10B for now. Add a new API
  (*syncpt_address) in struct gk20a_channel_sync to get the GPU_VA address of a
  syncpoint, and a new API nvgpu_nvhost_syncpt_read_maxval() which reads and
  returns the MAX value of a syncpoint. (A hypothetical usage sketch follows
  this entry.)

  Bug 200326065
  Jira NVGPU-179
  Change-Id: I9da6f17b85996f4fc6731c0bf94fca6f3181c3e0
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1658009
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Konsta Holtta <kholtta@nvidia.com>; Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
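To give a feel for how user space might consume this, here is a hypothetical sketch. The ioctl name is from the commit, but the argument struct and its field names are illustrative placeholders, not the real uapi definition.

    #include <sys/ioctl.h>
    #include <linux/types.h>

    /* Hypothetical illustration only: the args layout below is a placeholder,
     * not the actual nvgpu uapi structure. */
    struct example_get_user_syncpoint_args {
            __u64 gpu_va;          /* RW semaphore (syncpoint shim) address, if supported */
            __u32 syncpoint_id;
            __u32 syncpoint_value; /* current value of the syncpoint */
    };

    int example_query_channel_syncpoint(int channel_fd)
    {
            struct example_get_user_syncpoint_args args = {0};

            /* NVGPU_IOCTL_CHANNEL_GET_USER_SYNCPOINT is the ioctl added by this
             * commit; its request number and real args come from the uapi header. */
            return ioctl(channel_fd, NVGPU_IOCTL_CHANNEL_GET_USER_SYNCPOINT, &args);
    }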
* gpu: nvgpu: gv100: nvlink endpoint driver  (Thomas Fleury, 2018-02-26)
  The following changes implement the initial (as per bring-up) nvlink driver:

  (1) SW initialization of nvlink core driver structures
  (2) Nvlink interrupt handling
  (3) Device initialization (IOCTRL, pll and clocks, device-level interrupts)
  (4) Falcon support for minion
  (5) Minion load and bootstrapping
  (6) Link initialization and DL PROD settings
  (7) Device interface init (and switching HSHUB to nvlink)
  (8) HS set/get mode for both link and sublink
  (9) Topology discovery and VBIOS settings
  (10) Ensuring we get physically contiguous memory when nvlink is enabled

  This driver includes a hack for the current single-device/single-link
  limitation.

  JIRA EVLR-2331 / EVLR-2330 / EVLR-2329 / EVLR-2328
  Change-Id: Idca9a819179376cc655784482b24b575a52fa9e5
  Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1656790
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Abstract kernel_restart()  (Alex Waterman, 2018-02-24)
  This function is used in gk20a.c to handle catastrophic error conditions but
  is Linux-specific. As such, implement an abstraction for it in
  driver_common.c and expose the API in nvgpu_common.h.

  JIRA NVGPU-525
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Change-Id: Ie2e417d30af5ff7db76f4d2d5b97ec96c386bd04
  Reviewed-on: https://git-master.nvidia.com/r/1662543
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; Reviewed-by: Deepak Nibade <dnibade@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Add missing log2 header include  (Alex Waterman, 2018-02-24)
  These two files (common/mm/vm.c and common/as.c) both use functions defined
  in log2.h but do not include log2.h. This went unnoticed in nvgpu on Tegra,
  but it is an issue for POSIX.

  JIRA NVGPU-525
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Change-Id: I09250f6928f5cb26bb6b7fbdae13cb703bd8f27b
  Reviewed-on: https://git-master.nvidia.com/r/1662541
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; Reviewed-by: Deepak Nibade <dnibade@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Cleanup map attributes debugging  (Alex Waterman, 2018-02-22)
  Make the map attributes printed by the map debug code more easily readable
  and consistent.

  Change-Id: I9737131a2ea44c6a080dff0095929760888b83ae
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1654518
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: When NVLINK is enabled use phys addresses  (Alex Waterman, 2018-02-21)
  When NVLINK is enabled we need to use phys addresses from the SGT, since
  NVLINK bypasses the SMMU.

  JIRA EVLR-2333
  Change-Id: Ibfc0454fa7616056761f8626f2a611749775d091
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1654561
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: POSIX does not have a strlcpy  (Alex Waterman, 2018-02-16)
  So don't use it in common code. This could be implemented in common code, but
  it would most likely just be a wrapper around strncpy(), since we are not
  going to maintain low-level (possibly asm) implementations of such APIs (a
  sketch of such a wrapper follows this entry).

  NVGPU-525
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Change-Id: If446589cd1736456184daa75ae539c4ce332b741
  Reviewed-on: https://git-master.nvidia.com/r/1658300
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
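For reference, a portable strlcpy-style wrapper built on strncpy() could look like the sketch below; the name nvgpu_strlcpy is an assumed placeholder, not an API this commit adds.

    #include <linux/string.h>

    /* Sketch of a strlcpy-like helper on top of strncpy(): copies at most
     * size-1 bytes, always NUL-terminates, and returns the source length so
     * callers can detect truncation (mirroring strlcpy() semantics). */
    static size_t nvgpu_strlcpy(char *dst, const char *src, size_t size)
    {
            size_t src_len = strlen(src);

            if (size != 0) {
                    strncpy(dst, src, size - 1);
                    dst[size - 1] = '\0';
            }
            return src_len;
    }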
* gpu: nvgpu: vgpu: add characteristic flag for syncpoint address support  (Lakshmanan M, 2018-02-16)
  Add the characteristics flag NVGPU_GPU_FLAGS_SUPPORT_SYNCPOINT_ADDRESS to
  indicate whether the platform supports a semaphore GPU_VA address for a
  syncpoint.

  Bug 200327559
  Change-Id: I20f532e22c29d1adaff0fbc4204e36cc8455e572
  Signed-off-by: Lakshmanan M <lm@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1657983
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>; Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; GVS: Gerrit_Virtual_Submit; Tested-by: Jitendra Pratap Singh Chauhan <jchauhan@nvidia.com>; Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Allow disabling CDE functionality  (Terje Bergstrom, 2018-02-15)
  CDE is a Tegra SoC specific feature. Add a new config option,
  CONFIG_NVGPU_SUPPORT_CDE, and #ifdef all CDE-specific code with it (see the
  sketch after this entry).

  JIRA NVGPU-4
  Change-Id: I6f0b0047d6ba2b5c36c2eb9b8a1514776741f5b5
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1648002
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
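The guarding pattern described above looks roughly like this; CONFIG_NVGPU_SUPPORT_CDE is from the commit, while the function shown is an illustrative placeholder.

    #ifdef CONFIG_NVGPU_SUPPORT_CDE
    /* Real CDE support is compiled in only when the Kconfig option is set. */
    int gk20a_init_cde_support(struct nvgpu_os_linux *l);
    #else
    /* Stub so callers build unchanged when CDE is configured out. */
    static inline int gk20a_init_cde_support(struct nvgpu_os_linux *l)
    {
            return 0;
    }
    #endif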
* gpu: nvgpu: gv11b: limit min freq to 216.75 MHz  (seshendra Gadagottu, 2018-02-15)
  Until the issue related to low frequencies is root-caused, limit the minimum
  frequency to a known safe value: 216.75 MHz. This change needs to be reverted
  once the original issue is root-caused and fixed.

  Bug 2051863
  Bug 2056266
  Change-Id: If6e56f59ee5fa06967fde1128b58a7fc97be74e9
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1657595
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Remove the use of READ_ONLY for DMA API  (Terje Bergstrom, 2018-02-15)
  The READ_ONLY flag for the DMA API is Tegra-specific. We use it only to
  prevent accidental writes to the non-secure ACR bootloader. Its use is
  marginal, so remove the flag.

  JIRA NVGPU-4
  Change-Id: I887dc04aee8f7ace40220294851b210375dfde98
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1648174
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Use preallocated VPR buffer  (Terje Bergstrom, 2018-02-15)
  To prevent deadlock while allocating VPR in nvgpu, allocate all the needed
  VPR memory at probe time and use an internal allocator to hand out space for
  VPR buffers.

  Change-Id: I584b9a0f746d5d1dec021cdfbd6f26b4b92e4412
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1655324
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: delete nvgpu_semaphore_int list  (Konsta Holtta, 2018-02-14)
  The hw semas in a sema pool are stored in a list, and all elements in this
  list are freed in a loop when a semaphore pool is destroyed. However, each hw
  sema is always owned by a channel, and each such channel frees its hw sema
  during channel closure before putting a ref to the VM which holds a ref to
  the sema pool. The lifetime of all the hw semas is therefore shorter than
  that of the pool, and this list is always empty when the pool is freed.
  Delete the list and this freeing loop. Also delete the nr_incrs member in
  nvgpu_semaphore_int, which is never accessed.

  Jira NVGPU-512
  Change-Id: Ie072029f9e7cc749141e9f02ef45fdf64358ad96
  Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1653540
  Reviewed-by: Deepak Nibade <dnibade@nvidia.com>; Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; Reviewed-by: Alex Waterman <alexw@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: enable elpg  (seshendra Gadagottu, 2018-02-13)
  Enabled Engine Level Power Gating for gv11b.

  Bug 2051863
  Change-Id: I59a51dbe8fa9f13e4b8be03f02e1571093fdaeb0
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1646322
  Reviewed-by: Automatic_Commit_Validation_User; Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: call nvgpu_init_mm_vars just after probe  (Seema Khowala, 2018-02-09)
  It is good to initialize the mm variables right after probe, as the driver is
  heavily dependent on the enabled flags for all kinds of memory-related needs.

  Change-Id: I62ca280ff9240649798faa34767f7dc9ea3c0db1
  Signed-off-by: Seema Khowala <seemaj@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649724
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: Fix CBC base calculation  (Sami Kiminki, 2018-02-09)
  On GV11B, the CBC base is calculated in a similar fashion to how it is
  calculated on dGPUs. Thus, remove gv11b_ltc_cbc_fix_config(), as it would
  incorrectly multiply the CBC base by the LTC count.

  Bug 2054860
  Change-Id: Iaed717161547468c17e12236149d970c497885b3
  Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1654506
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: handle pm_prepare_poweroff failure  (seshendra Gadagottu, 2018-02-09)
  As part of gk20a_pm_prepare_poweroff, the GPU HW state is destroyed even if
  an error occurs. Try to recover from that situation by calling
  gk20a_pm_finalize_poweron.

  Bug 200380708
  Change-Id: Ibff656cda67241ad111fd22701e05871f20d6f70
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1653750
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Rely on own dma attribute handling  (Terje Bergstrom, 2018-02-08)
  The Tegra kernel abstracts the Linux 4.4 vs. Linux 4.9 differences from
  drivers. The upstream kernel does not provide that facility, so add an
  nvgpu-internal way of dealing with the differences.

  JIRA NVGPU-4
  Change-Id: I8289fdcf98873de14398bffc808d89a675f2aa15
  Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1648160
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add vpr flag in gpu characteristics  (Aparna Das, 2018-02-08)
  VPR is currently not supported in the virtualized configuration. Allow
  reporting the VPR capability in gpu characteristics.

  Jira EVLR-2236
  Change-Id: Id61a0045577e4add0d9cdfddcefcedd5b20eb1dd
  Signed-off-by: Aparna Das <aparnad@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1639798
  (cherry picked from commit 4136b74fd4435966ee2e69ec88fb66424382a7c0)
  Reviewed-on: https://git-master.nvidia.com/r/1640712
  Reviewed-by: Automatic_Commit_Validation_User; Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: add user API to get read-only syncpoint address map  (Deepak Nibade, 2018-02-07)
  Add a user-space API, NVGPU_AS_IOCTL_GET_SYNC_RO_MAP, to get the read-only
  syncpoint address map in user space. We already map the whole syncpoint shim
  into each address space, with the base address being vm->syncpt_ro_map_gpu_va.
  This new API exposes that base GPU_VA address of the syncpoint map, and the
  unit size of each syncpoint, to user space. User space can then calculate the
  address of each syncpoint as

    syncpoint_address = base_gpu_va + (syncpoint_id * syncpoint_unit_size)

  (see the sketch after this entry). Note that this syncpoint address is
  read-only and should only be used for inserting semaphore acquires; adding a
  semaphore release with this address would result in an MMU_FAULT. Define a
  new HAL, g->ops.fifo.get_sync_ro_map, and set it for all GPUs supported on
  the Xavier SoC.

  Bug 200327559
  Change-Id: Ica0db48fc28fdd0ff2a5eb09574dac843dc5e4fd
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649365
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
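The address arithmetic above is simple enough to show directly. The helper below is an illustrative sketch: the formula comes from the commit, while the function name and parameter names are assumptions.

    #include <linux/types.h>

    /* Compute the read-only GPU_VA of a syncpoint from the values returned by
     * NVGPU_AS_IOCTL_GET_SYNC_RO_MAP (illustrative helper, not part of nvgpu). */
    static inline __u64 syncpt_ro_gpu_va(__u64 base_gpu_va, __u32 syncpoint_id,
                                         __u32 syncpoint_unit_size)
    {
            return base_gpu_va + ((__u64)syncpoint_id * syncpoint_unit_size);
    }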
* gpu: nvgpu: add characteristic flag for syncpoint address support  (Deepak Nibade, 2018-02-07)
  Add a characteristics flag, NVGPU_GPU_FLAGS_SUPPORT_SYNCPOINT_ADDRESS, to
  indicate whether the platform supports a semaphore GPU_VA address for a
  syncpoint, and define NVGPU_SUPPORT_SYNCPOINT_ADDRESS for core driver
  bookkeeping. Set this flag for both GV100 and GV11B, since the Xavier SoC
  supports a semaphore GPU_VA address for a syncpoint through the syncpoint
  SHIM.

  Bug 200327559
  Change-Id: I1f31673c9fd59f493d0b35a80d23151fc063ae06
  Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649364
  Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>; GVS: Gerrit_Virtual_Submit; Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: gv11b: add scg support info in gpu characteristics  (seshendra Gadagottu, 2018-02-02)
  Indicate support for Simultaneous Compute and Graphics (SCG) in the gpu
  characteristics for gv11b.

  Bug 2053932
  Change-Id: I788e22242083dff775dd4cc5b9aa73c938028536
  Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1649805
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Cleanup usage of bypass_smmu  (Alex Waterman, 2018-02-02)
  The GPU has multiple operating modes with respect to IOMMU'ability, so there
  needs to be a clean way to tell the driver whether it is IOMMU'able or not.
  This state also does not always reflect what is possible: just because the
  GPU can generate IOMMU'ed memory requests does not mean it wants to.

  The nvgpu_iommuable() API has existed for a little while and is a useful way
  to convey whether nvgpu should consider the GPU as IOMMU'able. However, there
  is also the g->mm.bypass_smmu flag, which used to be able to override what
  the GPU decided it should do. Typically it was assigned the same value as
  nvgpu_iommuable(), but that was not necessarily a requirement.

  This patch removes all usages of g->mm.bypass_smmu and instead uses the
  nvgpu_iommuable() function; every check against g->mm.bypass_smmu has been
  replaced with nvgpu_iommuable() (see the sketch after this entry). The code
  should now be much cleaner, and other checks can subsequently be placed in
  nvgpu_iommuable() as well. For example, when NVLINK comes online and the GPU
  should no longer use DMA addresses but instead use scatter-gather lists
  directly, nvgpu_iommuable() will be able to check the state of NVLINK and act
  accordingly.

  Change-Id: I0da6262386de15709decac89d63d3eecfec20cd7
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1648332
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
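The conversion pattern is roughly the following, a sketch assuming (as the commit describes) that bypass_smmu set means the GPU is treated as not IOMMU'able; the helper name is an illustrative placeholder.

    /* Sketch: a single query point replaces the open-coded flag check that was
     * previously scattered through the driver. */
    static bool gpu_uses_iommu_addresses(struct gk20a *g)
    {
            /* Previously: return !g->mm.bypass_smmu; */
            return nvgpu_iommuable(g);
    }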
* gpu: nvgpu: Remove init_state initialization code  (Tejal Kudav, 2018-02-02)
  The nvlink core library no longer exposes the set_init_state() interface, as
  it wishes to block init_state changes from endpoint drivers. The core driver
  is now responsible for initializing the init_state variables using the
  set_init_state() interface, so we remove this redundant code.

  Change-Id: I81c4922cf48f7918e69795579b39b7fa0c299644
  Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1646437
  Reviewed-by: Automatic_Commit_Validation_User; GVS: Gerrit_Virtual_Submit; Reviewed-by: Adeel Raza <araza@nvidia.com>; Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
* gpu: nvgpu: Set DMA mask to 34 bits  (Alex Waterman, 2018-02-01)
  Set the DMA mask to 34 bits so that large DMA allocations can be done.
  Currently the DMA mask is left unset, which limits the maximum DMA allocation
  to 32 bits. The 34-bit mask was chosen because it works for all chips (even
  gm20b supports 34-bit physical addresses); newer chips could use larger masks
  in the future if they desire. (A sketch of the call follows this entry.)

  Bug 200377221
  Change-Id: Iaa0543f77ff4e2bd6616f38e4464240375bb37b6
  Signed-off-by: Alex Waterman <alexw@nvidia.com>
  Reviewed-on: https://git-master.nvidia.com/r/1641762
  Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>; Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
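Setting such a mask is normally a one-liner with the standard kernel DMA API; the sketch below shows the idea (where exactly nvgpu makes this call is not stated in the commit, and the wrapper name is an assumption).

    #include <linux/dma-mapping.h>

    /* Sketch: advertise 34-bit DMA addressing so allocations are not capped at
     * the 32-bit default that applies when no mask is set. */
    static int nvgpu_set_dma_mask(struct device *dev)
    {
            return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(34));
    }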