| author | Deepak Nibade <dnibade@nvidia.com> | 2016-07-28 05:07:18 -0400 |
|---|---|---|
| committer | mobile promotions <svcmobile_promotions@nvidia.com> | 2016-09-01 12:10:20 -0400 |
| commit | f79639f61858c377cf1f3facfc0ce631f787f0e6 (patch) | |
| tree | 188d4033c93fcf0e5b819c074dc436b9b36f448e /include/trace | |
| parent | aa7f4bf251ee6346bf300f3793002eb4a7f05562 (diff) | |
gpu: nvgpu: clear whole vidmem on first allocation
We currently clear vidmem pages in gk20a_gmmu_alloc_attr_vid_at(),
i.e. in the allocation path of each buffer.
But since the buffer allocation path can be latency-critical,
instead clear the whole of vidmem up front, before the first user
allocation, in gk20a_vidmem_buf_alloc(),
and then clear a buffer's pages when the buffer is released.
This way we ensure that vidmem pages are already cleared
by the time the buffer allocation path runs.
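Since the clear now happens before the first user allocation, a simple double-checked gate at the top of gk20a_vidmem_buf_alloc() captures the idea. This is a minimal sketch, assuming illustrative field names (g->mm.vidmem.cleared, first_clear_mutex) and signatures rather than the verbatim driver code:

```c
/* A minimal sketch of the one-time clear gate; field names and the
 * function signature are illustrative, not the verbatim nvgpu code. */
static int gk20a_vidmem_buf_alloc(struct gk20a *g, size_t bytes)
{
	int err = 0;

	/* Double-checked gate: clear all of vidmem exactly once,
	 * before the first user allocation goes through. */
	if (!g->mm.vidmem.cleared) {
		mutex_lock(&g->mm.vidmem.first_clear_mutex);
		if (!g->mm.vidmem.cleared) {
			err = gk20a_vidmem_clear_all(g);
			if (!err)
				g->mm.vidmem.cleared = true;
		}
		mutex_unlock(&g->mm.vidmem.first_clear_mutex);
		if (err)
			return err;
	}

	/* ... normal buffer allocation continues here ... */
	return 0;
}
```

The second check under the mutex keeps concurrent first allocations from clearing vidmem twice, while the unlocked fast path keeps steady-state allocations cheap.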
At a later stage, clearing of pages can be removed from the free path
and moved to a separate worker as well.
At this point, the first allocation carries the overhead of clearing
the whole of vidmem, which takes about 380 ms; this should improve
once clocks are raised.
Also, this is a one-time latency: subsequent allocations
have no clearing overhead at all.
Add API gk20a_vidmem_clear_all() to clear the whole of vidmem.
We have WPR buffers allocated during boot
at a fixed address in vidmem.
To prevent gk20a_vidmem_clear_all() from overwriting these buffers,
clear the whole of vidmem except for the bootstrap allocator carveout.
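As a rough sketch of what gk20a_vidmem_clear_all() has to do, the code below zeroes the two spans of vidmem on either side of the carveout. The field names (base, size, bootstrap_base, bootstrap_size) and the vidmem_fill_zero() helper are illustrative assumptions; the real driver would issue such clears through the hardware rather than from the CPU.

```c
/* A minimal sketch of gk20a_vidmem_clear_all(), assuming a hypothetical
 * vidmem_fill_zero(g, base, size) helper and illustrative field names;
 * not the verbatim nvgpu implementation. */
static int gk20a_vidmem_clear_all(struct gk20a *g)
{
	u64 base = g->mm.vidmem.base;               /* start of vidmem   */
	u64 size = g->mm.vidmem.size;               /* total vidmem size */
	u64 co_base = g->mm.vidmem.bootstrap_base;  /* carveout start    */
	u64 co_size = g->mm.vidmem.bootstrap_size;  /* carveout length   */
	int err;

	/* Zero the span below the bootstrap allocator carveout... */
	err = vidmem_fill_zero(g, base, co_base - base);
	if (err)
		return err;

	/* ...and the span above it, so the WPR buffers that live inside
	 * the carveout are never overwritten. */
	return vidmem_fill_zero(g, co_base + co_size,
				base + size - (co_base + co_size));
}
```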
Add a new API, gk20a_gmmu_clear_vidmem_mem(), to clear a single mem_desc.
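A similarly hedged sketch of gk20a_gmmu_clear_vidmem_mem(): a vidmem allocation may be scattered across several physical chunks, so clearing one mem_desc means walking its chunk list. The chunk structure, list field names, and the vidmem_fill_zero() helper are again assumptions for illustration.

```c
/* A minimal sketch of gk20a_gmmu_clear_vidmem_mem(); the chunk list
 * layout and vidmem_fill_zero() helper are illustrative assumptions. */
static int gk20a_gmmu_clear_vidmem_mem(struct gk20a *g,
				       struct mem_desc *mem)
{
	struct page_alloc_chunk *chunk;
	int err;

	/* Zero each chunk backing the buffer, so its pages can later be
	 * handed out pre-cleared from the allocation path. */
	list_for_each_entry(chunk, &mem->alloc_chunks, list_entry) {
		err = vidmem_fill_zero(g, chunk->base, chunk->length);
		if (err)
			return err;
	}
	return 0;
}
```

Calling this from the free path is what keeps the "pages are already cleared at allocation time" invariant after the one-time whole-vidmem clear.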
Jira DNVGPU-84
Change-Id: I5661700585c6241a6a1ddeb5b7c068d3d2aed4b3
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1194301
(cherry picked from commit 950ab61a04290ea405968d8b0d03e3bd044ce83d)
Reviewed-on: http://git-master/r/1193158
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Diffstat (limited to 'include/trace')
0 files changed, 0 insertions, 0 deletions