Put all debugfs dependencies inside #ifdef CONFIG_DEBUG_FS. This
includes some functions in allocators that were used only for
debugging.
Remove the include of linux/debugfs.h from files that do not deal
with debugfs.
linux/debugfs.h implicitly included linux/fs.h, which we relied on,
so add an explicit include of linux/fs.h to every file where this is
the case.
Change-Id: I16feffae6b0e3a2edf366075cdc01ade86be06f9
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1467897
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
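A minimal sketch of the pattern this commit describes; the function
name and file contents here are illustrative, not taken from the
actual change:

    /* This file used to get linux/fs.h transitively through
     * linux/debugfs.h; the dependency is now explicit. */
    #include <linux/fs.h>

    #ifdef CONFIG_DEBUG_FS
    #include <linux/debugfs.h>
    #include <linux/seq_file.h>

    /* Debug-only helper: compiled out when CONFIG_DEBUG_FS is off. */
    static int example_allocator_show(struct seq_file *s, void *unused)
    {
            seq_puts(s, "allocator state\n");
            return 0;
    }
    #endif /* CONFIG_DEBUG_FS */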
Remove use of linux/kernel.h and linux/compiler.h; we don't use
anything from those headers.
Also replace use of linux/bitops.h with the new wrapper header
nvgpu/bitops.h.
JIRA NVGPU-13
Change-Id: Iefa6b4598d5a5e7fc386c0a7a554e778a87010d6
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1460777
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
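The commit does not show the wrapper itself; on Linux such a wrapper
header could plausibly be as small as the sketch below (hypothetical,
not the real file contents):

    /* include/nvgpu/bitops.h -- hypothetical sketch. Common code
     * includes this instead of linux/bitops.h; on Linux it simply
     * forwards to the kernel header. */
    #ifndef NVGPU_BITOPS_H
    #define NVGPU_BITOPS_H

    #include <linux/bitops.h>

    #endif /* NVGPU_BITOPS_H */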
Remove all usage of vmalloc.h and slab.h outside of the
Linux-specific kmem API implementation code.
Bug 1799159
Bug 1823380
Change-Id: I5b2a91bd1057b272efeaddc24902f6133b35024f
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1331703
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
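The effect on common code, sketched below; nvgpu/kmem.h and the
nvgpu_kmalloc()/nvgpu_kfree() wrappers are assumed from the related
kmem API commits, and example_init() is hypothetical:

    /* Before: #include <linux/slab.h> and <linux/vmalloc.h> directly.
     * After: only the kmem API header; its Linux implementation is
     * the one place that still includes slab.h and vmalloc.h. */
    #include <nvgpu/kmem.h>

    static int example_init(struct gk20a *g)
    {
            void *buf = nvgpu_kmalloc(g, 128);

            if (!buf)
                    return -ENOMEM;

            nvgpu_kfree(g, buf);
            return 0;
    }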
Use the new kmem API functions in common/* and common/mm/*.
Add a struct gk20a pointer to struct nvgpu_allocator so that
allocator code has the gk20a handle it needs for allocating memory.
Bug 1799159
Bug 1823380
Change-Id: I881ea9545e8a8f0b75d77a1e35dd1812e0bb654e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1318315
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
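A sketch of the idea; the field placement and the helper below are
illustrative, and nvgpu_kzalloc() is assumed from the kmem API:

    /* The allocator carries the gk20a pointer so internal code can
     * call the kmem API, which needs a struct gk20a for accounting. */
    struct nvgpu_allocator {
            struct gk20a *g;
            /* ops, lock, allocator-private data, ... */
    };

    static void *example_alloc_meta(struct nvgpu_allocator *a, size_t len)
    {
            /* Allocation is tracked against a->g instead of a gk20a
             * argument threaded through every call site. */
            return nvgpu_kzalloc(a->g, len);
    }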
- check return value of nvgpu_mutex_init in nvgpu_allocator
- add corresponding nvgpu_mutex_destroy calls
Jira NVGPU-13
Change-Id: I1522b53fb4126fdc2b24fda4b86b45987fe3d203
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1317137
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Navneet Kumar <navneetk@nvidia.com>
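A sketch of the pattern (the function and field names are
illustrative):

    static int example_allocator_init(struct nvgpu_allocator *a)
    {
            int err = nvgpu_mutex_init(&a->lock);

            if (err)
                    return err;     /* init can fail; propagate it */

            return 0;
    }

    static void example_allocator_fini(struct nvgpu_allocator *a)
    {
            /* Every successful init now has a matching destroy. */
            nvgpu_mutex_destroy(&a->lock);
    }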
Instead of using the Linux mutex and spinlock APIs
directly, use the new APIs defined in <nvgpu/lock.h>.
Replace the Linux-specific mutex/spinlock declaration,
init, lock, and unlock APIs with the new APIs;
e.g. struct mutex is replaced by struct nvgpu_mutex and
mutex_lock() is replaced by nvgpu_mutex_acquire().
Also include <nvgpu/lock.h> instead of including
<linux/mutex.h> and <linux/spinlock.h>.
Add explicit nvgpu/lock.h includes to the files below
to fix compilation failures:
gk20a/platform_gk20a.h
include/nvgpu/allocator.h
Jira NVGPU-13
Change-Id: I81a05d21ecdbd90c2076a9f0aefd0e40b215bd33
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1293187
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
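The mapping, side by side, as a sketch; nvgpu_mutex_release() is
assumed to be the unlock counterpart of nvgpu_mutex_acquire():

    /* Before: Linux-specific locking. */
    #include <linux/mutex.h>

    struct foo {
            struct mutex lock;
    };

    static void foo_update(struct foo *f)
    {
            mutex_lock(&f->lock);
            /* ... critical section ... */
            mutex_unlock(&f->lock);
    }

    /* After: the OS-agnostic nvgpu wrappers. */
    #include <nvgpu/lock.h>

    struct nvfoo {
            struct nvgpu_mutex lock;
    };

    static void nvfoo_update(struct nvfoo *f)
    {
            nvgpu_mutex_acquire(&f->lock);
            /* ... critical section ... */
            nvgpu_mutex_release(&f->lock);
    }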
The basic structure of this patch is to make the small page allocator
and the large page allocator into pointers (where they used to be just
structs), and then point both at the same underlying allocator, since
the buddy allocator has supported mixed page sizes since its inception.
For the rest of the driver, some changes had to be made in order to
actually support mixed page sizes in a single address space.
1. Unifying the allocation page size determination
Since the allocation and map operations happen at distinct
times, both mapping and allocation of GVA space must agree
on page size. This is because the allocator has to keep
allocations of different page sizes in separate PDEs, to
avoid having to support mixed-page-size PDEs.
To this end a function, __get_pte_size(), was introduced
which is used both by the balloc code and the core GPU MM
code. It determines page size based only on the length of
the mapping/allocation (see the sketch after this message).
2. Fixed address allocation + page size
Similar to regular mappings/GVA allocations, fixed-address
mapping page size determination had to be modified. In the
past the address of the mapping determined page size, since
the address space was split by address (low addresses were
small pages, high addresses large pages). Since that is no
longer the case, the page size field in the reserve memory
ioctl is now honored by the mapping code. When, for
instance, CUDA makes a memory reservation, it specifies
small or large pages. When CUDA then requests mappings
within that address range, the page size is looked up in
the reserved memory struct.
Fixed address reservations were also modified to always
allocate at PDE granularity (64M or 128M, depending on the
large page size). This prevents non-fixed allocations from
ending up in the same PDE and causing kernel panics or GMMU
faults.
3. The rest...
The rest of the changes are just byproducts of the above.
Lots of places required minor updates to use a pointer to
the GVA allocator struct instead of the struct itself.
Lastly, this change is not truly complete. More work remains
to be done in order to fully remove the notion of separate
address spaces for different page sizes. After this patch,
what remains is basically cleanup and proper documentation.
Bug 1396644
Bug 1729947
Change-Id: If51ab396a37ba16c69e434adb47edeef083dce57
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1265300
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
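A sketch of the size-only page size decision described in (1);
everything here apart from the __get_pte_size() name is illustrative,
and the real function's conditions are more involved:

    /* Page size is derived purely from the mapping/allocation length,
     * so the balloc code and the GMMU mapping code, called at
     * different times, arrive at the same answer independently. */
    static u32 example_get_pte_size(struct vm_gk20a *vm, u64 size)
    {
            u64 big_page_size = vm->big_page_size;  /* e.g. 64K/128K */

            if (big_page_size != 0ULL && size >= big_page_size)
                    return gmmu_page_size_big;

            return gmmu_page_size_small;
    }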
Move the GPU allocators to common/mm/ since the allocators are common
code across all GPUs. Also rename the allocator code to move away from
gk20a_-prefixed structs and functions.
This caused one naming clash: there were already nvgpu_alloc() and
nvgpu_free() functions that allocated with either kmalloc() or
vmalloc() depending on the size of the allocation. Those have now
been renamed to nvgpu_kalloc() and nvgpu_kfree().
Bug 1799159
Change-Id: Iddda92c013612bcb209847084ec85b8953002fa5
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1274400
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
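Conceptually, nvgpu_kalloc()/nvgpu_kfree() behave like the sketch
below; the PAGE_SIZE threshold is an assumption, not necessarily what
the driver uses:

    #include <linux/mm.h>       /* kvfree()   */
    #include <linux/slab.h>     /* kmalloc()  */
    #include <linux/vmalloc.h>  /* vmalloc()  */

    static inline void *example_kalloc(size_t size)
    {
            /* Small requests come from the slab; large ones fall back
             * to vmalloc to avoid high-order page allocations. */
            if (size > PAGE_SIZE)
                    return vmalloc(size);

            return kmalloc(size, GFP_KERNEL);
    }

    static inline void example_kfree(void *p)
    {
            kvfree(p);  /* works for kmalloc'd and vmalloc'd memory */
    }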