author	Nick Piggin <npiggin@suse.de>	2008-07-25 22:45:22 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2008-07-26 15:00:05 -0400
commit	21cc199baa815d7b3f1ace4be20b9558cbddc00f
tree	eb4f3fa42a83613e2fe586b2555a811740952dce /include
parent	a0a8f5364a5ad248aec6cb705e0092ff563edc2f
mm: introduce get_user_pages_fast
Introduce a new get_user_pages_fast mm API: essentially get_user_pages
with a less general interface, but one that still suits the common case:
- task and mm are always current and current->mm
- force is always 0
- pages is always non-NULL
- don't pass back vmas
This restricted API can be implemented in a much more scalable way on many
architectures when the ptes are present, by walking the page tables
locklessly (no mmap_sem or page table locks). When the ptes are not
populated, get_user_pages_fast() could be slower.
This is implemented locklessly on x86 and, in later patches, used in some
key direct IO call sites, giving nearly a 10% performance improvement on a
threaded database workload.
Lots of other code could use this too, depending on use cases (e.g. grep
drivers/). And it might inspire some new and clever ways to use it.
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Dave Kleikamp <shaggy@austin.ibm.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Badari Pulavarty <pbadari@us.ibm.com>
Cc: Zach Brown <zach.brown@oracle.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Reviewed-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include')
 include/linux/mm.h | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+), 0 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index d87a5a5fe87d..f3fd70d6029f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -833,6 +833,39 @@ extern int mprotect_fixup(struct vm_area_struct *vma,
 	struct vm_area_struct **pprev, unsigned long start,
 	unsigned long end, unsigned long newflags);
 
+#ifdef CONFIG_HAVE_GET_USER_PAGES_FAST
+/*
+ * get_user_pages_fast provides equivalent functionality to get_user_pages,
+ * operating on current and current->mm (force=0 and doesn't return any vmas).
+ *
+ * get_user_pages_fast may take mmap_sem and page tables, so no assumptions
+ * can be made about locking. get_user_pages_fast is to be implemented in a
+ * way that is advantageous (vs get_user_pages()) when the user memory area is
+ * already faulted in and present in ptes. However if the pages have to be
+ * faulted in, it may turn out to be slightly slower).
+ */
+int get_user_pages_fast(unsigned long start, int nr_pages, int write,
+			struct page **pages);
+
+#else
+/*
+ * Should probably be moved to asm-generic, and architectures can include it if
+ * they don't implement their own get_user_pages_fast.
+ */
+#define get_user_pages_fast(start, nr_pages, write, pages)	\
+({								\
+	struct mm_struct *mm = current->mm;			\
+	int ret;						\
+								\
+	down_read(&mm->mmap_sem);				\
+	ret = get_user_pages(current, mm, start, nr_pages,	\
+			write, 0, pages, NULL);			\
+	up_read(&mm->mmap_sem);					\
+								\
+	ret;							\
+})
+#endif
+
 /*
  * A callback you can register to apply pressure to ageable caches.
  *