author     Dave Hansen <dave.hansen@linux.intel.com>  2016-02-12 16:01:56 -0500
committer  Ingo Molnar <mingo@kernel.org>             2016-02-16 04:11:12 -0500
commit     d4edcf0d56958db0aca0196314ca38a5e730ea92 (patch)
tree       cf22f82e4768f9db3b7b59491c5188e3d725ff9a /arch/s390/mm
parent     cde70140fed8429acf7a14e2e2cbd3e329036653 (diff)
mm/gup: Switch all callers of get_user_pages() to not pass tsk/mm
We will soon modify the vanilla get_user_pages() so that it can no
longer be used on a task/mm other than current/current->mm, which is
by far the most common way it is called. For now, we still allow the
old-style calls but warn when they are used (implemented in the
previous patch).
This patch switches all callers of:
get_user_pages()
get_user_pages_unlocked()
get_user_pages_locked()
to stop passing tsk/mm so they will no longer see the warnings.
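For reference, a minimal sketch of what the conversion looks like from
a caller's side. The wrapper pin_user_buffer() below is hypothetical
and not part of this patch; only the get_user_pages_unlocked()
signatures are taken from the change itself:

/*
 * Hypothetical caller, shown only to illustrate the conversion.
 */
#include <linux/mm.h>
#include <linux/sched.h>

static long pin_user_buffer(unsigned long start, unsigned long nr_pages,
			    struct page **pages)
{
	/*
	 * Old-style call, which now triggers a warning: tsk and mm are
	 * passed explicitly even though they are almost always
	 * current/current->mm:
	 *
	 *	return get_user_pages_unlocked(current, current->mm, start,
	 *				       nr_pages, 1, 0, pages);
	 */

	/* New-style call: current/current->mm are implied. */
	return get_user_pages_unlocked(start, nr_pages, 1, 0, pages);
}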
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave@sr71.net>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: jack@suse.cz
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20160212210156.113E9407@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch/s390/mm')
 -rw-r--r--  arch/s390/mm/gup.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/arch/s390/mm/gup.c b/arch/s390/mm/gup.c
index 13dab0c1645c..49a1c84ed266 100644
--- a/arch/s390/mm/gup.c
+++ b/arch/s390/mm/gup.c
@@ -210,7 +210,6 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 int get_user_pages_fast(unsigned long start, int nr_pages, int write,
 			struct page **pages)
 {
-	struct mm_struct *mm = current->mm;
 	int nr, ret;
 
 	might_sleep();
@@ -222,8 +221,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	/* Try to get the remaining pages with get_user_pages */
 	start += nr << PAGE_SHIFT;
 	pages += nr;
-	ret = get_user_pages_unlocked(current, mm, start,
-				      nr_pages - nr, write, 0, pages);
+	ret = get_user_pages_unlocked(start, nr_pages - nr, write, 0, pages);
 	/* Have to be a bit careful with return values */
 	if (nr > 0)
 		ret = (ret < 0) ? nr : ret + nr;