author     Hugh Dickins <hughd@google.com>                 2010-07-30 13:58:26 -0400
committer  Linus Torvalds <torvalds@linux-foundation.org>  2010-07-30 21:56:09 -0400
commit     de51257aa301652876ab6e8f13ea4eadbe4a3846 (patch)
tree       388ee39bed1d7e362438d047b57399a28e2617f8 /mm/memory.c
parent     51c20fcced5badee0e2021c6c89f44aa3cbd72aa (diff)
mm: fix ia64 crash when gcore reads gate area
Debian's ia64 autobuilders have been seeing kernel freeze or reboot
when running the gdb testsuite (Debian bug 588574): dannf bisected to
2.6.32 62eede62dafb4a6633eae7ffbeb34c60dba5e7b1 "mm: ZERO_PAGE without
PTE_SPECIAL"; and reproduced it with gdb's gcore on a simple target.
I'd missed updating the gate_vma handling in __get_user_pages(): that
happens to use vm_normal_page() (nowadays failing on the zero page),
yet reported success even when it failed to get a page - boom when
access_process_vm() tried to copy that to its intermediate buffer.
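For context on where the NULL page actually bit: gcore reaches the target's memory through access_process_vm(), which pins each page with get_user_pages() and then kmaps and copies it. The sketch below is a simplified, illustrative version of that consumer step (the helper name copy_one_page() and its exact shape are mine, modelled on the 2.6.35-era get_user_pages() signature, not the kernel's own code); it marks the point where a NULL entry reported as success gets dereferenced.

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>
#include <linux/sched.h>

/*
 * Simplified sketch of access_process_vm()'s per-page copy, not the
 * exact source.  Before this fix, the gate-area branch of
 * __get_user_pages() could store NULL into pages[] yet still report
 * success, so kmap() below was handed a NULL page and the copy
 * dereferenced a bogus address.
 */
static int copy_one_page(struct task_struct *tsk, struct mm_struct *mm,
			 unsigned long addr, void *buf, int len)
{
	struct vm_area_struct *vma;
	struct page *page;
	void *maddr;
	int offset = addr & (PAGE_SIZE - 1);
	int bytes = min_t(int, len, PAGE_SIZE - offset);

	/* one page, read-only, forced: the flags access_process_vm() uses */
	if (get_user_pages(tsk, mm, addr, 1, 0, 1, &page, &vma) <= 0)
		return -EFAULT;

	maddr = kmap(page);		/* boom here if page == NULL */
	copy_from_user_page(vma, page, addr, buf, maddr + offset, bytes);
	kunmap(page);
	page_cache_release(page);
	return bytes;
}

With the fix below, the same consumer still gets a pinned page for the gate area's zero-page PTEs (unless FOLL_DUMP asked for a hole), so the copy is safe.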
Fix this, resisting cleanups: in particular, leave it for now reporting
success when not asked to get any pages - very probably safe to change,
but let's not risk it without testing exposure.
Why did ia64 crash with 16kB pages, but succeed with 64kB pages?
Because setup_gate() pads each 64kB of its gate area with zero pages.
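To spell out the arithmetic behind that: per 64kB chunk of the gate area, a 64kB PAGE_SIZE means the whole chunk is one real page, while a 16kB PAGE_SIZE leaves the pages beyond the real contents mapped to the zero page, which are exactly the PTEs vm_normal_page() now refuses. A tiny standalone illustration (my own toy program, assuming as above that the real contents fit in the first page of each chunk):

#include <stdio.h>

/*
 * Toy illustration of the commit text above, not kernel code: per 64kB
 * chunk of the ia64 gate area, count how many PTEs end up pointing at
 * the shared zero page, assuming the real contents occupy the first page.
 */
int main(void)
{
	const unsigned long chunk = 64 * 1024;
	const unsigned long page_sizes[] = { 16 * 1024, 64 * 1024 };

	for (int i = 0; i < 2; i++) {
		unsigned long psz = page_sizes[i];
		unsigned long pages = chunk / psz;

		printf("PAGE_SIZE %2lukB: %lu page(s) per chunk, %lu zero-page pad(s)\n",
		       psz / 1024, pages, pages - 1);
	}
	return 0;	/* 16kB -> 3 zero pages to trip over; 64kB -> none */
}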
Reported-by: Andreas Barth <aba@not.so.argh.org>
Bisected-by: dann frazier <dannf@debian.org>
Signed-off-by: Hugh Dickins <hughd@google.com>
Tested-by: dann frazier <dannf@dannf.org>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/memory.c')
-rw-r--r--	mm/memory.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 119b7ccdf39..bde42c6d363 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1394,10 +1394,20 @@ int __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 				return i ? : -EFAULT;
 			}
 			if (pages) {
-				struct page *page = vm_normal_page(gate_vma, start, *pte);
+				struct page *page;
+
+				page = vm_normal_page(gate_vma, start, *pte);
+				if (!page) {
+					if (!(gup_flags & FOLL_DUMP) &&
+					    is_zero_pfn(pte_pfn(*pte)))
+						page = pte_page(*pte);
+					else {
+						pte_unmap(pte);
+						return i ? : -EFAULT;
+					}
+				}
 				pages[i] = page;
-				if (page)
-					get_page(page);
+				get_page(page);
 			}
 			pte_unmap(pte);
 			if (vmas)