path: root/mm/mmap.c
authorJeremy Fitzhardinge <jeremy@goop.org>2009-02-11 16:04:41 -0500
committerLinus Torvalds <torvalds@linux-foundation.org>2009-02-11 17:25:37 -0500
commit9480c53e9b2aa13a06283ffb96bb8f1873ac4e9a (patch)
tree4d7949c56e6877f0aac372bebed379eda7b0e646 /mm/mmap.c
parent3abdbf90a3ffb006108c831c56b092e35483b6ec (diff)
mm: rearrange exit_mmap() to unlock before arch_exit_mmap
Christophe Saout reported [in precursor to: http://marc.info/?l=linux-kernel&m=123209902707347&w=4]:

> Note that I also saw a different issue with CONFIG_UNEVICTABLE_LRU.
> Seems like Xen tears down current->mm early on process termination, so
> that __get_user_pages in exit_mmap causes nasty messages when the
> process had any mlocked pages.  (in fact, it somehow manages to get into
> the swapping code and produces a null pointer dereference trying to get
> a swap token)

Jeremy explained:

Yes.  In the normal case under Xen, an in-use pagetable is "pinned", meaning that it is RO to the kernel, and all updates must go via hypercall (or writes are trapped and emulated, which is much the same thing).  An unpinned pagetable is not currently in use by any process, and can be directly accessed as normal RW pages.

As an optimisation at process exit time, we unpin the pagetable as early as possible (switching the process to init_mm), so that all the normal pagetable teardown can happen with direct memory accesses.

This happens in exit_mmap() -> arch_exit_mmap().  The munlocking happens a few lines below.  The obvious thing to do would be to move arch_exit_mmap() to below the munlock code, but I think we'd want to call it even if mm->mmap is NULL, just to be on the safe side.

Thus, this patch: exit_mmap() needs to unlock any locked vmas before calling arch_exit_mmap, as the latter may switch the current mm to init_mm, which would cause the former to fail.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Christophe Saout <christophe@saout.de>
Cc: Keir Fraser <keir.fraser@eu.citrix.com>
Cc: Alex Williamson <alex.williamson@hp.com>
Cc: <stable@kernel.org>		[2.6.28.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/mmap.c')
 mm/mmap.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/mm/mmap.c b/mm/mmap.c
index eb1270bebe67..00ced3ee49a8 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2084,12 +2084,8 @@ void exit_mmap(struct mm_struct *mm)
 	unsigned long end;
 
 	/* mm's last user has gone, and its about to be pulled down */
-	arch_exit_mmap(mm);
 	mmu_notifier_release(mm);
 
-	if (!mm->mmap)	/* Can happen if dup_mmap() received an OOM */
-		return;
-
 	if (mm->locked_vm) {
 		vma = mm->mmap;
 		while (vma) {
@@ -2098,7 +2094,13 @@ void exit_mmap(struct mm_struct *mm)
 			vma = vma->vm_next;
 		}
 	}
+
+	arch_exit_mmap(mm);
+
 	vma = mm->mmap;
+	if (!vma)	/* Can happen if dup_mmap() received an OOM */
+		return;
+
 	lru_add_drain();
 	flush_cache_mm(mm);
 	tlb = tlb_gather_mmu(mm, 1);
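The ordering constraint the patch enforces can be illustrated with a toy model (all names here are hypothetical, not the real kernel API): under Xen, arch_exit_mmap() unpins the pagetable, after which the mm can no longer be walked to munlock pages, so munlocking must happen first.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the exit_mmap() ordering fixed by this patch.
 * "pinned" stands in for a pagetable that is still walkable. */
struct toy_mm {
	bool pinned;            /* pagetable still usable for user access */
	unsigned long locked_vm;
};

/* Xen-style arch hook: unpins the pagetable (switches to init_mm),
 * after which the old mm can no longer be walked. */
static void toy_arch_exit_mmap(struct toy_mm *mm)
{
	mm->pinned = false;
}

/* Munlocking needs the pagetable to still be walkable; returns
 * false to model the oops the old ordering could trigger via
 * __get_user_pages(). */
static bool toy_munlock_all(struct toy_mm *mm)
{
	if (!mm->pinned)
		return false;
	mm->locked_vm = 0;
	return true;
}

/* New ordering from the patch: munlock first, arch hook after. */
static bool toy_exit_mmap(struct toy_mm *mm)
{
	bool ok = true;

	if (mm->locked_vm)
		ok = toy_munlock_all(mm);
	toy_arch_exit_mmap(mm);
	return ok;
}
```

With the old ordering (toy_arch_exit_mmap() first), toy_munlock_all() finds the pagetable already unpinned and fails, mirroring the null pointer dereference Christophe reported.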