author     Peter Zijlstra <a.p.zijlstra@chello.nl>           2011-05-24 20:12:04 -0400
committer  Linus Torvalds <torvalds@linux-foundation.org>    2011-05-25 11:39:17 -0400
commit     97a894136f29802da19a15541de3c019e1ca147e (patch)
tree       1fd3f92ba92a37d5d8527a1f41458091d0a944dc /mm/mmap.c
parent     e4c70a6629f9c74c4b0de258a3951890e9047c82 (diff)
mm: Remove i_mmap_lock lockbreak
Hugh says:
"The only significant loser, I think, would be page reclaim (when
concurrent with truncation): could spin for a long time waiting for
the i_mmap_mutex it expects would soon be dropped? "
Counterpoints:
- CPU contention makes the spin stop (need_resched())
- zapping pages should free them at a higher rate than reclaim
  ever can
I think the simplification of the truncate code is definitely worth it.
Effectively reverts: 2aa15890f3c ("mm: prevent concurrent
unmap_mapping_range() on the same inode") and takes out the code that
caused its problem.
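
For context on what "lockbreak" means here: the old code would drop a
contended i_mmap_lock mid-scan and restart, which is the rescan machinery
this patch deletes. The following is a minimal userspace sketch of that
pattern, not kernel code: a pthread spinlock stands in for i_mmap_lock,
should_yield() is a hypothetical stand-in for need_resched()/spin_needbreak(),
and zap_one_batch() is an illustrative placeholder.

#include <pthread.h>
#include <sched.h>
#include <stdbool.h>

static pthread_spinlock_t lock;		/* stands in for mapping->i_mmap_lock */

static bool should_yield(void)		/* hypothetical need_resched() stand-in */
{
	return false;
}

static void zap_one_batch(int i)	/* illustrative placeholder */
{
	(void)i;
}

/* Before this patch: drop the lock when a waiter appears, then rescan. */
static void zap_with_lockbreak(void)
{
restart:
	pthread_spin_lock(&lock);
	for (int i = 0; i < 1024; i++) {
		zap_one_batch(i);
		if (should_yield()) {
			/* Let the waiter in ... */
			pthread_spin_unlock(&lock);
			sched_yield();
			/* ... then restart: the tree may have changed. */
			goto restart;
		}
	}
	pthread_spin_unlock(&lock);
}

/* After this patch: hold the lock across the whole scan; contenders spin. */
static void zap_simple(void)
{
	pthread_spin_lock(&lock);
	for (int i = 0; i < 1024; i++)
		zap_one_batch(i);
	pthread_spin_unlock(&lock);
}

int main(void)
{
	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	zap_with_lockbreak();
	zap_simple();
	pthread_spin_destroy(&lock);
	return 0;
}

Dropping the rescan is what lets the vm_truncate_count bookkeeping in the
diff below disappear.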
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Miller <davem@davemloft.net>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Namhyung Kim <namhyung@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/mmap.c')
-rw-r--r--  mm/mmap.c  13
1 file changed, 1 insertion(+), 12 deletions(-)
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -445,10 +445,8 @@ static void vma_link(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (vma->vm_file)
 		mapping = vma->vm_file->f_mapping;
 
-	if (mapping) {
+	if (mapping)
 		spin_lock(&mapping->i_mmap_lock);
-		vma->vm_truncate_count = mapping->truncate_count;
-	}
 
 	__vma_link(mm, vma, prev, rb_link, rb_parent);
 	__vma_link_file(vma);
@@ -558,16 +556,7 @@ again:	remove_next = 1 + (end > next->vm_end);
 		if (!(vma->vm_flags & VM_NONLINEAR))
 			root = &mapping->i_mmap;
 		spin_lock(&mapping->i_mmap_lock);
-		if (importer &&
-		    vma->vm_truncate_count != next->vm_truncate_count) {
-			/*
-			 * unmap_mapping_range might be in progress:
-			 * ensure that the expanding vma is rescanned.
-			 */
-			importer->vm_truncate_count = 0;
-		}
 		if (insert) {
-			insert->vm_truncate_count = vma->vm_truncate_count;
 			/*
 			 * Put into prio_tree now, so instantiated pages
 			 * are visible to arm/parisc __flush_dcache_page
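
For reference, the handshake implemented by the deleted lines can be
sketched as follows. The two counter fields mirror the removed code; the
stub types and the function names (begin_truncate, link_vma,
maybe_flag_for_rescan) are invented for illustration and do not exist in
the kernel.

/* Sketch of the removed truncate_count handshake; stub types, not kernel code. */
struct mapping_stub {
	unsigned int truncate_count;	/* bumped when a range unmap begins */
};

struct vma_stub {
	unsigned int vm_truncate_count;	/* generation seen at link time */
};

/* unmap_mapping_range() opened a new "generation" before scanning vmas. */
static void begin_truncate(struct mapping_stub *mapping)
{
	mapping->truncate_count++;
}

/* vma_link() snapshotted the generation under i_mmap_lock (hunk 1 above). */
static void link_vma(struct vma_stub *vma, struct mapping_stub *mapping)
{
	vma->vm_truncate_count = mapping->truncate_count;
}

/*
 * vma_adjust() compared generations (hunk 2 above): a mismatch meant a
 * range unmap might be mid-scan, so the expanding vma was flagged with
 * count 0 to force a rescan.
 */
static void maybe_flag_for_rescan(struct vma_stub *vma, struct vma_stub *next,
				  struct vma_stub *importer)
{
	if (importer && vma->vm_truncate_count != next->vm_truncate_count)
		importer->vm_truncate_count = 0;
}

int main(void)
{
	struct mapping_stub m = { 0 };
	struct vma_stub a, b;

	link_vma(&a, &m);
	begin_truncate(&m);		/* truncation starts between the two links */
	link_vma(&b, &m);
	maybe_flag_for_rescan(&a, &b, &a);	/* generations differ: a is flagged */
	return 0;
}

With the lockbreak gone, none of this bookkeeping is needed: i_mmap_lock is
simply held for the duration of the operation, which is the whole
simplification.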