author    Michal Hocko <mhocko@suse.cz>  2012-10-08 19:33:31 -0400
committer Linus Torvalds <torvalds@linux-foundation.org>  2012-10-09 03:22:57 -0400
commit    36e4f20af833d1ce196e6a4ade05dc26c44652d1 (patch)
tree      122f06a7e2f54e782d4eb765f48217a5d0333226 /mm
parent    027ef6c87853b0a9df53175063028edb4950d476 (diff)
hugetlb: do not use vma_hugecache_offset() for vma_prio_tree_foreach
Commit 0c176d52b0b2 ("mm: hugetlb: fix pgoff computation when unmapping page from vma") fixed the pgoff calculation, but it replaced it with vma_hugecache_offset(), which is not appropriate for offsets used with vma_prio_tree_foreach(), because that function expects an index in page units rather than in huge_page_shift units.

Johannes said:

: The resulting index may not be too big, but it can be too small: assume
: hpage size of 2M and the address to unmap to be 0x200000.  This is regular
: page index 512 and hpage index 1.  If you have a VMA that maps the file
: only starting at the second huge page, that VMA's vm_pgoff will be 512 but
: you ask for offset 1 and miss it even though it does map the page of
: interest.  hugetlb_cow() will try to unmap, miss the vma, and retry the
: cow until the allocation succeeds or the skipped vma(s) go away.

Signed-off-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Hillf Danton <dhillf@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--  mm/hugetlb.c  3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8536741f069b..de5d1dcf34fe 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2480,7 +2480,8 @@ static int unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * from page cache lookup which is in HPAGE_SIZE units.
 	 */
 	address = address & huge_page_mask(h);
-	pgoff = vma_hugecache_offset(h, vma, address);
+	pgoff = ((address - vma->vm_start) >> PAGE_SHIFT) +
+			vma->vm_pgoff;
 	mapping = vma->vm_file->f_dentry->d_inode->i_mapping;
 
 	/*