author     Ken Chen <kenchen@google.com>  2007-05-09 05:33:09 -0400
committer  Linus Torvalds <torvalds@woody.linux-foundation.org>  2007-05-09 15:30:48 -0400
commit     ace4bd29c248b51db3f8a97e9b59740dc6caa074 (patch)
tree       474d6b74fcd4ab07262d624a278faf65e14ddc4d /mm/hugetlb.c
parent     ba8b45cea5f632540d561d37d94c71c07f6af1aa (diff)
fix leaky resv_huge_pages when cpuset is in use
The internal hugetlb resv_huge_pages counter can permanently leak a nonzero value in the error path of the hugetlb page fault handler when hugetlb pages are used in combination with cpusets. The leaked count can permanently trap N hugetlb pages in an unusable "reserved" state.

Steps to reproduce the bug:

(1) create two cpusets, user1 and user2
(2) reserve 50 htlb pages in cpuset user1
(3) attempt to shmget/shmat 50 htlb pages inside cpuset user2
(4) the kernel OOM-kills the user process in step 3
(5) ipcrm the shm segment

At this point resv_huge_pages will have a count of 49, even though there is no active hugetlbfs file nor hugetlb shared memory segment in the system. The leak is permanent and there is no recovery other than a system reboot. The leaked count holds that many htlb pages out of all future use in all cpusets.

The culprit is that the error path of alloc_huge_page() did not properly undo the change it made to resv_huge_pages, leaving the counter in an inconsistent state.

Signed-off-by: Ken Chen <kenchen@google.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Adam Litke <agl@us.ibm.com>
Cc: Martin Bligh <mbligh@google.com>
Acked-by: David Gibson <dwg@au1.ibm.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
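To make the accounting failure concrete, the following is a minimal user-space sketch (not kernel code) modeling the reservation bookkeeping described above. The names mirror the kernel's resv_huge_pages and free_huge_pages, but the locking, node selection, and allocation logic are deliberately simplified assumptions; only the pattern of "decrement on entry, re-increment in the error path" corresponds to the actual fix in the hunk below.

/*
 * Sketch: model of the shared-mapping reservation accounting.
 * Without the two lines under "fail:", a failed allocation from a
 * restrictive cpuset would leave resv_huge_pages stuck at 49, matching
 * the symptom described in the commit message.
 */
#include <stdbool.h>
#include <stdio.h>

static long free_huge_pages = 50;
static long resv_huge_pages = 50;	/* reserved by a shared mapping */

/* Models alloc_huge_page(): shared mappings consume a reserved page. */
static bool alloc_huge_page_model(bool shared, bool cpuset_allows)
{
	if (shared)
		resv_huge_pages--;	/* commit one reserved page */

	/* models dequeue_huge_page() failing: the cpuset forbids
	 * every node that holds free hugetlb pages */
	if (!cpuset_allows)
		goto fail;

	free_huge_pages--;
	return true;

fail:
	/* the fix: undo the reservation taken above, otherwise the
	 * count stays low forever and the page is "leaked" */
	if (shared)
		resv_huge_pages++;
	return false;
}

int main(void)
{
	/* fault on a shared hugetlb page from a cpuset with no allowed
	 * hugetlb nodes: allocation fails, reservation is restored */
	alloc_huge_page_model(true, false);
	printf("resv_huge_pages = %ld (expected 50)\n", resv_huge_pages);
	return 0;
}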
Diffstat (limited to 'mm/hugetlb.c')
-rw-r--r--  mm/hugetlb.c | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 36db012b38dd..88e708be1f64 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -140,6 +140,8 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
 	return page;
 
 fail:
+	if (vma->vm_flags & VM_MAYSHARE)
+		resv_huge_pages++;
 	spin_unlock(&hugetlb_lock);
 	return NULL;
 }