author     Nishanth Aravamudan <nacc@us.ibm.com>           2008-04-29 03:58:26 -0400
committer  Linus Torvalds <torvalds@linux-foundation.org>  2008-04-29 11:05:58 -0400
commit     551883ae8c9c31460e796e7b1b8aa9069de268b4
tree       2f2aaef8024656b3b5e1cf9e599608e54d2e483e  /mm/hugetlb.c
parent     a41f24ea9fd6169b147c53c2392e2887cc1d9247
page allocator: explicitly retry hugepage allocations
Add __GFP_REPEAT to hugepage allocations, so that userspace does not have to
put pressure on the VM by repeatedly echoing into /proc/sys/vm/nr_hugepages
to grow the pool. With the previous patch allowing large-order __GFP_REPEAT
attempts to loop for a bit (as opposed to indefinitely), this increases the
likelihood of getting hugepages when the system experiences (or recently
experienced) load.
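For context, the "loop for a bit" behaviour from the previous patch can be
modelled in a few lines. This is an illustrative standalone sketch, not the
kernel source: should_retry() is an invented name, and the flag values and
PAGE_ALLOC_COSTLY_ORDER are copied from the 2.6.25-era headers.

/* Illustrative model of the 2.6.25-era retry decision in the page
 * allocator after the parent patch; should_retry() is an invented name. */
#include <stdbool.h>

#define PAGE_ALLOC_COSTLY_ORDER	3	/* orders above this are "costly" */
#define __GFP_REPEAT	0x400u
#define __GFP_NOFAIL	0x800u
#define __GFP_NORETRY	0x1000u

static bool should_retry(unsigned int gfp_mask, int order,
			 unsigned long pages_reclaimed)
{
	if (gfp_mask & __GFP_NORETRY)
		return false;
	if (gfp_mask & __GFP_NOFAIL)
		return true;			/* never give up */
	if (order <= PAGE_ALLOC_COSTLY_ORDER)
		return true;			/* cheap orders always retry */
	/* Costly orders with __GFP_REPEAT retry only until roughly one
	 * allocation's worth of pages has been reclaimed -- "a bit",
	 * not indefinitely. */
	return (gfp_mask & __GFP_REPEAT) &&
	       pages_reclaimed < (1UL << order);
}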
Mel tested the patchset on an x86_32 laptop. With the patches, it was easier
to use the proc interface to grow the hugepage pool. The following is the
output of a script that grows the pool as much as possible, running on
2.6.25-rc9 without the patches.
Allocating hugepages test
-------------------------
Disabling OOM Killer for current test process
Starting page count: 0
Attempt 1: 57 pages Progress made with 57 pages
Attempt 2: 73 pages Progress made with 16 pages
Attempt 3: 74 pages Progress made with 1 pages
Attempt 4: 75 pages Progress made with 1 pages
Attempt 5: 77 pages Progress made with 2 pages
77 pages was the most it allocated, but it took 5 attempts from userspace
to get there. With the 3 patches in this series applied:
Allocating hugepages test
-------------------------
Disabling OOM Killer for current test process
Starting page count: 0
Attempt 1: 75 pages Progress made with 75 pages
Attempt 2: 76 pages Progress made with 1 pages
Attempt 3: 79 pages Progress made with 3 pages
And 79 pages was the most it got. Your patches were able to allocate the
bulk of the possible pages on the first attempt.
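Mel's script itself is not part of this commit, but its logic is easy to
reconstruct from the output above. The following is a hypothetical C
equivalent, assuming only the 2.6.25-era procfs interfaces
(/proc/sys/vm/nr_hugepages and /proc/self/oom_adj) and an arbitrary
over-sized request:

/* Hypothetical reconstruction of the pool-growth test; the real script is
 * not in this commit.  Grows the hugepage pool until an attempt makes no
 * progress, printing output in the format shown above. */
#include <stdio.h>

#define NR_HUGEPAGES	"/proc/sys/vm/nr_hugepages"

static long read_pool(void)
{
	long n = 0;
	FILE *f = fopen(NR_HUGEPAGES, "r");
	if (f) {
		fscanf(f, "%ld", &n);
		fclose(f);
	}
	return n;
}

static void write_proc(const char *path, long val)
{
	FILE *f = fopen(path, "w");
	if (f) {
		fprintf(f, "%ld\n", val);
		fclose(f);
	}
}

int main(void)
{
	long prev, cur;
	int attempt;

	printf("Allocating hugepages test\n-------------------------\n");
	printf("Disabling OOM Killer for current test process\n");
	write_proc("/proc/self/oom_adj", -17);	/* 2.6.25-era interface */

	prev = read_pool();
	printf("Starting page count: %ld\n", prev);

	for (attempt = 1; ; attempt++) {
		/* Over-ask; the kernel allocates as many as it can. */
		write_proc(NR_HUGEPAGES, prev + 1000);
		cur = read_pool();
		if (cur <= prev)
			break;		/* no progress made: stop */
		printf("Attempt %d: %ld pages  Progress made with %ld pages\n",
		       attempt, cur, cur - prev);
		prev = cur;
	}
	return 0;
}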
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Tested-by: Mel Gorman <mel@csn.ul.ie>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/hugetlb.c')
 -rw-r--r--  mm/hugetlb.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2c37c67ed8c9..bbf953eeb58b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -199,7 +199,8 @@ static struct page *alloc_fresh_huge_page_node(int nid)
 	struct page *page;
 
 	page = alloc_pages_node(nid,
-		htlb_alloc_mask|__GFP_COMP|__GFP_THISNODE|__GFP_NOWARN,
+		htlb_alloc_mask|__GFP_COMP|__GFP_THISNODE|
+						__GFP_REPEAT|__GFP_NOWARN,
 		HUGETLB_PAGE_ORDER);
 	if (page) {
 		if (arch_prepare_hugepage(page)) {
@@ -294,7 +295,8 @@ static struct page *alloc_buddy_huge_page(struct vm_area_struct *vma,
 	}
 	spin_unlock(&hugetlb_lock);
 
-	page = alloc_pages(htlb_alloc_mask|__GFP_COMP|__GFP_NOWARN,
+	page = alloc_pages(htlb_alloc_mask|__GFP_COMP|
+					__GFP_REPEAT|__GFP_NOWARN,
 		HUGETLB_PAGE_ORDER);
 
 	spin_lock(&hugetlb_lock);