author		Mel Gorman <mel@csn.ul.ie>	2010-05-24 17:32:32 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2010-05-25 11:07:00 -0400
commit		4f92e2586b43a2402e116055d4edda704f911b5b (patch)
tree		6a765ebeba951c02a7878bcea52a4769ad2e45c2 /mm
parent		5e7719058079a1423ccce56148b0aaa56b2df821 (diff)
mm: compaction: defer compaction using an exponential backoff when compaction fails
The fragmentation index may indicate that a failure is due to external
fragmentation, but after a compaction run completes it is still possible
for an allocation to fail. There are two obvious reasons why:

  o Page migration cannot move all pages, so fragmentation remains
  o A suitable page may exist, but watermarks are not met

In the event of compaction followed by an allocation failure, this patch
defers further compaction in the zone for the next
(1 << compact_defer_shift) compaction attempts. If the next compaction
attempt also fails, compact_defer_shift is increased, up to a maximum of
6. If compaction succeeds, the defer counters are reset again.

The zone that is deferred is the first zone in the zonelist - i.e. the
preferred zone. To defer compaction in the other zones, the information
would need to be stored in the zonelist or implemented similarly to the
zonelist_cache. This would impact the fast paths and is not justified at
this time.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Cc: Rik van Riel <riel@redhat.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
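[Editor's note] For reference, below is a minimal sketch of the exponential-backoff deferral helpers this hunk relies on, written from the semantics described in the commit message. It assumes the compact_considered and compact_defer_shift counters added to struct zone elsewhere in this series; the real defer_compaction() and compaction_deferred() helpers live in include/linux/compaction.h and may differ in detail.

/* Sketch only: skip compaction for at most 1 << 6 attempts in a row */
#define COMPACT_MAX_DEFER_SHIFT 6

/* Called after compaction ran but the allocation still failed */
static inline void defer_compaction(struct zone *zone)
{
	zone->compact_considered = 0;
	zone->compact_defer_shift++;

	/* Exponential backoff: double the skip window on each failure */
	if (zone->compact_defer_shift > COMPACT_MAX_DEFER_SHIFT)
		zone->compact_defer_shift = COMPACT_MAX_DEFER_SHIFT;
}

/* Returns true while this zone should still skip direct compaction */
static inline bool compaction_deferred(struct zone *zone)
{
	unsigned long defer_limit = 1UL << zone->compact_defer_shift;

	if (++zone->compact_considered > defer_limit)
		zone->compact_considered = defer_limit;

	return zone->compact_considered < defer_limit;
}

On success, the allocator resets both counters directly (as the hunk below does for the preferred zone), so a zone that compacts successfully is no longer penalised.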
Diffstat (limited to 'mm')
-rw-r--r--	mm/page_alloc.c	5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cd88a860f088..95ad42de5a87 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1769,7 +1769,7 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 {
 	struct page *page;
 
-	if (!order)
+	if (!order || compaction_deferred(preferred_zone))
 		return NULL;
 
 	*did_some_progress = try_to_compact_pages(zonelist, order, gfp_mask,
@@ -1785,6 +1785,8 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 				alloc_flags, preferred_zone,
 				migratetype);
 		if (page) {
+			preferred_zone->compact_considered = 0;
+			preferred_zone->compact_defer_shift = 0;
 			count_vm_event(COMPACTSUCCESS);
 			return page;
 		}
@@ -1795,6 +1797,7 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 		 * but not enough to satisfy watermarks.
 		 */
 		count_vm_event(COMPACTFAIL);
+		defer_compaction(preferred_zone);
 
 		cond_resched();
 	}