author	Mel Gorman <mgorman@techsingularity.net>	2015-11-06 19:28:09 -0500
committer	Linus Torvalds <torvalds@linux-foundation.org>	2015-11-06 20:50:42 -0500
commit	e2b19197ff9dc46f3e3888f273c4395f9e5a9856
tree	e7838d95e5ea74d27560fcbc029d45b973a7989a /mm/page_alloc.c
parent	db2a0dd7a43de595d3f0542986bb17ccb6cc364c
mm, page_alloc: remove unnecessary parameter from zone_watermark_ok_safe
Overall, the intent of this series is to remove the zonelist cache which was introduced to avoid high overhead in the page allocator. Once this is done, it is necessary to reduce the cost of watermark checks.

The series starts with minor micro-optimisations.

Next it notes that GFP flags that affect watermark checks are abused. __GFP_WAIT historically identified callers that could not sleep and could access reserves. This was later abused to identify callers that simply prefer to avoid sleeping and have other options. A patch distinguishes between atomic callers, high-priority callers and those that simply wish to avoid sleep.

The zonelist cache has been around for a long time but it is of dubious merit, with a lot of complexity and some issues that are explained. The most important issue is that a failed THP allocation can cause a zone to be treated as "full". This potentially causes unnecessary stalls, reclaim activity or remote fallbacks. The issues could be fixed but it's not worth it. The series places a small number of other micro-optimisations on top before examining GFP flags and watermarks.

High-order watermark enforcement can cause high-order allocations to fail even though pages are free. The watermark checks both protect high-order atomic allocations and make kswapd aware of high-order pages, but this can be handled much better using migrate types. This series uses page grouping by mobility to reserve pageblocks for high-order allocations, with the size of the reservation depending on demand. kswapd awareness is maintained by examining the free lists. By patch 12 in this series, there are no high-order watermark checks while preserving the properties that motivated the introduction of the watermark checks.

This patch (of 10):

No user of zone_watermark_ok_safe() specifies alloc_flags. This patch removes the unnecessary parameter.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Christoph Lameter <cl@linux.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
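To make the interface change concrete, here is a hedged sketch of how a typical call site reads before and after the patch. The surrounding expression approximates the kswapd-side usage in mm/vmscan.c of that era; it is an illustration, not a verbatim quote of the tree.

	/* Before: callers had to supply alloc_flags even though every
	 * user passed a literal 0 (sketch of a mm/vmscan.c-style
	 * call site, shown for illustration only): */
	if (!zone_watermark_ok_safe(zone, order,
				    high_wmark_pages(zone) + balance_gap,
				    classzone_idx, 0))
		return false;

	/* After: the dead argument is gone; zone_watermark_ok_safe()
	 * now passes 0 to __zone_watermark_ok() itself: */
	if (!zone_watermark_ok_safe(zone, order,
				    high_wmark_pages(zone) + balance_gap,
				    classzone_idx))
		return false;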
Diffstat (limited to 'mm/page_alloc.c')
-rw-r--r--	mm/page_alloc.c	5
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 446bb36ee59d..d73c346d91b3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2249,6 +2249,7 @@ static bool __zone_watermark_ok(struct zone *z, unsigned int order,
 		min -= min / 2;
 	if (alloc_flags & ALLOC_HARDER)
 		min -= min / 4;
+
 #ifdef CONFIG_CMA
 	/* If allocation can't use CMA areas don't use free CMA pages */
 	if (!(alloc_flags & ALLOC_CMA))
@@ -2278,14 +2279,14 @@ bool zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 }
 
 bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
-			unsigned long mark, int classzone_idx, int alloc_flags)
+			unsigned long mark, int classzone_idx)
 {
 	long free_pages = zone_page_state(z, NR_FREE_PAGES);
 
 	if (z->percpu_drift_mark && free_pages < z->percpu_drift_mark)
 		free_pages = zone_page_state_snapshot(z, NR_FREE_PAGES);
 
-	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
+	return __zone_watermark_ok(z, order, mark, classzone_idx, 0,
 				free_pages);
 }
 