author     Johannes Weiner <hannes@cmpxchg.org>            2013-12-18 20:08:47 -0500
committer  Linus Torvalds <torvalds@linux-foundation.org>  2013-12-18 22:04:51 -0500
commit     73f038b863dfe98acabc7c36c17342b84ad52e94 (patch)
tree       4cd2132b22b48308bf3676f80fdce1f7691de06b /mm
parent     b0943d61b8fa420180f92f64ef67662b4f6cc493 (diff)
mm: page_alloc: exclude unreclaimable allocations from zone fairness policy
Dave Hansen noted a regression in a microbenchmark that loops around open() and close() on an 8-node NUMA machine and bisected it down to commit 81c0a2bb515f ("mm: page_alloc: fair zone allocator policy"). That change forces the slab allocations of the file descriptor to spread out to all 8 nodes, causing remote references in the page allocator and slab.

The round-robin policy is only there to provide fairness among memory allocations that are reclaimed involuntarily based on pressure in each zone. It does not make sense to apply it to unreclaimable kernel allocations that are freed manually, in this case instantly after the allocation, and incur the remote reference costs twice for no reason. Only round-robin allocations that are usually freed through page reclaim or slab shrinking.

Bisected by Dave Hansen.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--  mm/page_alloc.c  3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 580a5f075ed0..f861d0257e90 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1920,7 +1920,8 @@ zonelist_scan:
 		 * back to remote zones that do not partake in the
 		 * fairness round-robin cycle of this zonelist.
 		 */
-		if (alloc_flags & ALLOC_WMARK_LOW) {
+		if ((alloc_flags & ALLOC_WMARK_LOW) &&
+		    (gfp_mask & GFP_MOVABLE_MASK)) {
 			if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0)
 				continue;
 			if (zone_reclaim_mode &&
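
For illustration, here is a minimal user-space sketch of the gating logic this patch introduces. The flag values below (__GFP_MOVABLE, __GFP_RECLAIMABLE, ALLOC_WMARK_LOW) are stand-ins chosen for the sketch, not the kernel's definitions; GFP_MOVABLE_MASK is assumed to be (__GFP_RECLAIMABLE | __GFP_MOVABLE), as in gfp.h of this era.

#include <stdio.h>

/*
 * Stand-in flag values for this sketch only; the real definitions
 * live in include/linux/gfp.h and mm/internal.h.
 */
#define __GFP_MOVABLE		0x08u
#define __GFP_RECLAIMABLE	0x10u
#define GFP_MOVABLE_MASK	(__GFP_RECLAIMABLE | __GFP_MOVABLE)
#define ALLOC_WMARK_LOW		0x01u

/*
 * Mirrors the patched condition: the per-zone NR_ALLOC_BATCH check
 * now applies only to low-watermark allocations that reclaim can
 * later free (movable or reclaimable pages). Unreclaimable kernel
 * allocations fall through and stay on the local node.
 */
static int subject_to_fairness(unsigned int alloc_flags, unsigned int gfp_mask)
{
	return (alloc_flags & ALLOC_WMARK_LOW) &&
	       (gfp_mask & GFP_MOVABLE_MASK);
}

int main(void)
{
	/* A movable allocation (e.g. a user page fault): round-robined. */
	printf("movable page:       %d\n",
	       subject_to_fairness(ALLOC_WMARK_LOW, __GFP_MOVABLE));
	/* An unreclaimable kernel allocation: exempt from the fairness batch. */
	printf("unreclaimable page: %d\n",
	       subject_to_fairness(ALLOC_WMARK_LOW, 0));
	return 0;
}

In the regressing workload, the slab pages backing the file descriptor are presumably allocated with neither __GFP_MOVABLE nor __GFP_RECLAIMABLE set, so after this change they take the second branch above: they no longer exhaust the local zone's allocation batch and get spread to remote nodes.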