path: root/mm
author     Vlastimil Babka <vbabka@suse.cz>                     2017-01-24 18:18:35 -0500
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>     2017-02-01 02:33:04 -0500
commit     d1656c5aef4d72f03a7833d07a378c8f604b8307 (patch)
tree       97129643cd0e68b752f7d9d0cc067d19c88020e3 /mm
parent     ade7afe9dca6b13919f88abd38eefe32f22eaeb3 (diff)
mm, page_alloc: fix fast-path race with cpuset update or removal
commit 16096c25bf0ca5d87e4fa6ec6108ba53feead212 upstream.

Ganapatrao Kulkarni reported that the LTP test cpuset01 in stress mode triggers the OOM killer within a few seconds, despite lots of free memory. The test repeatedly faults in memory in one process inside a cpuset, while another process switches the cpuset's allowed nodes between 0 and 1.

One possible cause: in the fast path we find the preferred zoneref according to the current mems_allowed, so it may point into the middle of the zonelist, skipping e.g. the zones of node 1 completely. If mems_allowed is then updated to contain only node 1, we never reach those zones in the zonelist and trigger OOM before checking the cpuset_mems_cookie.

This patch fixes this particular case by redoing the preferred zoneref search when we switch back to the original nodemask. The condition is also changed slightly so that we do not miss the case where the last non-root cpuset is removed. Note that this is not a full fix; more patches will follow.

Link: http://lkml.kernel.org/r/20170120103843.24587-3-vbabka@suse.cz
Fixes: 682a3385e773 ("mm, page_alloc: inline the fast path of the zonelist iterator")
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reported-by: Ganapatrao Kulkarni <gpkulkarni@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'mm')
-rw-r--r--  mm/page_alloc.c  10
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 593a11d8bc6b..dedadb4a779f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3783,9 +3783,17 @@ retry_cpuset:
 	/*
 	 * Restore the original nodemask if it was potentially replaced with
 	 * &cpuset_current_mems_allowed to optimize the fast-path attempt.
+	 * Also recalculate the starting point for the zonelist iterator or
+	 * we could end up iterating over non-eligible zones endlessly.
 	 */
-	if (cpusets_enabled())
+	if (unlikely(ac.nodemask != nodemask)) {
 		ac.nodemask = nodemask;
+		ac.preferred_zoneref = first_zones_zonelist(ac.zonelist,
+						ac.high_zoneidx, ac.nodemask);
+		if (!ac.preferred_zoneref->zone)
+			goto no_zone;
+	}
+
 	page = __alloc_pages_slowpath(alloc_mask, order, &ac);
 
 no_zone: