author     Minchan Kim <minchan@kernel.org>                  2016-12-12 19:42:08 -0500
committer  Linus Torvalds <torvalds@linux-foundation.org>    2016-12-12 21:55:07 -0500
commit     4855e4a7f29d6d10b0b9c84e189c770c9a94e91e (patch)
tree       eb75748238b9fd7e2be6b7f2e885f0526f24796a /mm/page_alloc.c
parent     88ed365ea227aa28841a8d6e196c9a261c76fffd (diff)
mm: prevent double decrease of nr_reserved_highatomic
There is a race between page freeing and unreserving a highatomic pageblock.
CPU 0                                   CPU 1

    free_hot_cold_page
      mt = get_pfnblock_migratetype
      set_pcppage_migratetype(page, mt)
                                        unreserve_highatomic_pageblock
                                          spin_lock_irqsave(&zone->lock)
                                          move_freepages_block
                                          set_pageblock_migratetype(page)
                                          spin_unlock_irqrestore(&zone->lock)
    free_pcppages_bulk
      __free_one_page(mt) <- mt is stale
Because of this race, a page freed on CPU 0 can end up on the highatomic
free list even though its pageblock's type has already been changed away
from highatomic. As a result, the highatomic unreserve logic can decrease
the reserved count for the same pageblock several times, producing a
mismatch between nr_reserved_highatomic and the actual number of reserved
pageblocks.
So, this patch verifies whether the pageblock is still highatomic and
decreases the count only if it is.
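The fix boils down to a guarded decrement. A minimal sketch of the idea,
condensed from the hunk below (the surrounding free-list walk and
zone->lock handling are elided):

	/*
	 * Only adjust the reserve count while the pageblock is still
	 * marked MIGRATE_HIGHATOMIC; once its type has been changed,
	 * further free pages found in the same pageblock must not
	 * decrement the counter again.  min() clamps the decrement so
	 * a racy per-cpu drain cannot underflow the counter.
	 */
	if (get_pageblock_migratetype(page) == MIGRATE_HIGHATOMIC)
		zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
						    zone->nr_reserved_highatomic);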
Link: http://lkml.kernel.org/r/1476259429-18279-3-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Sangseok Lee <sangseok.lee@lge.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/page_alloc.c')
 mm/page_alloc.c | 24 ++++++++++++++++++------
 1 file changed, 18 insertions(+), 6 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 97170131f2ab..8cbc38f923aa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2085,13 +2085,25 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 			continue;
 
 		/*
-		 * It should never happen but changes to locking could
-		 * inadvertently allow a per-cpu drain to add pages
-		 * to MIGRATE_HIGHATOMIC while unreserving so be safe
-		 * and watch for underflows.
+		 * In the page freeing path, the migratetype change is
+		 * racy, so we can encounter several free pages of a
+		 * pageblock in this loop although we changed the
+		 * pageblock type from highatomic to ac->migratetype.
+		 * So we should adjust the count only once.
 		 */
-		zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
-					zone->nr_reserved_highatomic);
+		if (get_pageblock_migratetype(page) ==
+						MIGRATE_HIGHATOMIC) {
+			/*
+			 * It should never happen but changes to
+			 * locking could inadvertently allow a per-cpu
+			 * drain to add pages to MIGRATE_HIGHATOMIC
+			 * while unreserving so be safe and watch for
+			 * underflows.
+			 */
+			zone->nr_reserved_highatomic -= min(
+					pageblock_nr_pages,
+					zone->nr_reserved_highatomic);
+		}
 
 		/*
 		 * Convert to ac->migratetype and avoid the normal