author     Michal Hocko <mhocko@suse.cz>                      2015-02-11 18:26:24 -0500
committer  Linus Torvalds <torvalds@linux-foundation.org>     2015-02-11 20:06:03 -0500
commit     c32b3cbe0d067a9cfae85aa70ba1e97ceba0ced7
tree       ea807199ce92eed21239e5279033dbeb83b9dde1 /mm/page_alloc.c
parent     401e4a7cf67d993bae02efdf1a234d7e2dbd2df2
oom, PM: make OOM detection in the freezer path raceless
Commit 5695be142e20 ("OOM, PM: OOM killed task shouldn't escape PM
suspend") left a race window: the OOM killer can call note_oom_kill()
after freeze_processes() has already checked the counter. The race
window is quite small and unlikely to be hit, so a partial solution was
deemed sufficient at the time of submission.
Tejun wasn't happy about this partial solution, though, and insisted on
a full one. That requires full mutual exclusion between the OOM killer
and the freezer's task freezing. This patch provides it by introducing
an oom_sem RW lock and turning oom_killer_disable() into a full OOM
barrier. The oom_killer_disabled check is moved from the allocation path
to the OOM level, and oom_sem is taken for reading around both the check
and the whole OOM invocation.
oom_killer_disable() takes oom_sem for writing, so it waits for all
currently running OOM killer invocations to finish. It then disables all
further OOMs by setting oom_killer_disabled and checks for any remaining
OOM victims. Victims are counted via mark_tsk_oom_victim() and
unmark_oom_victim() respectively; the last victim wakes up all waiters
enqueued by oom_killer_disable(). This function therefore acts as a full
OOM barrier.
The page fault path is now covered as well, although it was previously
assumed to be safe. As per Tejun, "We used to have freezing points deep
in file system code which may be reachable from page fault," so it is
better and more robust not to rely on freezing points here. The same
applies to the memcg OOM killer.
out_of_memory() now tells the caller whether the OOM killer was allowed
to trigger, and the callers are expected to handle the situation. The
page allocation path simply fails the allocation, same as before. The
page fault path will retry the fault (more on that later), and the sysrq
OOM trigger will simply complain to the log.
Normally there won't be any unfrozen user tasks left after
try_to_freeze_tasks(), so the function will not block. But if an OOM
killer raced with try_to_freeze_tasks() and the OOM victim hasn't
finished yet, then we have to wait for it. This should complete in
finite time, though, because:
 - the victim cannot loop in the page fault handler (it would die
   on the way out from the exception)
 - it cannot loop in the page allocator, because all further
   allocations would fail and __GFP_NOFAIL allocations are not
   acceptable at this stage
 - it shouldn't be blocked on any locks held by frozen tasks
   (try_to_freeze() expects a lockless context), and kernel threads
   and workqueues are not frozen yet
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Suggested-by: Tejun Heo <tj@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/page_alloc.c')
-rw-r--r--  mm/page_alloc.c | 17
1 file changed, 2 insertions(+), 15 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 641d5a9a8617..134e25525044 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -244,8 +244,6 @@ void set_pageblock_migratetype(struct page *page, int migratetype)
 					PB_migrate, PB_migrate_end);
 }
 
-bool oom_killer_disabled __read_mostly;
-
 #ifdef CONFIG_DEBUG_VM
 static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
 {
@@ -2317,9 +2315,6 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 
 	*did_some_progress = 0;
 
-	if (oom_killer_disabled)
-		return NULL;
-
 	/*
 	 * Acquire the per-zone oom lock for each zone. If that
 	 * fails, somebody else is making progress for us.
@@ -2331,14 +2326,6 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 	}
 
 	/*
-	 * PM-freezer should be notified that there might be an OOM killer on
-	 * its way to kill and wake somebody up. This is too early and we might
-	 * end up not killing anything but false positives are acceptable.
-	 * See freeze_processes.
-	 */
-	note_oom_kill();
-
-	/*
 	 * Go through the zonelist yet one more time, keep very high watermark
 	 * here, this is only to catch a parallel oom killing, we must fail if
 	 * we're still under heavy pressure.
@@ -2372,8 +2359,8 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 		goto out;
 	}
 	/* Exhausted what can be done so it's blamo time */
-	out_of_memory(ac->zonelist, gfp_mask, order, ac->nodemask, false);
-	*did_some_progress = 1;
+	if (out_of_memory(ac->zonelist, gfp_mask, order, ac->nodemask, false))
+		*did_some_progress = 1;
 out:
 	oom_zonelist_unlock(ac->zonelist, gfp_mask);
 	return page;