author		Michal Hocko <mhocko@suse.cz>	2014-10-20 12:12:32 -0400
committer	Rafael J. Wysocki <rafael.j.wysocki@intel.com>	2014-10-21 17:44:21 -0400
commit		5695be142e203167e3cb515ef86a88424f3524eb
tree		2c2b21d91658f23b3290db56f726959863ee8248 /mm/oom_kill.c
parent		c05eb32f472fb9f7f474c20ff6fa5bfe0cbedc05
OOM, PM: OOM killed task shouldn't escape PM suspend
The PM freezer relies on having all tasks frozen by the time devices are
getting frozen, so that no task will touch them while they are being
frozen. The OOM killer, however, is allowed to kill an already frozen
task in order to handle an OOM situation. To protect against such late
wake-ups, the OOM killer is disabled after all tasks are frozen. This
still leaves a window open in which a killed task did not manage to die
by the time freeze_processes finishes.
Reduce the race window by checking all tasks after the OOM killer has
been disabled. Unfortunately this is still not completely race free,
because oom_killer_disable cannot stop an already ongoing OOM kill, so
a task might still wake up from the fridge and get killed without
freeze_processes noticing. Full synchronization of the OOM killer and
the freezer is, however, too heavyweight for this highly unlikely case.
Introduce and check an oom_kills counter which gets incremented early,
when the allocator enters the __alloc_pages_may_oom path, and re-check
all tasks only if the counter changes during the freezing attempt. The
counter is updated this early to reduce the race window, since the
allocator has already checked oom_killer_disabled, which is set by the
PM-freezing code. A false positive will push the PM freezer into a slow
path, but that is not a big deal.
Changes since v1:
- pushed the re-check loop out of freeze_processes into
  check_frozen_processes and inverted the condition to make the code
  more readable, as per Rafael
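
For reference, a minimal sketch of the consumer side described above.
The real change lives in kernel/power/process.c and is outside the
mm/oom_kill.c diffstat below, so the wrapper name
freeze_user_tasks_checked and the exact placement are illustrative
only; check_frozen_processes and the snapshot/compare of
oom_kills_count follow the changelog rather than code shown in this
diff:

	#include <linux/oom.h>
	#include <linux/sched.h>
	#include <linux/freezer.h>

	/*
	 * Re-check that every task is still frozen once the OOM killer
	 * has been disabled; only worth calling when oom_kills_count()
	 * changed while tasks were being frozen.
	 */
	static bool check_frozen_processes(void)
	{
		struct task_struct *g, *p;
		bool ret = true;

		read_lock(&tasklist_lock);
		for_each_process_thread(g, p) {
			if (p != current && !freezer_should_skip(p) &&
			    !frozen(p)) {
				ret = false;
				goto done;
			}
		}
	done:
		read_unlock(&tasklist_lock);
		return ret;
	}

	/*
	 * Hypothetical wrapper showing how freeze_processes() would use
	 * the counter: snapshot it before freezing, compare after the
	 * OOM killer has been disabled, and fail the freeze if an OOM
	 * kill raced with us. try_to_freeze_tasks() comes from
	 * kernel/power/power.h.
	 */
	static int freeze_user_tasks_checked(void)
	{
		int oom_kills_saved = oom_kills_count();
		int error;

		error = try_to_freeze_tasks(true);
		if (error)
			return error;

		oom_killer_disable();

		/*
		 * An OOM kill might have happened while tasks were being
		 * frozen and the victim might still be on its way out,
		 * so double check.
		 */
		if (oom_kills_count() != oom_kills_saved &&
		    !check_frozen_processes())
			return -EBUSY;

		return 0;
	}

On -EBUSY the caller would treat this like any other failed freeze and
thaw the tasks again.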
Fixes: f660daac474c6f (oom: thaw threads if oom killed thread is frozen before deferring)
Cc: 3.2+ <stable@vger.kernel.org> # 3.2+
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Diffstat (limited to 'mm/oom_kill.c')
-rw-r--r--	mm/oom_kill.c	17
1 file changed, 17 insertions(+), 0 deletions(-)
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index bbf405a3a18f..5340f6b91312 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -404,6 +404,23 @@ static void dump_header(struct task_struct *p, gfp_t gfp_mask, int order,
 	dump_tasks(memcg, nodemask);
 }
 
+/*
+ * Number of OOM killer invocations (including memcg OOM killer).
+ * Primarily used by PM freezer to check for potential races with
+ * OOM killed frozen task.
+ */
+static atomic_t oom_kills = ATOMIC_INIT(0);
+
+int oom_kills_count(void)
+{
+	return atomic_read(&oom_kills);
+}
+
+void note_oom_kill(void)
+{
+	atomic_inc(&oom_kills);
+}
+
 #define K(x) ((x) << (PAGE_SHIFT-10))
 /*
  * Must be called while holding a reference to p, which will be released upon
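
The declarations and the producer-side call are outside this diffstat
as well. As a rough sketch, assuming the conventional split between
include/linux/oom.h and the allocator's OOM slow path (the exact call
site is not shown in this diff):

	/*
	 * include/linux/oom.h: export the new helpers to the allocator
	 * and the PM freezer (sketch, not part of the diff above).
	 */
	extern int oom_kills_count(void);
	extern void note_oom_kill(void);

	/*
	 * Producer side: per the changelog, the allocator's
	 * __alloc_pages_may_oom path would call note_oom_kill() as early
	 * as possible, before the OOM killer gets a chance to pick and
	 * wake a frozen victim, e.g.:
	 *
	 *	note_oom_kill();
	 *	out_of_memory(...);
	 */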