author     Paul Jackson <pj@sgi.com>              2005-09-06 18:18:09 -0400
committer  Linus Torvalds <torvalds@g5.osdl.org>  2005-09-07 19:57:39 -0400
commit     a49335cceab8afb6603152fcc3f7d3b6677366ca (patch)
tree       83f8c06d781a6de77f0b34ec14577bba1e410ac6 /mm/oom_kill.c
parent     f68f447e8389de9a62e3e80c3c5823cce484c2e5 (diff)
[PATCH] cpusets: oom_kill tweaks
This patch series extends the use of the cpuset attribute 'mem_exclusive' to support cpuset configurations that: 1) allow GFP_KERNEL allocations to come from a potentially larger set of memory nodes than GFP_USER allocations, and 2) can constrain the oom killer to tasks running in cpusets in a specified subtree of the cpuset hierarchy.

Here's an example usage scenario. For a few hours or more, a large NUMA system at a university is to be divided in two halves, with a bunch of student jobs running in half the system under some form of batch manager, and with a big research project running in the other half. Each of the student jobs is placed in a small cpuset, but should share the classic Unix time-share facilities, such as buffered pages of files in /bin and /usr/lib. The big research project wants no interference whatsoever from the student jobs, and has highly tuned, unusual memory and i/o patterns that intend to make full use of all the main memory on the nodes available to it.

In this example, we have two big sibling cpusets, one of which is further divided into a more dynamic set of child cpusets. We want kernel memory allocations constrained by the two big cpusets, and user allocations constrained by the smaller child cpusets where present. And we require that the oom killer not operate across the two halves of this system, or else the first time a student job runs amuck, the big research project will likely be first in line to get shot.

Tweaking /proc/<pid>/oom_adj is not ideal -- if the big research project really does run amuck allocating memory, it should be shot, not some other task outside the research project's mem_exclusive cpuset.

I propose to extend the use of the 'mem_exclusive' flag of cpusets to manage such scenarios. Let memory allocations for user space (GFP_USER) be constrained by a task's current cpuset, but let memory allocations for kernel space (GFP_KERNEL) be constrained by the nearest mem_exclusive ancestor of the current cpuset, even though kernel space allocations will still _prefer_ to remain within the current task's cpuset if memory is easily available.

Let the oom killer be constrained to consider only tasks that are in overlapping mem_exclusive cpusets (it won't help much to kill a task that normally cannot allocate memory on any of the same nodes as the ones on which the current task can allocate).

The current constraints imposed on setting mem_exclusive are unchanged. A cpuset may only be mem_exclusive if its parent is also mem_exclusive, and a mem_exclusive cpuset may not overlap any of its siblings' memory nodes.

This patch series was presented on linux-mm in early July 2005, though it did not generate much feedback at that time. It has been built for a variety of arches using cross tools, and built, booted and tested for function on SN2 (ia64).

There are 4 patches in this set:
1) Some minor cleanup, and some improvements to the code layout of one routine to make subsequent patches cleaner.
2) Add another GFP flag - __GFP_HARDWALL. It marks memory requests for USER space, which are tightly confined by the current task's cpuset.
3) Now memory requests (such as KERNEL) that are not marked HARDWALL can, if short on memory, look in the potentially larger pool of memory defined by the nearest mem_exclusive ancestor cpuset of the current task's cpuset.
4) Finally, modify the oom killer to skip any task whose mem_exclusive cpuset doesn't overlap ours.

Patch (1), the one time I looked on an SN2 (ia64) build, actually saved 32 bytes of kernel text space.
Patch (2) has no effect on the size of kernel text space (it just adds a preprocessor flag). Patches (3) and (4) added about 600 bytes each of kernel text space, mostly in kernel/cpuset.c, which matters only if CONFIG_CPUSETS is enabled.

This patch:

This patch applies a few comment and code cleanups to mm/oom_kill.c prior to applying a few small patches to improve cpuset management of memory placement.

The comment changed in oom_kill.c was seriously misleading. The code layout change in select_bad_process() makes room for adding another condition on which a process can be spared the oom killer (see the subsequent cpuset_nodes_overlap patch for this addition; a sketch of such an extra check follows the diff below).

Also fixed a couple of typos and misspellings that bugged me while I was here.

This patch should have no material effect.

Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
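To make the intended GFP_USER/GFP_KERNEL split concrete, here is a minimal standalone sketch of the policy described above. It is not the kernel implementation (that lands in kernel/cpuset.c and the page allocator in patches 2 and 3); the struct, the HARDWALL bit and node_allowed() are invented names that only model the semantics: a hardwalled (user) request stays inside the task's own cpuset, while a plain kernel request may fall back to the nearest mem_exclusive ancestor.

/*
 * Standalone sketch (not kernel code) of the allocation policy this
 * series aims for.  Node sets are modelled as plain bitmasks; the
 * HARDWALL bit stands in for the __GFP_HARDWALL flag added in patch (2).
 * All names below are illustrative assumptions, not the kernel's.
 */
#include <stdio.h>

#define HARDWALL  0x1   /* set for GFP_USER-style requests */

struct cpuset_model {
        unsigned long mems;               /* memory nodes of this cpuset */
        unsigned long excl_ancestor_mems; /* nodes of nearest mem_exclusive ancestor */
};

/* May a request with the given flags take memory from node 'node'? */
static int node_allowed(const struct cpuset_model *cs, int node, int flags)
{
        if (cs->mems & (1UL << node))
                return 1;        /* always fine inside the task's own cpuset */
        if (flags & HARDWALL)
                return 0;        /* user allocations never escape the cpuset */
        /* kernel allocations may fall back to the mem_exclusive ancestor */
        return (cs->excl_ancestor_mems & (1UL << node)) != 0;
}

int main(void)
{
        /* student job: own cpuset is node 2, mem_exclusive half is nodes 0-3 */
        struct cpuset_model student = { .mems = 1UL << 2,
                                        .excl_ancestor_mems = 0xfUL };

        printf("user alloc on node 3:   %d\n", node_allowed(&student, 3, HARDWALL)); /* 0 */
        printf("kernel alloc on node 3: %d\n", node_allowed(&student, 3, 0));        /* 1 */
        printf("kernel alloc on node 5: %d\n", node_allowed(&student, 5, 0));        /* 0 */
        return 0;
}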
Diffstat (limited to 'mm/oom_kill.c')
-rw-r--r--   mm/oom_kill.c   57
1 file changed, 31 insertions, 26 deletions
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 1e56076672f5..3a1d46502938 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -6,8 +6,8 @@
  *  for goading me into coding this file...
  *
  *  The routines in this file are used to kill a process when
- *  we're seriously out of memory. This gets called from kswapd()
- *  in linux/mm/vmscan.c when we really run out of memory.
+ *  we're seriously out of memory. This gets called from __alloc_pages()
+ *  in mm/page_alloc.c when we really run out of memory.
  *
  *  Since we won't call these routines often (on a well-configured
  *  machine) this file will double as a 'coding guide' and a signpost
@@ -26,7 +26,7 @@
 /**
  * oom_badness - calculate a numeric value for how bad this task has been
  * @p: task struct of which task we should calculate
- * @p: current uptime in seconds
+ * @uptime: current uptime in seconds
  *
  * The formula used is relatively simple and documented inline in the
  * function. The main rationale is that we want to select a good task
@@ -57,9 +57,9 @@ unsigned long badness(struct task_struct *p, unsigned long uptime)
 
 	/*
 	 * Processes which fork a lot of child processes are likely
-	 * a good choice. We add the vmsize of the childs if they
+	 * a good choice. We add the vmsize of the children if they
 	 * have an own mm. This prevents forking servers to flood the
-	 * machine with an endless amount of childs
+	 * machine with an endless amount of children
 	 */
 	list_for_each(tsk, &p->children) {
 		struct task_struct *chld;
@@ -143,28 +143,32 @@ static struct task_struct * select_bad_process(void)
 	struct timespec uptime;
 
 	do_posix_clock_monotonic_gettime(&uptime);
-	do_each_thread(g, p)
+	do_each_thread(g, p) {
+		unsigned long points;
+		int releasing;
+
 		/* skip the init task with pid == 1 */
-		if (p->pid > 1 && p->oomkilladj != OOM_DISABLE) {
-			unsigned long points;
-
-			/*
-			 * This is in the process of releasing memory so wait it
-			 * to finish before killing some other task by mistake.
-			 */
-			if ((unlikely(test_tsk_thread_flag(p, TIF_MEMDIE)) || (p->flags & PF_EXITING)) &&
-					!(p->flags & PF_DEAD))
-				return ERR_PTR(-1UL);
-			if (p->flags & PF_SWAPOFF)
-				return p;
-
-			points = badness(p, uptime.tv_sec);
-			if (points > maxpoints || !chosen) {
-				chosen = p;
-				maxpoints = points;
-			}
+		if (p->pid == 1)
+			continue;
+		if (p->oomkilladj == OOM_DISABLE)
+			continue;
+		/*
+		 * This is in the process of releasing memory so for wait it
+		 * to finish before killing some other task by mistake.
+		 */
+		releasing = test_tsk_thread_flag(p, TIF_MEMDIE) ||
+						p->flags & PF_EXITING;
+		if (releasing && !(p->flags & PF_DEAD))
+			return ERR_PTR(-1UL);
+		if (p->flags & PF_SWAPOFF)
+			return p;
+
+		points = badness(p, uptime.tv_sec);
+		if (points > maxpoints || !chosen) {
+			chosen = p;
+			maxpoints = points;
 		}
-	while_each_thread(g, p);
+	} while_each_thread(g, p);
 	return chosen;
 }
 
@@ -189,7 +193,8 @@ static void __oom_kill_task(task_t *p)
 		return;
 	}
 	task_unlock(p);
-	printk(KERN_ERR "Out of Memory: Killed process %d (%s).\n", p->pid, p->comm);
+	printk(KERN_ERR "Out of Memory: Killed process %d (%s).\n",
+							p->pid, p->comm);
 
 	/*
 	 * We give our sacrificial lamb high priority and access to
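The point of the select_bad_process() restructuring above is that every disqualifying case is now a simple 'continue', so the follow-on cpuset patch can add one more per-task test without reindenting the loop. The standalone model below shows the shape of that extra test: skip any candidate whose mem_exclusive cpuset shares no memory nodes with ours, because killing it cannot free memory we can use. The names and bitmask representation here are illustrative only, not the helper the later cpuset_nodes_overlap patch actually adds.

/*
 * Standalone model (not kernel code) of the extra skip condition that
 * patch (4) adds to this loop.  All names are assumptions.
 */
#include <stdio.h>

struct task_model {
        const char   *comm;
        unsigned long excl_mems;   /* nodes of the task's mem_exclusive cpuset */
};

static int excl_nodes_overlap(const struct task_model *a,
                              const struct task_model *b)
{
        return (a->excl_mems & b->excl_mems) != 0;
}

int main(void)
{
        struct task_model us       = { "research", 0xf0 };  /* nodes 4-7 */
        struct task_model student  = { "student",  0x0f };  /* nodes 0-3 */
        struct task_model neighbor = { "neighbor", 0xf0 };  /* nodes 4-7 */
        struct task_model *candidates[] = { &student, &neighbor };
        int i;

        for (i = 0; i < 2; i++) {
                if (!excl_nodes_overlap(&us, candidates[i])) {
                        printf("skip %s: no overlapping memory nodes\n",
                               candidates[i]->comm);
                        continue;   /* mirrors the 'continue' style of the new loop */
                }
                printf("consider %s for the oom badness score\n",
                       candidates[i]->comm);
        }
        return 0;
}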