author     KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>        2008-02-11 23:30:22 -0500
committer  Linus Torvalds <torvalds@woody.linux-foundation.org>    2008-02-11 23:48:29 -0500
commit     31f1de46b90ad360a16e7af3e277d104961df923
tree       a54e8698d4e4d088c4008e0ae91b579b13d2c208 /include
parent     1a510089849ff9f70b654659bf976a6baf3a4833
mempolicy: silently restrict nodemask to allowed nodes
KOSAKI Motohiro noted that "numactl --interleave=all ..." failed in the
presence of memoryless nodes. This patch attempts to fix that problem.
Some background:
numactl --interleave=all calls set_mempolicy(2) with a fully populated
[out to MAX_NUMNODES] nodemask.  set_mempolicy() [in do_set_mempolicy()]
calls contextualize_policy() which requires that the nodemask be a
subset of the current task's mems_allowed; else EINVAL will be returned.
A task's mems_allowed will always be a subset of node_states[N_HIGH_MEMORY]
i.e., nodes with memory. So, a fully populated nodemask will be
declared invalid if it includes memoryless nodes.
NOTE: the same thing will occur when running in a cpuset
with restricted mems_allowed--for the same reason: the
nodemask contains disallowed nodes.
mbind(2), on the other hand, just masks off any nodes in the nodemask
that are not included in the caller's mems_allowed.
In each case [mbind() and set_mempolicy()], mpol_check_policy() will
complain [again, resulting in EINVAL] if the nodemask contains any
memoryless nodes. This is somewhat redundant as mpol_new() will remove
memoryless nodes for interleave policy, as will bind_zonelist()--called
by mpol_new() for BIND policy.
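To make the asymmetry concrete, the pre-patch paths look roughly like
this (an approximate sketch of the mm/mempolicy.c code of this era, not
the literal source):

	/* set_mempolicy() path: any node outside mems_allowed => hard failure */
	static int contextualize_policy(int mode, nodemask_t *nodes)
	{
		if (!nodes)
			return 0;

		cpuset_update_task_memory_state();
		if (!cpuset_nodes_subset_current_mems_allowed(*nodes))
			return -EINVAL;	/* "numactl --interleave=all" dies here */
		return mpol_check_policy(mode, nodes);
	}

	/* mbind() path: silently clip the request to the allowed nodes */
	nodes_and(nodes, nodes, cpuset_current_mems_allowed);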
Proposed fix (the resulting check is sketched in code after this list):
1) modify contextualize_policy logic to:
a) remember whether the incoming node mask is empty.
b) if not, restrict the nodemask to allowed nodes, as is
currently done in-line for mbind(). This guarantees
that the resulting mask includes only nodes with memory.
NOTE: this is a [benign, IMO] change in behavior for
set_mempolicy().  Disallowed nodes will be
silently ignored, rather than returning an error.
c) fold this code into mpol_check_policy(), replace the 2 calls
to contextualize_policy() with direct calls to mpol_check_policy(),
and remove contextualize_policy().
2) In existing mpol_check_policy() logic, after "contextualization":
a) MPOL_DEFAULT: require that the incoming mask "was_empty"
b) MPOL_{BIND|INTERLEAVE}: require that the contextualized nodemask
contains at least one node.
c) add a case for MPOL_PREFERRED: if the incoming mask was not empty
and the resulting mask IS empty, the user specified invalid nodes.
Return EINVAL.
d) remove the now redundant check for memoryless nodes
3) remove the now redundant masking of policy nodes for interleave
policy from mpol_new().
4) Now that mpol_check_policy() contextualizes the nodemask, remove
the in-line nodes_and() from sys_mbind(). I believe that this
restores mbind() to the behavior before the memoryless-nodes
patch series. E.g., we'll no longer treat an invalid nodemask
with MPOL_PREFERRED as local allocation.
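Roughly, the reworked check would look like the following (a sketch of
the intended logic only; the actual patched mpol_check_policy() may
differ in detail):

	static int mpol_check_policy(int mode, nodemask_t *nodes)
	{
		int was_empty, is_empty;

		if (!nodes)
			return 0;

		/* 1a/1b) remember emptiness, then restrict to allowed nodes */
		cpuset_update_task_memory_state();
		is_empty = was_empty = nodes_empty(*nodes);
		if (!was_empty) {
			nodes_and(*nodes, *nodes, cpuset_current_mems_allowed);
			is_empty = nodes_empty(*nodes);	/* after "contextualization" */
		}

		switch (mode) {
		case MPOL_DEFAULT:
			/* 2a) DEFAULT takes no nodes: incoming mask must have been empty */
			if (!was_empty)
				return -EINVAL;
			break;
		case MPOL_BIND:
		case MPOL_INTERLEAVE:
			/* 2b) need at least one allowed node after contextualization */
			if (is_empty)
				return -EINVAL;
			break;
		case MPOL_PREFERRED:
			/* 2c) caller named only disallowed nodes */
			if (!was_empty && is_empty)
				return -EINVAL;
			break;
		}
		return 0;
	}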
[ Patch history:
v1 -> v2:
- Communicate whether or not incoming node mask was empty to
mpol_check_policy() for better error checking.
- As suggested by David Rientjes, remove the now unused
cpuset_nodes_subset_current_mems_allowed() from cpuset.h
v2 -> v3:
- As suggested by KOSAKI Motohiro, fold the "contextualization"
of policy nodemask into mpol_check_policy(). Looks a little
cleaner. ]
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Tested-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include')
 include/linux/cpuset.h | 3 ---
 1 file changed, 3 deletions(-)
diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index f8c9a2752f06..0a26be353cb3 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -26,8 +26,6 @@ extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
 #define cpuset_current_mems_allowed (current->mems_allowed)
 void cpuset_init_current_mems_allowed(void);
 void cpuset_update_task_memory_state(void);
-#define cpuset_nodes_subset_current_mems_allowed(nodes) \
-	nodes_subset((nodes), current->mems_allowed)
 int cpuset_zonelist_valid_mems_allowed(struct zonelist *zl);
 
 extern int __cpuset_zone_allowed_softwall(struct zone *z, gfp_t gfp_mask);
@@ -103,7 +101,6 @@ static inline nodemask_t cpuset_mems_allowed(struct task_struct *p)
 #define cpuset_current_mems_allowed (node_states[N_HIGH_MEMORY])
 static inline void cpuset_init_current_mems_allowed(void) {}
 static inline void cpuset_update_task_memory_state(void) {}
-#define cpuset_nodes_subset_current_mems_allowed(nodes) (1)
 
 static inline int cpuset_zonelist_valid_mems_allowed(struct zonelist *zl)
 {