path: root/include/linux/nodemask.h
Commit message | Author | Age
* [PATCH] bitmap: parse input from kernel and user buffers | Reinette Chatre | 2006-10-11
lib/bitmap.c:bitmap_parse() is a library function that received as input a user buffer. This seemed to have originated from the way the write_proc function of the /proc filesystem operates.

This has been reworked to not use kmalloc and eliminates a lot of get_user() overhead by performing one access_ok before using __get_user(). We need to test if we are in kernel or user space (is_user) and access the buffer differently. We cannot use __get_user() to access kernel addresses in all cases, for example in architectures with separate address spaces for kernel and user.

This function will be useful for other uses as well; for example, taking input for /sysfs instead of /proc, so it was changed to accept kernel buffers. We have this use for the Linux UWB project, as part of the upcoming bandwidth allocator code. Only a few routines used this function and they were changed too.

Signed-off-by: Reinette Chatre <reinette.chatre@linux.intel.com>
Signed-off-by: Inaky Perez-Gonzalez <inaky@linux.intel.com>
Cc: Paul Jackson <pj@sgi.com>
Cc: Joe Korty <joe.korty@ccur.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
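The mechanism described above can be sketched as follows. This is an illustrative helper only; the name parse_next_char and its signature are assumptions, the real code lives in lib/bitmap.c. One access_ok() check is done up front for the whole user buffer, and each character is then fetched either with a plain dereference (kernel buffer) or with a cheap __get_user() (user buffer), selected by is_user.

    #include <asm/uaccess.h>

    /*
     * Illustrative sketch, not the exact lib/bitmap.c code: the parser
     * core takes an is_user flag and fetches the next character
     * accordingly.  The caller is assumed to have already performed a
     * single access_ok() check covering the whole user buffer.
     */
    static inline int parse_next_char(const char *buf, int is_user, char *c)
    {
            if (!is_user) {
                    *c = *buf;      /* kernel buffer: plain dereference */
                    return 0;
            }
            /* user buffer: per-byte __get_user(), access_ok() already done */
            return __get_user(*c, (const char __user *)buf);
    }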
* [PATCH] cpumask: add highest_possible_node_id | Greg Banks | 2006-10-02
cpumask: add highest_possible_node_id(), analogous to highest_possible_processor_id().

[pj@sgi.com: fix typo]
Signed-off-by: Greg Banks <gnb@melbourne.sgi.com>
Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
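By analogy with highest_possible_processor_id(), the new helper presumably just walks node_possible_map; the sketch below is an assumption about the implementation, and the actual definition lives elsewhere in the tree and may differ.

    #include <linux/nodemask.h>

    /*
     * Sketch: highest node id set in node_possible_map.
     * for_each_node_mask() visits set bits in ascending order, so the
     * last node visited is the highest possible node id.
     */
    int highest_possible_node_id(void)
    {
            unsigned int node;
            unsigned int highest = 0;

            for_each_node_mask(node, node_possible_map)
                    highest = node;
            return highest;
    }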
* [PATCH] define for_each_online_pgdat | KAMEZAWA Hiroyuki | 2006-03-27
This patch defines for_each_online_pgdat() as a replacement for for_each_pgdat().

Now, online nodes are managed by node_online_map, but for_each_pgdat() uses pgdat_link to iterate over all nodes (pgdat). This means the management structure for online pgdats is duplicated. I think using node_online_map for this iteration is simpler and saner than pgdat_link. The new macro is named for_each_online_pgdat(). A following patch will fix the callers of for_each_pgdat().

The bootmem allocator uses for_each_pgdat() before pgdat initialization. I don't think that's sane; a following patch will fix it.

Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
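Roughly, the new iterator can be built on node_online_map with helpers along these lines. This is a sketch; the helper names follow the commit's intent, and the exact definitions belong in include/linux/mmzone.h and mm/mmzone.c.

    #include <linux/mmzone.h>
    #include <linux/nodemask.h>

    /* Sketch: walk pgdats via the online node mask instead of a
     * hand-maintained pgdat_link list. */
    static struct pglist_data *first_online_pgdat(void)
    {
            return NODE_DATA(first_node(node_online_map));
    }

    static struct pglist_data *next_online_pgdat(struct pglist_data *pgdat)
    {
            int nid = next_node(pgdat->node_id, node_online_map);

            if (nid == MAX_NUMNODES)        /* no more online nodes */
                    return NULL;
            return NODE_DATA(nid);
    }

    #define for_each_online_pgdat(pgdat)                    \
            for ((pgdat) = first_online_pgdat();            \
                 (pgdat);                                   \
                 (pgdat) = next_online_pgdat(pgdat))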
* [PATCH] remove bogus asm/bug.h includes. | Al Viro | 2006-02-07
A bunch of asm/bug.h includes are both not needed (since it will get pulled in anyway) and bogus (since they are done too early). Removed.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* [PATCH] cpusets: bitmap and mask remap operators | Paul Jackson | 2005-10-30
In the forthcoming task migration support, a key calculation will be mapping cpu and node numbers from the old set to the new set while preserving cpuset-relative offset.

For example, if a task and its pages on nodes 8-11 are being migrated to nodes 24-27, then pages on node 9 (the 2nd node in the old set) should be moved to node 25 (the 2nd node in the new set).

As with other bitmap operations, the proper way to code this is to provide the underlying calculation in lib/bitmap.c, and then to provide the usual cpumask and nodemask wrappers. This patch provides that. These operations are termed 'remap' operations. Remapping both a single bit and a set of bits is supported.

Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
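The remap calculation itself is ordinal-preserving bit translation. The stand-alone userspace program below is not the kernel code (lib/bitmap.c provides the real operations on arbitrary-length bitmaps, with cpumask/nodemask wrappers); it just reproduces the example from the message above under that assumption.

    /*
     * Userspace illustration of the 'remap' idea: a bit keeps its
     * ordinal position within the old mask when translated to the new
     * mask.  Plain 64-bit masks are used for brevity;
     * __builtin_popcountll is a GCC/Clang builtin.
     */
    #include <stdint.h>
    #include <stdio.h>

    /* ordinal of 'bit' inside 'mask' = number of set bits below it */
    static int pos_to_ord(uint64_t mask, int bit)
    {
            return __builtin_popcountll(mask & ((1ULL << bit) - 1));
    }

    /* bit of 'mask' whose ordinal is 'ord', or -1 if out of range */
    static int ord_to_pos(uint64_t mask, int ord)
    {
            for (int bit = 0; bit < 64; bit++)
                    if ((mask & (1ULL << bit)) && ord-- == 0)
                            return bit;
            return -1;
    }

    int main(void)
    {
            uint64_t old = 0xFULL << 8;     /* old set: nodes 8-11  */
            uint64_t new = 0xFULL << 24;    /* new set: nodes 24-27 */
            int oldbit = 9;                 /* 2nd node of the old set */

            printf("node %d remaps to node %d\n",
                   oldbit, ord_to_pos(new, pos_to_ord(old, oldbit)));
            return 0;                       /* prints: node 9 remaps to node 25 */
    }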
* Linux-2.6.12-rc2 (tag: v2.6.12-rc2) | Linus Torvalds | 2005-04-16
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!