author		Tejun Heo <tj@kernel.org>	2009-03-30 06:07:44 -0400
committer	Tejun Heo <tj@kernel.org>	2009-06-24 02:13:35 -0400
commit		e74e396204bfcb67570ba4517b08f5918e69afea (patch)
tree		df57c859e10f7fcbe5790e9b51a106d5bccfe8dc /mm/percpu.c
parent		0017c869ddcb73069905d09f9e98e68627466237 (diff)
percpu: use dynamic percpu allocator as the default percpu allocator
This patch makes most !CONFIG_HAVE_SETUP_PER_CPU_AREA archs use the
dynamic percpu allocator. The first chunk is allocated using the
embedding helper and 8k is reserved for modules. This ensures that
the new allocator behaves almost identically to the original allocator
as far as static percpu variables are concerned, so it shouldn't
introduce much breakage.
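For context, the 8k module reservation mentioned above corresponds to
PERCPU_MODULE_RESERVE. A sketch of its definition as it stood in
include/linux/percpu.h around this time (shown for illustration only,
not part of this diff):

	/*
	 * Reservation carved out of the first chunk for static percpu
	 * variables declared by modules; zero when modules are disabled.
	 */
	#ifdef CONFIG_MODULES
	#define PERCPU_MODULE_RESERVE	(8 << 10)	/* the "8k" above */
	#else
	#define PERCPU_MODULE_RESERVE	0
	#endif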
s390 and alpha use a custom SHIFT_PERCPU_PTR() to work around the
addressing range limit their addressing models impose. Unfortunately,
that workaround breaks if the address is supplied in a variable, so
for now those two archs aren't converted.
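A rough sketch of the distinction (hedged: the real generic macro is
built on RELOC_HIDE() in include/asm-generic/percpu.h, and the
s390/alpha overrides use inline assembly, none of which is reproduced
here):

	/*
	 * Generic case, simplified: plain pointer arithmetic, so the
	 * base pointer may be a static symbol or a value computed at
	 * runtime by the dynamic allocator.
	 */
	#define SHIFT_PERCPU_PTR(ptr, off) \
		((typeof(ptr))((unsigned long)(ptr) + (off)))

	/*
	 * s390/alpha instead fold the percpu symbol itself into an
	 * instruction to stay within their short displacement range.
	 * That only works when the address is a link-time constant,
	 * not when it arrives in a variable, as pointers returned by
	 * the dynamic allocator do -- hence those archs stay on the
	 * legacy setup for now.
	 */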
The following architectures are affected by this change.
* sh
* arm
* cris
* mips
* sparc(32)
* blackfin
* avr32
* parisc (broken, under investigation)
* m32r
* powerpc(32)
As this change makes the dynamic allocator the default one,
CONFIG_HAVE_DYNAMIC_PER_CPU_AREA is replaced with its inverse,
CONFIG_HAVE_LEGACY_PER_CPU_AREA, which is added to the
yet-to-be-converted archs. These archs implement their own
setup_per_cpu_areas() and the conversion is not trivial.
* powerpc(64)
* sparc(64)
* ia64
* alpha
* s390
Boot and batch alloc/free tests were run on x86_32 with debug code
(x86_32 doesn't use the default first chunk initialization). Compile
tested on sparc(32), powerpc(32), arm and alpha.
Kyle McMartin reported that this change breaks parisc. The problem is
still under investigation and he is okay with pushing this patch
forward and fixing parisc later.
[ Impact: use dynamic allocator for most archs w/o custom percpu setup ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reviewed-by: Christoph Lameter <cl@linux.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Bryan Wu <cooloney@kernel.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Matthew Wilcox <matthew@wil.cx>
Cc: Grant Grundler <grundler@parisc-linux.org>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'mm/percpu.c')
-rw-r--r--	mm/percpu.c	40
1 file changed, 39 insertions(+), 1 deletion(-)
diff --git a/mm/percpu.c b/mm/percpu.c
index b70f2acd8853..b14984566f5a 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -43,7 +43,7 @@
  *
  * To use this allocator, arch code should do the followings.
  *
- * - define CONFIG_HAVE_DYNAMIC_PER_CPU_AREA
+ * - drop CONFIG_HAVE_LEGACY_PER_CPU_AREA
  *
  * - define __addr_to_pcpu_ptr() and __pcpu_ptr_to_addr() to translate
  *   regular address to percpu pointer and back if they need to be
@@ -1275,3 +1275,41 @@ ssize_t __init pcpu_embed_first_chunk(size_t static_size, size_t reserved_size,
 				      reserved_size, dyn_size,
 				      pcpue_unit_size, pcpue_ptr, NULL);
 }
+
+/*
+ * Generic percpu area setup.
+ *
+ * The embedding helper is used because its behavior closely resembles
+ * the original non-dynamic generic percpu area setup.  This is
+ * important because many archs have addressing restrictions and might
+ * fail if the percpu area is located far away from the previous
+ * location.  As an added bonus, in non-NUMA cases, embedding is
+ * generally a good idea TLB-wise because percpu area can piggy back
+ * on the physical linear memory mapping which uses large page
+ * mappings on applicable archs.
+ */
+#ifndef CONFIG_HAVE_SETUP_PER_CPU_AREA
+unsigned long __per_cpu_offset[NR_CPUS] __read_mostly;
+EXPORT_SYMBOL(__per_cpu_offset);
+
+void __init setup_per_cpu_areas(void)
+{
+	size_t static_size = __per_cpu_end - __per_cpu_start;
+	ssize_t unit_size;
+	unsigned long delta;
+	unsigned int cpu;
+
+	/*
+	 * Always reserve area for module percpu variables.  That's
+	 * what the legacy allocator did.
+	 */
+	unit_size = pcpu_embed_first_chunk(static_size, PERCPU_MODULE_RESERVE,
+					   PERCPU_DYNAMIC_RESERVE, -1);
+	if (unit_size < 0)
+		panic("Failed to initialized percpu areas.");
+
+	delta = (unsigned long)pcpu_base_addr - (unsigned long)__per_cpu_start;
+	for_each_possible_cpu(cpu)
+		__per_cpu_offset[cpu] = delta + cpu * unit_size;
+}
+#endif /* CONFIG_HAVE_SETUP_PER_CPU_AREA */
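To make the offset arithmetic at the end of the new
setup_per_cpu_areas() concrete, here is a small userspace model of it
(an illustration only: NR_CPUS, the unit size and both areas are faked
with ordinary allocations, and the names merely mirror the kernel's):

	#include <stdio.h>
	#include <stdlib.h>

	#define NR_CPUS		4		/* faked; set by kernel config */
	#define UNIT_SIZE	0x8000UL	/* stand-in for the value
						 * pcpu_embed_first_chunk()
						 * would return */

	/* Models the linker-provided static percpu section
	 * (__per_cpu_start .. __per_cpu_end). */
	static char fake_static_area[256];

	static unsigned long __per_cpu_offset[NR_CPUS];

	int main(void)
	{
		/* Models pcpu_base_addr: one contiguous chunk with one
		 * unit per CPU, as the embedding helper lays it out. */
		char *base = malloc(NR_CPUS * UNIT_SIZE);
		unsigned long delta;
		unsigned int cpu;

		/* Mirrors the patch: delta relocates a static percpu
		 * address into the chunk; each CPU's unit sits
		 * unit_size further along. */
		delta = (unsigned long)base - (unsigned long)fake_static_area;
		for (cpu = 0; cpu < NR_CPUS; cpu++)
			__per_cpu_offset[cpu] = delta + cpu * UNIT_SIZE;

		/* A "static percpu variable" at offset 64 in the section... */
		char *var = fake_static_area + 64;

		/* ...resolves to a distinct per-CPU copy inside the chunk. */
		for (cpu = 0; cpu < NR_CPUS; cpu++)
			printf("cpu%u: %p\n", cpu,
			       (void *)(var + __per_cpu_offset[cpu]));

		free(base);
		return 0;
	}

This is exactly the relocation that lets SHIFT_PERCPU_PTR() on the
converted archs remain plain pointer arithmetic: adding a CPU's offset
to a static percpu address lands in that CPU's unit of the first chunk.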