| author | Hugh Dickins <hugh@veritas.com> | 2005-10-29 21:16:40 -0400 |
|---|---|---|
| committer | Linus Torvalds <torvalds@g5.osdl.org> | 2005-10-30 00:40:42 -0400 |
| commit | 4c21e2f2441dc5fbb957b030333f5a3f2d02dea7 (patch) | |
| tree | 1f76d33bb1d76221c6424bc5fed080a4f91349a6 /arch/um/kernel | |
| parent | b38c6845b695141259019e2b7c0fe6c32a6e720d (diff) | |
[PATCH] mm: split page table lock
Christoph Lameter demonstrated very poor scalability on the SGI 512-way, with
a many-threaded application which concurrently initializes different parts of
a large anonymous area.
This patch corrects that, by using a separate spinlock per page table page, to
guard the page table entries in that page, instead of using the mm's single
page_table_lock. (But even then, page_table_lock is still used to guard page
table allocation, and anon_vma allocation.)
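For context, a minimal sketch (not code from this patch; the fault handler and its arguments are hypothetical) of how a caller takes the split lock through the pte_offset_map_lock()/pte_unmap_unlock() helpers, which hide whether the lock is per page table page or the mm-wide page_table_lock:

#include <linux/mm.h>

static int example_populate_pte(struct mm_struct *mm, pmd_t *pmd,
				unsigned long address, pte_t newval)
{
	spinlock_t *ptl;
	pte_t *pte;

	/* takes the per-page-table-page lock when split ptlock is in
	 * effect, or mm->page_table_lock otherwise */
	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
	if (!pte_none(*pte)) {
		/* someone else populated this entry first */
		pte_unmap_unlock(pte, ptl);
		return 0;
	}
	set_pte_at(mm, address, pte, newval);
	pte_unmap_unlock(pte, ptl);
	return 0;
}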
In this implementation, the spinlock is tucked inside the struct page of the
page table page: with a BUILD_BUG_ON in case it overflows - which it would in
the case of 32-bit PA-RISC with spinlock debugging enabled.
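The layout can be pictured with the following schematic (deliberately not the real struct page; the names here only mirror the idea): the lock overlays fields a page table page has no other use for, and the build-time check fails if a debug-fattened spinlock would no longer fit in the space being reused:

#include <linux/spinlock.h>
#include <linux/bug.h>

struct address_space;			/* forward declaration for the sketch */

struct page_sketch {			/* hypothetical stand-in for struct page */
	unsigned long flags;
	union {
		struct {
			unsigned long private;		/* ordinary pages */
			struct address_space *mapping;
		};
		spinlock_t ptl;				/* page table pages only */
	} u;
};

#define pte_lock_init(page)	spin_lock_init(&(page)->u.ptl)
/* undo the overlay before handing the page back to the allocator */
#define pte_lock_deinit(page)	((page)->u.mapping = NULL)

static inline void split_ptlock_build_check(void)
{
	/* the lock must fit within the space the union reuses */
	BUILD_BUG_ON(sizeof(spinlock_t) >
		     sizeof(unsigned long) + sizeof(struct address_space *));
}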
Splitting the lock is not quite for free: another cacheline access. Ideally,
I suppose we would use split ptlock only for multi-threaded processes on
multi-cpu machines; but deciding that dynamically would have its own costs.
So for now enable it by config, at some number of cpus - since the Kconfig
language doesn't support inequalities, let the preprocessor compare that with
NR_CPUS. But I don't think it's worth being user-configurable: for good
testing of both split and unsplit configs, split now at 4 cpus, and perhaps
change that to 8 later.
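A sketch of that compile-time switch (SPLIT_PTLOCK_CPUS is the Kconfig symbol name assumed here, with the default of 4 taken from the paragraph above; the macro bodies are illustrative, not the exact upstream code):

/*
 * Kconfig supplies CONFIG_SPLIT_PTLOCK_CPUS as a plain integer; the
 * preprocessor performs the ">=" comparison Kconfig itself cannot express.
 */
#if NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS
/* lock lives in the struct page of the page table page (cf. the sketch above) */
#define pte_lockptr(mm, pmd)	(&pmd_page(*(pmd))->u.ptl)
#else
/* too few CPUs to pay the extra cacheline: keep the single mm-wide lock */
#define pte_lockptr(mm, pmd)	(&(mm)->page_table_lock)
#endif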
There is a benefit even for singly threaded processes: kswapd can be attacking
one part of the mm while another part is busy faulting.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'arch/um/kernel')
-rw-r--r-- | arch/um/kernel/skas/mmu.c | 1 |
1 files changed, 1 insertions, 0 deletions
diff --git a/arch/um/kernel/skas/mmu.c b/arch/um/kernel/skas/mmu.c
index 02cf36e0331a..9e5e39cea821 100644
--- a/arch/um/kernel/skas/mmu.c
+++ b/arch/um/kernel/skas/mmu.c
@@ -144,6 +144,7 @@ void destroy_context_skas(struct mm_struct *mm)
 
 	if(!proc_mm || !ptrace_faultinfo){
 		free_page(mmu->id.stack);
+		pte_lock_deinit(virt_to_page(mmu->last_page_table));
 		pte_free_kernel((pte_t *) mmu->last_page_table);
 		dec_page_state(nr_page_table_pages);
 #ifdef CONFIG_3_LEVEL_PGTABLES