author     Hugh Dickins <hugh@veritas.com>           2005-09-03 18:54:41 -0400
committer  Linus Torvalds <torvalds@evo.osdl.org>    2005-09-05 03:05:42 -0400
commit     5d337b9194b1ce3b6fd5f3cb2799455ed2f9a3d1 (patch)
tree       91ed9ef6f4cb5f6a1832f2baaaabd53fcd83513e /mm/filemap.c
parent     048c27fd72816b44e096997d1c6901c3abbfd45b (diff)
[PATCH] swap: swap_lock replaces list+device
The idea of a swap_device_lock per device, and a swap_list_lock over them all,
is appealing; but in practice almost every holder of swap_device_lock must
already hold swap_list_lock, which defeats the purpose of the split.
The only exceptions have been swap_duplicate, valid_swaphandles and an
untrodden path in try_to_unuse (plus a few places added in this series).
valid_swaphandles doesn't show up high in profiles, but swap_duplicate does
demand attention. However, with the hold time in get_swap_pages so much
reduced, I've not yet found a load and set of swap device priorities that show
even swap_duplicate benefiting from the split. Certainly the split is mere
overhead in the common case of a single swap device.
So, replace swap_list_lock and swap_device_lock with a single spinlock_t
swap_lock (generally we seem to prefer an _ in the name, and not to hide it in
a macro).
If someone can show a regression in swap_duplicate, then probably we should
add a hashlock for the swap_map entries alone (shorts not being atomic), so as
to help the case of the single swap device too.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
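
To see why the split bought so little, compare the two locking patterns. The
sketch below is a userspace illustration only: pthread spinlocks stand in for
the kernel's spinlock_t, and the functions are hypothetical simplifications of
the get_swap_page path, not code from mm/swapfile.c.

#include <pthread.h>

#define MAX_SWAPFILES 32

struct swap_info_struct {
	pthread_spinlock_t sdev_lock;	/* the old per-device swap_device_lock */
	/* swap_map, prio, next, ... elided */
};

static pthread_spinlock_t swap_list_lock;	/* old lock over all devices */
static pthread_spinlock_t swap_lock;		/* the single replacement lock */
static struct swap_info_struct swap_info[MAX_SWAPFILES];

/* Old scheme: the per-device lock is almost always taken with
 * swap_list_lock already held, so the outer lock serializes the
 * path anyway and the split buys no extra concurrency here. */
static void old_scheme(int type)
{
	pthread_spin_lock(&swap_list_lock);
	pthread_spin_lock(&swap_info[type].sdev_lock);
	/* scan the device's swap_map for a free slot */
	pthread_spin_unlock(&swap_info[type].sdev_lock);
	pthread_spin_unlock(&swap_list_lock);
}

/* New scheme per this patch: one swap_lock covers the list and every
 * device alike, which is strictly cheaper with a single swap device. */
static void new_scheme(void)
{
	pthread_spin_lock(&swap_lock);
	/* scan the swap_map for a free slot */
	pthread_spin_unlock(&swap_lock);
}

int main(void)
{
	pthread_spin_init(&swap_list_lock, PTHREAD_PROCESS_PRIVATE);
	pthread_spin_init(&swap_lock, PTHREAD_PROCESS_PRIVATE);
	for (int i = 0; i < MAX_SWAPFILES; i++)
		pthread_spin_init(&swap_info[i].sdev_lock, PTHREAD_PROCESS_PRIVATE);
	old_scheme(0);
	new_scheme();
	return 0;
}

With a single swap device the old scheme pays two lock round-trips where one
suffices, which is exactly the "mere overhead" the message describes.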
Diffstat (limited to 'mm/filemap.c')
-rw-r--r--  mm/filemap.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index c11418dd94e8..edc54436fa94 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -54,9 +54,8 @@
  *
  *  ->i_mmap_lock		(vmtruncate)
  *    ->private_lock		(__free_pte->__set_page_dirty_buffers)
- *      ->swap_list_lock
- *        ->swap_device_lock	(exclusive_swap_page, others)
- *          ->mapping->tree_lock
+ *      ->swap_lock		(exclusive_swap_page, others)
+ *        ->mapping->tree_lock
  *
  *  ->i_sem
  *    ->i_mmap_lock		(truncate->unmap_mapping_range)
@@ -86,7 +85,7 @@
  *    ->page_table_lock	(anon_vma_prepare and various)
  *
  *  ->page_table_lock
- *    ->swap_device_lock	(try_to_unmap_one)
+ *    ->swap_lock		(try_to_unmap_one)
  *      ->private_lock	(try_to_unmap_one)
  *      ->tree_lock		(try_to_unmap_one)
  *      ->zone.lru_lock	(follow_page->mark_page_accessed)
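
The comment block these hunks touch is mm/filemap.c's lock-ordering
documentation: every path must acquire nested locks in the documented
outer-to-inner order, so no two paths can each hold a lock the other wants.
A minimal hedged illustration of that rule follows (userspace mutexes and a
hypothetical function name; not kernel code).

#include <pthread.h>

static pthread_mutex_t page_table_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t swap_lock       = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t tree_lock       = PTHREAD_MUTEX_INITIALIZER;

/* Like try_to_unmap_one in the updated comment: always outer-to-inner,
 * never the reverse. If some other path took swap_lock and then
 * page_table_lock, the two paths could each hold one lock and block
 * forever waiting for the other's. */
static void unmap_one_path(void)
{
	pthread_mutex_lock(&page_table_lock);	/* ->page_table_lock */
	pthread_mutex_lock(&swap_lock);		/*   ->swap_lock     */
	pthread_mutex_lock(&tree_lock);		/*     ->tree_lock   */
	/* do the unmap work */
	pthread_mutex_unlock(&tree_lock);
	pthread_mutex_unlock(&swap_lock);
	pthread_mutex_unlock(&page_table_lock);
}

int main(void)
{
	unmap_one_path();
	return 0;
}

Collapsing two hierarchy levels into one swap_lock also shortens this
documented ordering, which is what the first hunk records.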