author		Hugh Dickins <hugh@veritas.com>	2005-09-03 18:54:41 -0400
committer	Linus Torvalds <torvalds@evo.osdl.org>	2005-09-05 03:05:42 -0400
commit		5d337b9194b1ce3b6fd5f3cb2799455ed2f9a3d1 (patch)
tree		91ed9ef6f4cb5f6a1832f2baaaabd53fcd83513e /Documentation
parent		048c27fd72816b44e096997d1c6901c3abbfd45b (diff)
[PATCH] swap: swap_lock replace list+device
The idea of a swap_device_lock per device, and a swap_list_lock over them all,
is appealing; but in practice almost every holder of swap_device_lock must
already hold swap_list_lock, which defeats the purpose of the split.
The only exceptions have been swap_duplicate, valid_swaphandles and an
untrodden path in try_to_unuse (plus a few places added in this series).
valid_swaphandles doesn't show up high in profiles, but swap_duplicate does
demand attention. However, with the hold time in get_swap_pages so much
reduced, I've not yet found a load and set of swap device priorities to show
even swap_duplicate benefitting from the split. Certainly the split is mere
overhead in the common case of a single swap device.
So, replace swap_list_lock and swap_device_lock by spinlock_t swap_lock
(generally we seem to prefer an _ in the name, rather than hiding it in a
macro). If someone can show a regression in swap_duplicate, then probably
we should add a hashlock for the swap_map entries alone (shorts not being
atomic), so as to help the case of the single swap device too.
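
For illustration, here is a minimal sketch of what the consolidation amounts
to. This is not the patch itself: the structure is trimmed down, and
touch_swap_count() is a made-up stand-in for the kind of swap_map update that
swap_duplicate performs.

	#include <linux/spinlock.h>

	/* Sketch only: one static lock replaces swap_list_lock and every
	 * per-device swap_device_lock. */
	static DEFINE_SPINLOCK(swap_lock);

	struct swap_info_struct {
		unsigned short *swap_map;	/* per-swaphandle reference counts */
		unsigned int lowest_bit;
		unsigned int highest_bit;
		/* ... */
	};

	/* Old pattern: almost every caller nested both locks, so the split
	 * bought nothing:
	 *
	 *	swap_list_lock();
	 *	swap_device_lock(si);
	 *	...
	 *	swap_device_unlock(si);
	 *	swap_list_unlock();
	 *
	 * New pattern: a single acquisition covers the list and the device.
	 */
	static void touch_swap_count(struct swap_info_struct *si,
				     unsigned long offset)
	{
		spin_lock(&swap_lock);
		si->swap_map[offset]++;	/* crude stand-in for swap_duplicate's
					 * count update; validity checks omitted */
		spin_unlock(&swap_lock);
	}
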
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'Documentation')
-rw-r--r--	Documentation/vm/locking	15
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/Documentation/vm/locking b/Documentation/vm/locking
index c3ef09ae3bb1..f366fa956179 100644
--- a/Documentation/vm/locking
+++ b/Documentation/vm/locking
@@ -83,19 +83,18 @@ single address space optimization, so that the zap_page_range (from
 vmtruncate) does not lose sending ipi's to cloned threads that might
 be spawned underneath it and go to user mode to drag in pte's into tlbs.
 
-swap_list_lock/swap_device_lock
--------------------------------
+swap_lock
+--------------
 The swap devices are chained in priority order from the "swap_list" header.
 The "swap_list" is used for the round-robin swaphandle allocation strategy.
 The #free swaphandles is maintained in "nr_swap_pages". These two together
-are protected by the swap_list_lock.
+are protected by the swap_lock.
 
-The swap_device_lock, which is per swap device, protects the reference
-counts on the corresponding swaphandles, maintained in the "swap_map"
-array, and the "highest_bit" and "lowest_bit" fields.
+The swap_lock also protects all the device reference counts on the
+corresponding swaphandles, maintained in the "swap_map" array, and the
+"highest_bit" and "lowest_bit" fields.
 
-Both of these are spinlocks, and are never acquired from intr level. The
-locking hierarchy is swap_list_lock -> swap_device_lock.
+The swap_lock is a spinlock, and is never acquired from intr level.
 
 To prevent races between swap space deletion or async readahead swapins
 deciding whether a swap handle is being used, ie worthy of being read in
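
As a hedged companion to the updated text, a sketch of the discipline it
documents, reusing the illustrative struct from the sketch above. The
function free_swap_handle() is invented for the example (it mirrors a
swap_entry_free-style update); the point is that because swap_lock is never
acquired from intr level, the plain spin_lock()/spin_unlock() pair suffices,
and "highest_bit" and "lowest_bit" are only touched under the lock.

	static void free_swap_handle(struct swap_info_struct *si,
				     unsigned int offset)
	{
		spin_lock(&swap_lock);	/* never taken from intr level, so
					 * spin_lock_irqsave() is unnecessary */
		if (si->swap_map[offset] && !--si->swap_map[offset]) {
			/* handle is free again: widen the free-range hints,
			 * both of which swap_lock protects */
			if (offset < si->lowest_bit)
				si->lowest_bit = offset;
			if (offset > si->highest_bit)
				si->highest_bit = offset;
		}
		spin_unlock(&swap_lock);
	}
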