author	Hugh Dickins <hugh@veritas.com>	2005-09-03 18:54:41 -0400
committer	Linus Torvalds <torvalds@evo.osdl.org>	2005-09-05 03:05:42 -0400
commit	5d337b9194b1ce3b6fd5f3cb2799455ed2f9a3d1 (patch)
tree	91ed9ef6f4cb5f6a1832f2baaaabd53fcd83513e /Documentation/vm/locking
parent	048c27fd72816b44e096997d1c6901c3abbfd45b (diff)
[PATCH] swap: swap_lock replace list+device
The idea of a swap_device_lock per device, and a swap_list_lock over them
all, is appealing; but in practice almost every holder of swap_device_lock
must already hold swap_list_lock, which defeats the purpose of the split.

The only exceptions have been swap_duplicate, valid_swaphandles and an
untrodden path in try_to_unuse (plus a few places added in this series).
valid_swaphandles doesn't show up high in profiles, but swap_duplicate does
demand attention. However, with the hold time in get_swap_pages so much
reduced, I've not yet found a load and set of swap device priorities to show
even swap_duplicate benefitting from the split. Certainly the split is mere
overhead in the common case of a single swap device.

So, replace swap_list_lock and swap_device_lock by spinlock_t swap_lock
(generally we seem to prefer an _ in the name, and not hide in a macro).

If someone can show a regression in swap_duplicate, then probably we should
add a hashlock for the swap_map entries alone (shorts being anatomic), so as
to help the case of the single swap device too.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
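A minimal userspace sketch of the consolidation the commit message describes,
assuming POSIX spinlocks as a stand-in for the kernel's spinlock_t: on the
common path the old scheme took swap_list_lock and then swap_device_lock,
while the new scheme takes the single swap_lock. All names here
(swap_info_sketch, touch_swap_map_old/new, sketch_init) are hypothetical
illustrations, not the kernel's own structures or functions.

/* Userspace analogue of the locking change; not kernel code.  The
 * struct layout and the touch_swap_map_* helpers are invented only
 * to contrast the two schemes. */
#include <pthread.h>

struct swap_info_sketch {
    pthread_spinlock_t sdev_lock;   /* old: per-device swap_device_lock */
    unsigned short *swap_map;       /* per-slot reference counts */
};

static pthread_spinlock_t swap_list_lock_;  /* old: lock over the device list */
static pthread_spinlock_t swap_lock_;       /* new: the single swap_lock */

/* Old scheme: on almost every path the list lock is already held when
 * the device lock is taken, so the split is two acquisitions for
 * no extra concurrency. */
static void touch_swap_map_old(struct swap_info_sketch *si, unsigned long off)
{
    pthread_spin_lock(&swap_list_lock_);   /* walk/choose a device */
    pthread_spin_lock(&si->sdev_lock);     /* then guard its swap_map */
    si->swap_map[off]++;
    pthread_spin_unlock(&si->sdev_lock);
    pthread_spin_unlock(&swap_list_lock_);
}

/* New scheme: one spinlock covers both levels. */
static void touch_swap_map_new(struct swap_info_sketch *si, unsigned long off)
{
    pthread_spin_lock(&swap_lock_);
    si->swap_map[off]++;
    pthread_spin_unlock(&swap_lock_);
}

static void sketch_init(void)
{
    pthread_spin_init(&swap_list_lock_, PTHREAD_PROCESS_PRIVATE);
    pthread_spin_init(&swap_lock_, PTHREAD_PROCESS_PRIVATE);
}

The point of the change is visible in the old helper: the nested acquisition
buys nothing when the outer lock is already held on nearly every path, and is
pure overhead with a single swap device.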
Diffstat (limited to 'Documentation/vm/locking')
-rw-r--r--	Documentation/vm/locking	15
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/Documentation/vm/locking b/Documentation/vm/locking
index c3ef09ae3bb1..f366fa956179 100644
--- a/Documentation/vm/locking
+++ b/Documentation/vm/locking
@@ -83,19 +83,18 @@ single address space optimization, so that the zap_page_range (from
 vmtruncate) does not lose sending ipi's to cloned threads that might
 be spawned underneath it and go to user mode to drag in pte's into tlbs.
 
-swap_list_lock/swap_device_lock
--------------------------------
+swap_lock
+--------------
 The swap devices are chained in priority order from the "swap_list" header.
 The "swap_list" is used for the round-robin swaphandle allocation strategy.
 The #free swaphandles is maintained in "nr_swap_pages". These two together
-are protected by the swap_list_lock.
+are protected by the swap_lock.
 
-The swap_device_lock, which is per swap device, protects the reference
-counts on the corresponding swaphandles, maintained in the "swap_map"
-array, and the "highest_bit" and "lowest_bit" fields.
+The swap_lock also protects all the device reference counts on the
+corresponding swaphandles, maintained in the "swap_map" array, and the
+"highest_bit" and "lowest_bit" fields.
 
-Both of these are spinlocks, and are never acquired from intr level. The
-locking hierarchy is swap_list_lock -> swap_device_lock.
+The swap_lock is a spinlock, and is never acquired from intr level.
 
 To prevent races between swap space deletion or async readahead swapins
 deciding whether a swap handle is being used, ie worthy of being read in
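To make the documented invariants concrete, here is a hedged sketch of slot
allocation under the new regime, using the same userspace stand-ins as above:
everything the updated text lists as swap_lock-protected (the priority-ordered
device chain, nr_swap_pages, the swap_map counts, and the lowest_bit hint) is
touched only with swap_lock held. struct swap_dev and alloc_swap_slot are
invented for illustration, not the kernel's actual allocator, and the walk is
simplified to strict priority order rather than the true round-robin.

/* Hedged sketch of allocation under the single swap_lock; types and
 * alloc_swap_slot() are invented stand-ins, not kernel code. */
#include <pthread.h>
#include <stddef.h>

struct swap_dev {
    struct swap_dev *next;                 /* priority-ordered "swap_list" chain */
    unsigned short *swap_map;              /* 0 = free slot, >0 = reference count */
    unsigned long lowest_bit, highest_bit; /* hints bracketing the free slots */
};

static pthread_spinlock_t swap_lock_;      /* guards everything below */
static struct swap_dev *swap_list_head;
static long nr_swap_pages;                 /* # free swaphandles, all devices */

/* The device chain, nr_swap_pages, swap_map counts and the bit hints
 * are read and written only inside swap_lock. */
static struct swap_dev *alloc_swap_slot(unsigned long *offp)
{
    struct swap_dev *dev;
    unsigned long off;

    pthread_spin_lock(&swap_lock_);
    for (dev = swap_list_head; dev != NULL; dev = dev->next) {
        for (off = dev->lowest_bit; off <= dev->highest_bit; off++) {
            if (dev->swap_map[off] == 0) {
                dev->swap_map[off] = 1;    /* take the first reference */
                if (off == dev->lowest_bit)
                    dev->lowest_bit++;     /* tighten the hint range */
                nr_swap_pages--;
                pthread_spin_unlock(&swap_lock_);
                *offp = off;
                return dev;
            }
        }
    }
    pthread_spin_unlock(&swap_lock_);
    return NULL;                           /* no free swaphandle anywhere */
}

With a single swap device, the common case the commit message highlights,
this path costs one lock acquisition where the old split scheme cost two.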