author	Hugh Dickins <hugh@veritas.com>	2005-09-03 18:54:39 -0400
committer	Linus Torvalds <torvalds@evo.osdl.org>	2005-09-05 03:05:41 -0400
commit	52b7efdbe5f5696fc80338560a3fc51e0b0a993c (patch)
tree	30162de9fc8fe3dddb6462f8ff82f1594067cadd /include
parent	7dfad4183bf9cd92f977caa3c12cc74f0eefc0e6 (diff)
[PATCH] swap: scan_swap_map drop swap_device_lock
get_swap_page has often shown up on latency traces, doing lengthy scans while holding two spinlocks. swap_list_lock is already dropped; now scan_swap_map drops swap_device_lock before scanning the swap_map.

While scanning for an empty cluster, don't worry that racing tasks may allocate what was free and free what was allocated; but when allocating an entry, check it's still free after retaking the lock. Avoid dropping the lock in the expected common path. No barriers beyond the locks, just let the cookie crumble; the highest_bit limit is volatile, but benign.

Guard against swapoff: check SWP_WRITEOK before allocating, raise the SWP_SCANNING reference count while in scan_swap_map, and have swapoff wait for that to fall - just use schedule_timeout; we don't want to burden scan_swap_map itself, and it's very unlikely that anyone can really still be in scan_swap_map once swapoff gets this far.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'include')
-rw-r--r--	include/linux/swap.h	2
1 file changed, 2 insertions, 0 deletions
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 93f0eca7f916..db3b5de7c92f 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -107,6 +107,8 @@ enum {
 	SWP_USED	= (1 << 0),	/* is slot in swap_info[] used? */
 	SWP_WRITEOK	= (1 << 1),	/* ok to write to this swap? */
 	SWP_ACTIVE	= (SWP_USED | SWP_WRITEOK),
+					/* add others here before... */
+	SWP_SCANNING	= (1 << 8),	/* refcount in scan_swap_map */
 };
 
 #define SWAP_CLUSTER_MAX 32
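
To illustrate the pattern the commit message describes - drop the per-device lock while scanning, recheck the slot (and SWP_WRITEOK) after retaking the lock before claiming it, and pin the device with an SWP_SCANNING count so a concurrent swapoff can wait for scanners to leave - here is a minimal user-space analogue. It is a sketch, not the kernel code: struct swap_device, scan_map and drain_scanners are hypothetical names, pthread mutexes stand in for spinlocks, and usleep stands in for schedule_timeout; only the SWP_WRITEOK and SWP_SCANNING flag names mirror swap.h.

    /* Hypothetical user-space analogue of the scan_swap_map locking pattern. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NSLOTS        1024
    #define SWP_WRITEOK   (1 << 1)    /* ok to allocate from this device */
    #define SWP_SCANNING  (1 << 8)    /* refcount held while scanning */

    struct swap_device {
        pthread_mutex_t lock;         /* stands in for swap_device_lock */
        unsigned int flags;           /* low bits: state; >= SWP_SCANNING: active scanners */
        atomic_uchar map[NSLOTS];     /* 0 == free, nonzero == allocated */
    };

    /* Return a claimed slot index, or -1 if none (or the device is going away). */
    static int scan_map(struct swap_device *sd)
    {
        pthread_mutex_lock(&sd->lock);
        if (!(sd->flags & SWP_WRITEOK)) {        /* swapoff already started */
            pthread_mutex_unlock(&sd->lock);
            return -1;
        }
        sd->flags += SWP_SCANNING;               /* pin the device against swapoff */
        pthread_mutex_unlock(&sd->lock);         /* scan without holding the lock */

        for (int i = 0; i < NSLOTS; i++) {
            if (atomic_load(&sd->map[i]))        /* racy peek: may be stale */
                continue;
            pthread_mutex_lock(&sd->lock);       /* recheck under the lock */
            if (!atomic_load(&sd->map[i]) && (sd->flags & SWP_WRITEOK)) {
                atomic_store(&sd->map[i], 1);    /* still free: claim it */
                sd->flags -= SWP_SCANNING;
                pthread_mutex_unlock(&sd->lock);
                return i;
            }
            pthread_mutex_unlock(&sd->lock);     /* lost the race; keep scanning */
        }

        pthread_mutex_lock(&sd->lock);
        sd->flags -= SWP_SCANNING;
        pthread_mutex_unlock(&sd->lock);
        return -1;
    }

    /* "swapoff" side: forbid new allocations, then wait for scanners to drain. */
    static void drain_scanners(struct swap_device *sd)
    {
        pthread_mutex_lock(&sd->lock);
        sd->flags &= ~SWP_WRITEOK;
        while (sd->flags >= SWP_SCANNING) {      /* someone still in scan_map */
            pthread_mutex_unlock(&sd->lock);
            usleep(1000);                        /* cheap stand-in for schedule_timeout */
            pthread_mutex_lock(&sd->lock);
        }
        pthread_mutex_unlock(&sd->lock);
    }

    int main(void)
    {
        static struct swap_device sd = {
            .lock  = PTHREAD_MUTEX_INITIALIZER,
            .flags = SWP_WRITEOK,
        };

        printf("claimed slot %d\n", scan_map(&sd));
        drain_scanners(&sd);
        printf("after swapoff, scan_map returns %d\n", scan_map(&sd));
        return 0;
    }

The key point matches the commit: the unlocked peek at the map is tolerated because the slot and SWP_WRITEOK are rechecked after the lock is retaken, and keeping the scanner count in bits at or above SWP_SCANNING lets the teardown path detect in-flight scanners with a simple flags comparison, no extra barriers needed.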