author     Dan Streetman <ddstreet@ieee.org>               2014-06-04 19:09:59 -0400
committer  Linus Torvalds <torvalds@linux-foundation.org>  2014-06-04 19:54:07 -0400
commit     18ab4d4ced0817421e6db6940374cc39d28d65da
tree       1fc3911f333a37b21c39e862a6df70140c2e6202 /mm/frontswap.c
parent     a75f232ce0fe38bd01301899ecd97ffd0254316a
swap: change swap_list_head to plist, add swap_avail_head
Originally get_swap_page() started iterating through the singly-linked
list of swap_info_structs using swap_list.next or highest_priority_index,
both of which were intended to point to the highest priority active swap
target that was not full. The first patch in this series changed the
singly-linked list to a doubly-linked list and removed the logic to start
at the highest priority non-full entry; it now starts scanning at the
highest priority entry each time, even if that entry is full.
Replace the manually ordered swap_list_head with a plist, swap_active_head.
Add a new plist, swap_avail_head. The original swap_active_head plist
contains all active swap_info_structs, as before, while the new
swap_avail_head plist contains only swap_info_structs that are active and
available, i.e. not full. Add a new spinlock, swap_avail_lock, to protect
the swap_avail_head list.
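For reference, a minimal sketch of the new state in mm/swapfile.c could look
like the following; the exact declarations and the avail_list member name are
illustrative assumptions drawn from the description above, not copied from the
patch:

    #include <linux/plist.h>
    #include <linux/spinlock.h>

    /* all registered (active) swap devices, highest priority first */
    static struct plist_head swap_active_head =
            PLIST_HEAD_INIT(swap_active_head);

    /* only the active devices that still have free slots */
    static struct plist_head swap_avail_head =
            PLIST_HEAD_INIT(swap_avail_head);

    /* protects swap_avail_head; nests inside swap_info_struct->lock */
    static DEFINE_SPINLOCK(swap_avail_lock);

    /* struct swap_info_struct now embeds two plist_nodes instead of a
     * list_head: 'list' links it into swap_active_head, and an assumed
     * 'avail_list' links it into swap_avail_head. */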
Mel Gorman suggested using plists since they internally handle ordering
the list entries based on priority, which is exactly what swap was doing
manually. All the ordering code is now removed, and swap_info_struct
entries are simply added to their corresponding plist and automatically
ordered correctly.
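As an illustration of that simplification, registering a device at swapon
time reduces to something like the sketch below. Since a plist keeps its
nodes in ascending ->prio order while swap prefers the highest-priority
device, the swap priority is presumably negated when the nodes are
initialized; the helper name and details here are assumptions:

    /* hypothetical helper: hook an enabled device into both plists.
     * plist_add() places the node at the correct position for us,
     * so there is no manual sorting. */
    static void swap_plist_insert(struct swap_info_struct *si, int prio)
    {
            plist_node_init(&si->list, -prio);
            plist_node_init(&si->avail_list, -prio);

            plist_add(&si->list, &swap_active_head);

            spin_lock(&swap_avail_lock);
            plist_add(&si->avail_list, &swap_avail_head);
            spin_unlock(&swap_avail_lock);
    }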
Using a new plist for available swap_info_structs simplifies and
optimizes get_swap_page(), which no longer has to iterate over full
swap_info_structs. Using a new spinlock for the swap_avail_head plist
allows each swap_info_struct to add or remove itself from the plist
when it becomes full or not-full; previously it could not do so because
swap_info_struct->lock is held while the full<->not-full transition is
made, and the swap_lock protecting the main swap_active_head must be
ordered before any swap_info_struct->lock.
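A minimal sketch of that full/not-full handling, using hypothetical helper
names and the assumed avail_list member from above; both helpers run with
swap_info_struct->lock already held and take only swap_avail_lock, never
swap_lock:

    /* hypothetical: last free slot on this device was just allocated */
    static void swap_mark_full(struct swap_info_struct *si)
    {
            spin_lock(&swap_avail_lock);
            plist_del(&si->avail_list, &swap_avail_head);
            spin_unlock(&swap_avail_lock);
    }

    /* hypothetical: a slot was just freed on a previously full device */
    static void swap_mark_available(struct swap_info_struct *si)
    {
            spin_lock(&swap_avail_lock);
            if (plist_node_empty(&si->avail_list))
                    plist_add(&si->avail_list, &swap_avail_head);
            spin_unlock(&swap_avail_lock);
    }

With that in place, get_swap_page() can walk swap_avail_head under
swap_avail_lock and every entry it visits is known to have free slots.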
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Shaohua Li <shli@fusionio.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dan Streetman <ddstreet@ieee.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
Cc: Weijie Yang <weijieut@gmail.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Bob Liu <bob.liu@oracle.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/frontswap.c')
 mm/frontswap.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/frontswap.c b/mm/frontswap.c
index fae11602e8a9..c30eec536f03 100644
--- a/mm/frontswap.c
+++ b/mm/frontswap.c
@@ -331,7 +331,7 @@ static unsigned long __frontswap_curr_pages(void)
 	struct swap_info_struct *si = NULL;
 
 	assert_spin_locked(&swap_lock);
-	list_for_each_entry(si, &swap_list_head, list)
+	plist_for_each_entry(si, &swap_active_head, list)
 		totalpages += atomic_read(&si->frontswap_pages);
 	return totalpages;
 }
@@ -346,7 +346,7 @@ static int __frontswap_unuse_pages(unsigned long total, unsigned long *unused,
 	unsigned long pages = 0, pages_to_unuse = 0;
 
 	assert_spin_locked(&swap_lock);
-	list_for_each_entry(si, &swap_list_head, list) {
+	plist_for_each_entry(si, &swap_active_head, list) {
 		si_frontswap_pages = atomic_read(&si->frontswap_pages);
 		if (total_pages_to_unuse < si_frontswap_pages) {
 			pages = pages_to_unuse = total_pages_to_unuse;
@@ -408,7 +408,7 @@ void frontswap_shrink(unsigned long target_pages)
 	/*
 	 * we don't want to hold swap_lock while doing a very
 	 * lengthy try_to_unuse, but swap_list may change
-	 * so restart scan from swap_list_head each time
+	 * so restart scan from swap_active_head each time
 	 */
 	spin_lock(&swap_lock);
 	ret = __frontswap_shrink(target_pages, &pages_to_unuse, &type);