commit 21ee9f398be209ccbb62929d35961ca1ed48eec3
tree   4127e14f4a07a2cb7bc6eb902e9c7b0baab8e84f /mm
parent 2f1da6421570d064a94e17190a4955c2df99794d

author    Minchan Kim <minchan.kim@gmail.com>            2011-10-31 20:09:28 -0400
committer Linus Torvalds <torvalds@linux-foundation.org> 2011-10-31 20:30:50 -0400
vmscan: add barrier to prevent evictable page in unevictable list
When a race between putback_lru_page() and shmem_lock() with lock=0 happens,
program execution order is as follows, but the clear_bit on processor #1
could be reordered to just before processor #1's spin_unlock.  The page
would then be stranded on the unevictable list.
Processor #0				Processor #1

spin_lock
SetPageLRU
spin_unlock
				clear_bit(AS_UNEVICTABLE)
				spin_lock
				if PageLRU()
					if !test_bit(AS_UNEVICTABLE)
						move evictable list
smp_mb
if !test_bit(AS_UNEVICTABLE)
	move evictable list
				spin_unlock
However, pagevec_lookup() in scan_mapping_unevictable_pages() contains
rcu_read_[un]lock(), which happens to prevent the reordering before
test_bit(AS_UNEVICTABLE) is reached on processor #1, so the problem never
actually occurs.  But that is an unintended side effect, and we should
solve the problem properly.
This patch adds a barrier after mapping_clear_unevictable().
I have not actually hit this problem; I just found it during review.
Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Rik van Riel <riel@redhat.com>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Acked-by: Johannes Weiner <jweiner@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
 mm/shmem.c  |  6 ++++++
 mm/vmscan.c | 11 ++++++-----
 2 files changed, 12 insertions(+), 5 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 2d3577295298..fa4fa6ce13bc 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1068,6 +1068,12 @@ int shmem_lock(struct file *file, int lock, struct user_struct *user)
 		user_shm_unlock(inode->i_size, user);
 		info->flags &= ~VM_LOCKED;
 		mapping_clear_unevictable(file->f_mapping);
+		/*
+		 * Ensure that a racing putback_lru_page() can see
+		 * the pages of this mapping are evictable when we
+		 * skip them due to !PageLRU during the scan.
+		 */
+		smp_mb__after_clear_bit();
 		scan_mapping_unevictable_pages(file->f_mapping);
 	}
 	retval = 0;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3886b0bd7869..f51a33e8ed89 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -633,13 +633,14 @@ redo:
 		lru = LRU_UNEVICTABLE;
 		add_page_to_unevictable_list(page);
 		/*
-		 * When racing with an mlock clearing (page is
-		 * unlocked), make sure that if the other thread does
-		 * not observe our setting of PG_lru and fails
-		 * isolation, we see PG_mlocked cleared below and move
+		 * When racing with an mlock or AS_UNEVICTABLE clearing
+		 * (page is unlocked) make sure that if the other thread
+		 * does not observe our setting of PG_lru and fails
+		 * isolation/check_move_unevictable_page,
+		 * we see PG_mlocked/AS_UNEVICTABLE cleared below and move
 		 * the page back to the evictable list.
 		 *
-		 * The other side is TestClearPageMlocked().
+		 * The other side is TestClearPageMlocked() or shmem_lock().
 		 */
 		smp_mb();
 	}