author		Hugh Dickins <hughd@google.com>			2016-05-19 20:12:41 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2016-05-19 22:12:14 -0400
commit		fa9949da59a15017a02c86b087c7499d7b5702be
tree		1191eac8b64eb3ba21b1286d9f3842db0388af82	/mm/swap_state.c
parent		9d5e6a9f22311b00a20ff9b072760ad3e73f0d99
mm: use __SetPageSwapBacked and don't ClearPageSwapBacked
v3.16 commit 07a427884348 ("mm: shmem: avoid atomic operation during
shmem_getpage_gfp") rightly replaced one instance of SetPageSwapBacked
by __SetPageSwapBacked, pointing out that the newly allocated page is
not yet visible to other users (except speculative get_page_unless_zero-
ers, who may not update page flags before their further checks).
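For reference, the two variants differ only in the bitop used; a simplified
sketch, eliding the PAGEFLAG()/PF_NO_TAIL policy wrappers in
include/linux/page-flags.h:

	static inline void SetPageSwapBacked(struct page *page)
	{
		set_bit(PG_swapbacked, &page->flags);	/* atomic RMW */
	}

	static inline void __SetPageSwapBacked(struct page *page)
	{
		__set_bit(PG_swapbacked, &page->flags);	/* plain RMW, no lock */
	}

The atomic set_bit() is only needed when other CPUs may be updating
page->flags concurrently; on a page nobody else can yet see, the plain
__set_bit() suffices.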
That was part of a series in which Mel was focused on tmpfs profiles:
but almost all SetPageSwapBacked uses can be so optimized, with the same
justification.
Remove ClearPageSwapBacked from __read_swap_cache_async() error path:
it's not an error to free a page with PG_swapbacked set.
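To spell out why: PG_swapbacked is not in PAGE_FLAGS_CHECK_AT_FREE, the set
of flags the allocator reports as a bad page, and all page flags covered by
PAGE_FLAGS_CHECK_AT_PREP are wiped on the free path anyway. A rough sketch,
simplified from free_pages_check() in mm/page_alloc.c:

	static inline int free_pages_check(struct page *page)
	{
		/* PG_locked, PG_lru, PG_writeback etc. mean a bad page here */
		if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_FREE)) {
			bad_page(page, "PAGE_FLAGS_CHECK_AT_FREE flag(s) set",
				 PAGE_FLAGS_CHECK_AT_FREE);
			return 1;
		}
		/* PG_swapbacked is not among them: it is simply cleared */
		page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
		return 0;
	}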
Follow a convention of __SetPageLocked, __SetPageSwapBacked instead of
doing it differently in different places; but that's for tidiness - if
the ordering actually mattered, we should not be using the __variants.
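At a typical allocation site the convention then reads as below (hypothetical
caller shown for illustration; it is the same pattern shmem_getpage_gfp()
uses after this series):

	page = alloc_page(gfp_mask);
	if (!page)
		return -ENOMEM;
	__SetPageLocked(page);		/* page not yet visible to others, */
	__SetPageSwapBacked(page);	/* so non-atomic flag setting is safe */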
There's probably scope for further __SetPageFlags in other places, but
SwapBacked is the one I'm interested in at the moment.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Ning Qu <quning@gmail.com>
Reviewed-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/swap_state.c')
-rw-r--r--	mm/swap_state.c	3
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 366ce3518703..0d457e7db8d6 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -358,7 +358,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 
 	/* May fail (-ENOMEM) if radix-tree node allocation failed. */
 	__SetPageLocked(new_page);
-	SetPageSwapBacked(new_page);
+	__SetPageSwapBacked(new_page);
 	err = __add_to_swap_cache(new_page, entry);
 	if (likely(!err)) {
 		radix_tree_preload_end();
@@ -370,7 +370,6 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		return new_page;
 	}
 	radix_tree_preload_end();
-	ClearPageSwapBacked(new_page);
 	__ClearPageLocked(new_page);
 	/*
 	 * add_to_swap_cache() doesn't return -EEXIST, so we can safely