path: root/mm/vmscan.c
author:    Joonsoo Kim <iamjoonsoo.kim@lge.com>	2016-05-19 20:10:49 -0400
committer: Linus Torvalds <torvalds@linux-foundation.org>	2016-05-19 22:12:14 -0400
commit:    0139aa7b7fa12ceef095d99dc36606a5b10ab83a (patch)
tree:      94da74f2f79911a11a3c7c34f73ba971dec41a7e /mm/vmscan.c
parent:    6d061f9f6136d477932088c24ce155d7dc785746 (diff)
mm: rename _count, field of the struct page, to _refcount
Many developers already know that the reference count field of struct page is _count and that it is an atomic type, so they may try to handle it directly, which would defeat the purpose of the page reference count tracepoints. To prevent direct modification of _count, rename it to _refcount and add a warning comment to the code. Developers who need to handle the reference count will then see that the field must not be accessed directly.

[akpm@linux-foundation.org: fix comments, per Vlastimil]
[akpm@linux-foundation.org: Documentation/vm/transhuge.txt too]
[sfr@canb.auug.org.au: sync ethernet driver changes]

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Sunil Goutham <sgoutham@cavium.com>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Manish Chopra <manish.chopra@qlogic.com>
Cc: Yuval Mintz <yuval.mintz@qlogic.com>
Cc: Tariq Toukan <tariqt@mellanox.com>
Cc: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
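The intended replacement for direct _count manipulation is the page_ref_*() accessor family from include/linux/page_ref.h (and the higher-level get_page()/put_page()), which keep the page_ref tracepoints working. A minimal sketch of the difference, assuming a kernel of roughly this vintage; hold_page_example() is a hypothetical illustration, not code from this patch:

#include <linux/mm.h>		/* get_page(), put_page() */
#include <linux/page_ref.h>	/* page_ref_count() and friends */
#include <linux/printk.h>

/* Hypothetical example, not part of this patch. */
static void hold_page_example(struct page *page)
{
	/*
	 * Touching the field directly would bypass the page_ref
	 * tracepoints, and after this patch the field is no longer
	 * called _count, so atomic_inc(&page->_count) would not
	 * even compile.
	 */

	/* Preferred: the accessor also fires the tracepoint. */
	get_page(page);

	pr_info("page refcount is now %d\n", page_ref_count(page));

	put_page(page);
}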
Diffstat (limited to 'mm/vmscan.c')
-rw-r--r--	mm/vmscan.c	4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 142cb61f4822..d3a02ac3eed7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -633,7 +633,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
 	 *
 	 * Reversing the order of the tests ensures such a situation cannot
 	 * escape unnoticed. The smp_rmb is needed to ensure the page->flags
-	 * load is not satisfied before that of page->_count.
+	 * load is not satisfied before that of page->_refcount.
 	 *
 	 * Note that if SetPageDirty is always performed via set_page_dirty,
 	 * and thus under tree_lock, then this ordering is not required.
@@ -1720,7 +1720,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
  * It is safe to rely on PG_active against the non-LRU pages in here because
  * nobody will play with that bit on a non-LRU page.
  *
- * The downside is that we have to touch page->_count against each page.
+ * The downside is that we have to touch page->_refcount against each page.
  * But we had to alter page->flags anyway.
  */
 
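The comment changed in the first hunk concerns load ordering in __remove_mapping(): page->_refcount has to be read before page->flags, with a read barrier in between, so that a page dirtied after the refcount check is still caught by the PageDirty() test. A rough sketch of that pattern, assuming the page_ref_freeze()/page_ref_unfreeze() helpers; the function below is a simplified illustration, not the exact kernel code:

#include <linux/mm.h>
#include <linux/page-flags.h>	/* PageDirty() */
#include <linux/page_ref.h>	/* page_ref_freeze(), page_ref_unfreeze() */

/* Simplified illustration of the ordering argument; hypothetical helper. */
static int try_to_drop_clean_page(struct page *page)
{
	/*
	 * Read (and freeze) page->_refcount first. The atomic_cmpxchg
	 * inside page_ref_freeze() provides the barrier the comment
	 * mentions, so the later page->flags load cannot be satisfied
	 * early. 2 == one reference held by the page cache, one by the
	 * caller.
	 */
	if (!page_ref_freeze(page, 2))
		return 0;			/* someone else holds a reference */

	if (unlikely(PageDirty(page))) {
		/* The page was dirtied in the window; back out. */
		page_ref_unfreeze(page, 2);
		return 0;
	}

	/* ... detach the page from its mapping here ... */
	return 1;
}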