author     Arun KS <arunks@codeaurora.org>  2018-12-28 03:34:20 -0500
committer  Linus Torvalds <torvalds@linux-foundation.org>  2018-12-28 15:11:47 -0500
commit     3d6357de8aa09e1966770dc1171c72679946464f (patch)
tree       a24bb2f8795ea4927c1ff64d5f5f00398bee925d /mm/page_alloc.c
parent     fecd4a50baaebf6bde7f9e0b88fef13ffe5d98a1 (diff)
mm: reference totalram_pages and managed_pages once per function
Patch series "mm: convert totalram_pages, totalhigh_pages and managed
pages to atomic", v5.
This series converts totalram_pages, totalhigh_pages and
zone->managed_pages to atomic variables.
totalram_pages, zone->managed_pages and totalhigh_pages updates are
protected by managed_page_count_lock, but readers never care about it.
Convert these variables to atomic to avoid readers potentially seeing a
store tear.
The main motivation was that handling of managed_page_count_lock was
complicating things.  It was discussed at length here,
https://lore.kernel.org/patchwork/patch/995739/#1181785 It seemed better
to remove the lock and convert the variables to atomic.  With the change,
preventing potential store-to-read tearing comes as a bonus.
This patch (of 4):
This is in preparation for a later patch which converts totalram_pages and
zone->managed_pages to atomic variables.  Please note that re-reading the
value might lead to a different value and as such could lead to
unexpected behavior.  There are no known bugs as a result of the current
code, but it is better to prevent them in principle.
Link: http://lkml.kernel.org/r/1542090790-21750-2-git-send-email-arunks@codeaurora.org
Signed-off-by: Arun KS <arunks@codeaurora.org>
Reviewed-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/page_alloc.c')
-rw-r--r--  mm/page_alloc.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c82c41edfe22..b79e79caea99 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7257,6 +7257,7 @@ static void calculate_totalreserve_pages(void)
 		for (i = 0; i < MAX_NR_ZONES; i++) {
 			struct zone *zone = pgdat->node_zones + i;
 			long max = 0;
+			unsigned long managed_pages = zone->managed_pages;
 
 			/* Find valid and maximum lowmem_reserve in the zone */
 			for (j = i; j < MAX_NR_ZONES; j++) {
@@ -7267,8 +7268,8 @@ static void calculate_totalreserve_pages(void)
 			/* we treat the high watermark as reserved pages. */
 			max += high_wmark_pages(zone);
 
-			if (max > zone->managed_pages)
-				max = zone->managed_pages;
+			if (max > managed_pages)
+				max = managed_pages;
 
 			pgdat->totalreserve_pages += max;
 