author		Michal Hocko <mhocko@suse.cz>	2013-02-22 19:32:30 -0500
committer	Linus Torvalds <torvalds@linux-foundation.org>	2013-02-23 20:50:10 -0500
commit		a394cb8ee632ec5edce20309901ec66767497a43 (patch)
tree		f1b02c0329a8614810efe5a1f45f51ae64d46d33 /mm/vmscan.c
parent		4ca3a69bcb6875c3f20802522c1b4fc56bb14608 (diff)
memcg,vmscan: do not break out targeted reclaim without reclaimed pages
Targeted (hard or soft limit) reclaim has traditionally tried to scan
one group with decreasing priority until nr_to_reclaim
(SWAP_CLUSTER_MAX) pages are reclaimed or all priorities are
exhausted. The reclaim is then retried until the limit is met.
This approach, however, doesn't work well with deeper hierarchies where
groups higher in the hierarchy have no pages or only very few (this
usually happens when those groups have no tasks and hold only pages
re-parented after some of their children were removed). Such groups are
pointlessly scanned with decreasing priority even though there is
nothing to reclaim from them.
The easiest fix is to break out of the memcg iteration loop in
shrink_zone only when the whole hierarchy has been visited or
sufficient pages have been reclaimed. This is also more natural because
the reclaimer expects the hierarchy under the given root to be
reclaimed. As a result, soft limit reclaim, which does its own
iteration, can also be simplified.
[yinghan@google.com: break out of the hierarchy loop only if nr_reclaimed exceeded nr_to_reclaim]
[akpm@linux-foundation.org: use conventional comparison order]
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Reported-by: Ying Han <yinghan@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Tejun Heo <htejun@gmail.com>
Cc: Glauber Costa <glommer@parallels.com>
Cc: Li Zefan <lizefan@huawei.com>
Signed-off-by: Ying Han <yinghan@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
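For reference, here is a minimal sketch of how the memcg iteration loop
in shrink_zone() reads with this change applied. It is reconstructed
from the hunk below together with the surrounding mm/vmscan.c code of
this kernel version (mem_cgroup_iter(), mem_cgroup_zone_lruvec() and
the reclaim cookie are part of that surrounding context, not of this
patch), so treat it as an illustration rather than the exact tree
contents:

static void shrink_zone(struct zone *zone, struct scan_control *sc)
{
	struct mem_cgroup *root = sc->target_mem_cgroup;
	struct mem_cgroup_reclaim_cookie reclaim = {
		.zone = zone,
		.priority = sc->priority,
	};
	struct mem_cgroup *memcg;

	/*
	 * Walk the hierarchy under the target memcg; for global reclaim
	 * the root is NULL and the walk covers all memory cgroups.
	 */
	memcg = mem_cgroup_iter(root, NULL, &reclaim);
	do {
		struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg);

		shrink_lruvec(lruvec, sc);

		/*
		 * Targeted (limit) reclaim may stop early, but only once
		 * nr_to_reclaim pages have actually been reclaimed; a
		 * memcg with nothing to reclaim no longer ends the walk.
		 */
		if (!global_reclaim(sc) &&
				sc->nr_reclaimed >= sc->nr_to_reclaim) {
			mem_cgroup_iter_break(root, memcg);
			break;
		}
		memcg = mem_cgroup_iter(root, memcg, &reclaim);
	} while (memcg);
}

The net effect is that one round of targeted reclaim covers the whole
hierarchy under the given root unless the nr_to_reclaim target is met
first, which is what allows the separate iteration in soft limit
reclaim to be simplified.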
Diffstat (limited to 'mm/vmscan.c')
-rw-r--r--	mm/vmscan.c	19
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 292f50a2a685..463990941a78 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1973,18 +1973,17 @@ static void shrink_zone(struct zone *zone, struct scan_control *sc)
 		shrink_lruvec(lruvec, sc);
 
 		/*
-		 * Limit reclaim has historically picked one
-		 * memcg and scanned it with decreasing
-		 * priority levels until nr_to_reclaim had
-		 * been reclaimed. This priority cycle is
-		 * thus over after a single memcg.
-		 *
-		 * Direct reclaim and kswapd, on the other
-		 * hand, have to scan all memory cgroups to
-		 * fulfill the overall scan target for the
+		 * Direct reclaim and kswapd have to scan all memory
+		 * cgroups to fulfill the overall scan target for the
 		 * zone.
+		 *
+		 * Limit reclaim, on the other hand, only cares about
+		 * nr_to_reclaim pages to be reclaimed and it will
+		 * retry with decreasing priority if one round over the
+		 * whole hierarchy is not sufficient.
 		 */
-		if (!global_reclaim(sc)) {
+		if (!global_reclaim(sc) &&
+				sc->nr_reclaimed >= sc->nr_to_reclaim) {
 			mem_cgroup_iter_break(root, memcg);
 			break;
 		}