author		KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>	2009-01-07 21:07:56 -0500
committer	Linus Torvalds <torvalds@linux-foundation.org>	2009-01-08 11:31:05 -0500
commit		d13d144309d2e5a3e6ad978b16c1d0226ddc9231 (patch)
tree		37c19902b527823956db969d9428737081b2a94d /mm/shmem.c
parent		c1e862c1f5ad34771b6d0a528cf681e0dcad7c86 (diff)
memcg: handle swap caches
SwapCache support for memory resource controller (memcg)
Before the mem+swap controller, memcg itself should handle SwapCache in a
proper way. This patch is cut out from that work.
In the current memcg, SwapCache is simply leaked and the user can create tons
of SwapCache. This is an accounting leak and should be handled.
SwapCache accounting is done as follows (a sketch appears after this
description).
charge (anon)
- charged when it's mapped.
  (because of readahead, charging at add_to_swap_cache() is not sane)
uncharge (anon)
- uncharged when it's dropped from swapcache and fully unmapped.
  This means it's not uncharged at unmap time.
  Note: deletion from the swap cache at swap-in is done after rmap
  information is established.
charge (shmem)
- charged at swap-in. This prevents a charge at add_to_page_cache().
uncharge (shmem)
- uncharged when it's dropped from swapcache and not on shmem's
  radix-tree.
At migration, the check against the 'old page' is modified to handle shmem.
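To make the rules above concrete, here is a minimal user-space sketch of the
charging policy. This is NOT kernel code: the struct, helper names and counter
below are stand-ins invented purely for illustration; only the events (map,
unmap, swap-in, drop from swap cache) mirror the description in this message.

	/* Toy model of the SwapCache accounting rules -- not kernel code. */
	#include <stdbool.h>
	#include <stdio.h>

	static int memcg_charged_pages;

	struct page {
		bool in_swap_cache;
		bool mapped;		/* anon: rmap established */
		bool on_shmem_tree;	/* shmem: on the radix-tree */
		bool charged;		/* PCG_USED-like flag */
	};

	static void charge(struct page *p)
	{
		if (!p->charged) {
			p->charged = true;
			memcg_charged_pages++;
		}
	}

	static void uncharge(struct page *p)
	{
		if (p->charged) {
			p->charged = false;
			memcg_charged_pages--;
		}
	}

	/* anon: charged when mapped, not at add_to_swap_cache() (readahead). */
	static void anon_map(struct page *p)
	{
		p->mapped = true;
		charge(p);
	}

	/* unmap alone does not uncharge while the page sits in swap cache. */
	static void anon_unmap(struct page *p)
	{
		p->mapped = false;
		if (!p->in_swap_cache)
			uncharge(p);
	}

	/* uncharge only once the page left swapcache and has no other user. */
	static void drop_from_swap_cache(struct page *p)
	{
		p->in_swap_cache = false;
		if (!p->mapped && !p->on_shmem_tree)
			uncharge(p);
	}

	/* shmem: charged at swap-in, so add_to_page_cache() need not charge. */
	static void shmem_swapin(struct page *p)
	{
		charge(p);
		p->on_shmem_tree = true;
		drop_from_swap_cache(p);	/* stays charged: on the tree */
	}

	int main(void)
	{
		struct page anon = { .in_swap_cache = true };
		struct page shm  = { .in_swap_cache = true };

		anon_map(&anon);		/* charged here */
		anon_unmap(&anon);		/* still charged: in swap cache */
		drop_from_swap_cache(&anon);	/* now uncharged */

		shmem_swapin(&shm);		/* charged once, at swap-in */

		printf("charged pages: %d\n", memcg_charged_pages);
		return 0;
	}

Compiled as ordinary C this prints "charged pages: 1", since only the shmem
page is still accounted at the end.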
Compared with the old version discussed earlier (which caused trouble), this
one has the advantages of
- the PCG_USED bit.
- simple migration handling.
So the situation is much easier than it was several months ago, maybe.
[hugh@veritas.com: memcg: handle swap caches build fix]
Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Tested-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/shmem.c')
-rw-r--r--	mm/shmem.c	18
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index bd9b4ea307b2..adf5c3eedbc9 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -928,8 +928,12 @@ found:
         error = 1;
         if (!inode)
                 goto out;
-        /* Charge page using GFP_HIGHUSER_MOVABLE while we can wait */
-        error = mem_cgroup_cache_charge(page, current->mm, GFP_HIGHUSER_MOVABLE);
+        /*
+         * Charge page using GFP_HIGHUSER_MOVABLE while we can wait.
+         * charged back to the user(not to caller) when swap account is used.
+         */
+        error = mem_cgroup_cache_charge_swapin(page,
+                                current->mm, GFP_HIGHUSER_MOVABLE, true);
         if (error)
                 goto out;
         error = radix_tree_preload(GFP_KERNEL);
@@ -1266,6 +1270,16 @@ repeat:
                                 goto repeat;
                         }
                         wait_on_page_locked(swappage);
+                        /*
+                         * We want to avoid charge at add_to_page_cache().
+                         * charge against this swap cache here.
+                         */
+                        if (mem_cgroup_cache_charge_swapin(swappage,
+                                                current->mm, gfp, false)) {
+                                page_cache_release(swappage);
+                                error = -ENOMEM;
+                                goto failed;
+                        }
                         page_cache_release(swappage);
                         goto repeat;
                 }