author    KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>  2009-01-07 21:08:00 -0500
committer Linus Torvalds <torvalds@linux-foundation.org>  2009-01-08 11:31:05 -0500
commit    8c7c6e34a1256a5082d38c8e9bd1474476912715
tree      09f53c7c4bac5532a9ecbdadb4450702c744ea6f
parent    27a7faa0779dd13729196c1a818c294f44bbd1ee
memcg: mem+swap controller core
This patch implements a per-cgroup limit on usage of memory+swap. However,
because there is SwapCache, double counting of swap-cache and swap-entry
is avoided.

The mem+swap controller works as follows.
 - memory usage is limited by memory.limit_in_bytes.
 - memory + swap usage is limited by memory.memsw.limit_in_bytes.

This has the following benefits.
 - A user can limit total resource usage of mem+swap.

   Without this, because the memory resource controller doesn't take care
   of swap usage, a process can exhaust all of swap (by a memory leak).
   We can avoid that case.

   Also, swap is a shared resource, but it cannot be reclaimed (it does
   not go back to memory) until it is used. This characteristic can cause
   trouble when memory is divided into parts by cpuset or memcg.
   Consider groups A and B. After some applications run, the system can
   end up as:

   Group A -- very large free memory space, but occupies 99% of swap.
   Group B -- under memory shortage, but cannot use swap... it's nearly full.

   The ability to set an appropriate swap limit for each group is required.

Maybe someone wonders, "why not swap, but mem+swap?"

 - The global LRU (kswapd) can swap out arbitrary pages. Swap-out means
   moving an account from memory to swap... there is no change in usage of
   mem+swap. In other words, when we want to limit the usage of swap
   without affecting the global LRU, a mem+swap limit is better than just
   limiting swap.

Accounting target information is stored in swap_cgroup, which is a
per-swap-entry record.

Charging is done as follows.
  map
    - charge page and memsw.
  unmap
    - uncharge page/memsw if not SwapCache.
  swap-out (__delete_from_swap_cache)
    - uncharge page
    - record mem_cgroup information to swap_cgroup.
  swap-in (do_swap_page)
    - charged as page and memsw.
      record in swap_cgroup is cleared.
      memsw accounting is decremented.
  swap-free (swap_free())
    - if the swap entry is freed, memsw is uncharged by PAGE_SIZE.

Some people work under never-swap environments and consider swap to be
something bad. For such people, this mem+swap controller extension is just
overhead, and that overhead can be avoided by a config or boot option.
(See Kconfig; the details are not in this patch.)

TODO:
 - maybe more optimization can be done in the swap-in path (but it is not
   very safe). For now we just do simple accounting at this stage.

[nishimura@mxp.nes.nec.co.jp: make resize limit hold mutex]
[hugh@veritas.com: memswap controller core swapcache fixes]
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
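As a concrete illustration of the interface this patch adds, a minimal
usage sketch follows. It assumes the memory controller is mounted at
/cgroups as elsewhere in memory.txt; the group name and sizes are
illustrative, and only the memory.memsw.* files come from this patch.

  # Mount the memory controller and create a group (path is illustrative).
  mount -t cgroup none /cgroups -o memory
  mkdir /cgroups/0

  # Cap memory at 4M, then mem+swap at 6M. The memsw limit must be at
  # least as large as the plain memory limit (so set the memory limit
  # first); this group may then use at most 2M of swap on top of its
  # memory limit.
  echo 4M > /cgroups/0/memory.limit_in_bytes
  echo 6M > /cgroups/0/memory.memsw.limit_in_bytes

  # Current usage; memsw.usage_in_bytes always covers usage_in_bytes.
  cat /cgroups/0/memory.usage_in_bytes
  cat /cgroups/0/memory.memsw.usage_in_bytes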
Diffstat (limited to 'Documentation/controllers')
 Documentation/controllers/memory.txt | 29 +++++++++++++++++++++++++---
 1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/Documentation/controllers/memory.txt b/Documentation/controllers/memory.txt
index 9fe2d0eabe05..05fe29ab1e58 100644
--- a/Documentation/controllers/memory.txt
+++ b/Documentation/controllers/memory.txt
@@ -137,12 +137,32 @@ behind this approach is that a cgroup that aggressively uses a shared
 page will eventually get charged for it (once it is uncharged from
 the cgroup that brought it in -- this will happen on memory pressure).
 
-Exception: When you do swapoff and make swapped-out pages of shmem(tmpfs) to
+Exception: If CONFIG_CGROUP_CGROUP_MEM_RES_CTLR_SWAP is not used..
+When you do swapoff and make swapped-out pages of shmem(tmpfs) to
 be backed into memory in force, charges for pages are accounted against the
 caller of swapoff rather than the users of shmem.
 
 
-2.4 Reclaim
+2.4 Swap Extension (CONFIG_CGROUP_MEM_RES_CTLR_SWAP)
+Swap Extension allows you to record charge for swap. A swapped-in page is
+charged back to original page allocator if possible.
+
+When swap is accounted, following files are added.
+ - memory.memsw.usage_in_bytes.
+ - memory.memsw.limit_in_bytes.
+
+usage of mem+swap is limited by memsw.limit_in_bytes.
+
+Note: why 'mem+swap' rather than swap.
+The global LRU(kswapd) can swap out arbitrary pages. Swap-out means
+to move account from memory to swap...there is no change in usage of
+mem+swap.
+
+In other words, when we want to limit the usage of swap without affecting
+global LRU, mem+swap limit is better than just limiting swap from OS point
+of view.
+
+2.5 Reclaim
 
 Each cgroup maintains a per cgroup LRU that consists of an active
 and inactive list. When a cgroup goes over its limit, we first try
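A practical corollary of the note above ("why 'mem+swap' rather than
swap"): because swap-out moves a charge from memory to swap without
changing mem+swap usage, setting the memsw limit equal to the memory
limit leaves a group nothing to gain from swapping. A hedged sketch,
reusing the illustrative /cgroups/0 group from the earlier example:

  # With the two limits equal, pages swapped out still count against the
  # same 4M ceiling, so the group effectively cannot profit from swap.
  echo 4M > /cgroups/0/memory.limit_in_bytes
  echo 4M > /cgroups/0/memory.memsw.limit_in_bytes

  # After the global LRU swaps some of the group's pages out,
  # usage_in_bytes drops but memsw.usage_in_bytes stays the same.
  cat /cgroups/0/memory.usage_in_bytes
  cat /cgroups/0/memory.memsw.usage_in_bytes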
@@ -246,6 +266,11 @@ Such charges are freed(at default) or moved to its parent. When moved,
 both of RSS and CACHES are moved to parent.
 If both of them are busy, rmdir() returns -EBUSY. See 5.1 Also.
 
+Charges recorded in swap information is not updated at removal of cgroup.
+Recorded information is discarded and a cgroup which uses swap (swapcache)
+will be charged as a new owner of it.
+
+
 5. Misc. interfaces.
 
 5.1 force_empty
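The last hunk's rmdir() behavior, seen from a shell (group path is
illustrative, and $PID stands for a hypothetical task still in the group):

  # rmdir() fails with -EBUSY while RSS or cache charges remain busy.
  rmdir /cgroups/0

  # Move the remaining task to the root group; leftover charges are then
  # freed or moved to the parent, after which removal succeeds. Swap
  # records owned by the group are simply discarded, and whichever cgroup
  # next uses that swap (swapcache) is charged as the new owner.
  echo $PID > /cgroups/tasks
  rmdir /cgroups/0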