diff --git a/Documentation/controllers/memcg_test.txt b/Documentation/controllers/memcg_test.txt
new file mode 100644
index 000000000000..c91f69b0b549
--- /dev/null
+++ b/Documentation/controllers/memcg_test.txt
@@ -0,0 +1,311 @@
Memory Resource Controller (Memcg) Implementation Memo.
Last Updated: 2008/12/10
Base Kernel Version: based on 2.6.28-rc7-mm.

Because the VM is getting complex (one of the reasons being memcg itself),
memcg's behavior is complex as well. This document describes memcg's internal
behavior. Please note that implementation details can change.

(*) Topics on the user API belong in Documentation/controllers/memory.txt.

0. How to record usage?
    Two objects are used.

    page_cgroup ... an object per page.
        Allocated at boot or at memory hotplug. Freed at memory hot-removal.

    swap_cgroup ... an entry per swp_entry.
        Allocated at swapon(). Freed at swapoff().

    The page_cgroup has a USED bit, so a page is never double-counted against
    its page_cgroup. swap_cgroup is used only when a charged page is
    swapped out.

1. Charge

    A page/swp_entry may be charged (usage += PAGE_SIZE) at

    mem_cgroup_newpage_charge()
        Called at new page fault and at Copy-On-Write.

    mem_cgroup_try_charge_swapin()
        Called at do_swap_page() (page fault on a swap entry) and at swapoff.
        Followed by the charge-commit-cancel protocol. (With swap accounting,
        the charge recorded in swap_cgroup is removed at commit.)

    mem_cgroup_cache_charge()
        Called at add_to_page_cache().

    mem_cgroup_cache_charge_swapin()
        Called at shmem's swap-in.

    mem_cgroup_prepare_migration()
        Called before migration. An "extra" charge is taken and followed by
        the charge-commit-cancel protocol.
        At commit, the charge against either the old page or the new page is
        committed.

2. Uncharge
    A page/swp_entry may be uncharged (usage -= PAGE_SIZE) by

    mem_cgroup_uncharge_page()
        Called when an anonymous page is fully unmapped, i.e., its mapcount
        goes to 0. If the page is SwapCache, the uncharge is delayed until
        mem_cgroup_uncharge_swapcache().

    mem_cgroup_uncharge_cache_page()
        Called when a page-cache page is deleted from the radix-tree. If the
        page is SwapCache, the uncharge is delayed until
        mem_cgroup_uncharge_swapcache().

    mem_cgroup_uncharge_swapcache()
        Called when SwapCache is removed from the radix-tree. The charge
        itself is moved to swap_cgroup. (If the mem+swap controller is
        disabled, no charge to swap occurs.)

    mem_cgroup_uncharge_swap()
        Called when a swp_entry's refcnt goes down to 0. The charge against
        swap disappears.

    mem_cgroup_end_migration(old, new)
        On successful migration, the old page is uncharged (if necessary) and
        the charge to the new page is committed. On failure, the charge to
        the old page is committed.

3. charge-commit-cancel
    In some cases, we cannot know whether a "charge" is valid at the time of
    charging (because of races). To handle such cases, there are
    charge-commit-cancel functions:
        mem_cgroup_try_charge_XXX
        mem_cgroup_commit_charge_XXX
        mem_cgroup_cancel_charge_XXX
    These are used in swap-in and migration.

    At try_charge(), there is not yet any flag saying "this page is charged";
    at this point, usage += PAGE_SIZE.

    At commit(), the function checks whether the page should be charged and
    either sets the flag or backs the charge out (usage -= PAGE_SIZE).

    At cancel(), simply usage -= PAGE_SIZE.

In the explanation below, we assume CONFIG_CGROUP_MEM_RES_CTLR_SWAP=y
(mem+swap accounting enabled).

4. Anonymous
    An anonymous page is newly allocated at
        - a page fault into a MAP_ANONYMOUS mapping.
        - Copy-On-Write.
    It is charged right after allocation, before any page-table related
    operations are done. Of course, it is uncharged if another page ends up
    being used for the fault address.

    When anonymous pages are freed (by exit() or munmap()), zap_pte() is
    called and the pages behind the ptes are freed one by one (see
    mm/memory.c). Uncharges are done at page_remove_rmap() when
    page_mapcount() goes down to 0.

    Pages are also freed by page reclaim (vmscan.c), where anonymous pages
    are swapped out. In this case, the page is marked as PageSwapCache().
    The uncharge() routine doesn't uncharge a page marked as SwapCache; the
    uncharge is delayed until __delete_from_swap_cache().

4.1 Swap-in.
    At swap-in, the page is taken from the swap cache. There are 2 cases.

    (a) If the SwapCache page is newly allocated and read, it has no charge.
    (b) If the SwapCache page has already been mapped by some process, it has
        already been charged.

    In case (a), we charge it. In case (b), we don't charge it.
    (A racy state between (a) and (b) exists; we check for it.)
    At charging, a charge recorded in swap_cgroup is moved to page_cgroup.

4.2 Swap-out.
    At swap-out, the typical state transition is as follows.

    (a) add to swap cache. (marked as SwapCache)
        swp_entry's refcnt += 1.
    (b) fully unmapped.
        swp_entry's refcnt += # of ptes.
    (c) write back to swap.
    (d) delete from swap cache. (removed from SwapCache)
        swp_entry's refcnt -= 1.

    At (b), the page is marked as SwapCache and is not uncharged.
    At (d), the page is removed from SwapCache and the charge in page_cgroup
    is moved to swap_cgroup.

    Finally, at task exit,
    (e) zap_pte() is called and the swp_entry's refcnt drops to 0.
    Here, the charge in swap_cgroup disappears.

5. Page Cache
    A page-cache page is charged at
    - add_to_page_cache_locked().

    and uncharged at
    - __remove_from_page_cache().

    The logic is very clear. (For migration, see below.)
    Note: __remove_from_page_cache() is called by remove_from_page_cache()
    and by __remove_mapping().

6. Shmem(tmpfs) Page Cache
    Memcg's charge/uncharge have special handlers for shmem. The best way to
    understand shmem's page state transitions is to read mm/shmem.c, but a
    brief explanation of memcg's behavior around shmem helps in understanding
    the logic.

    A shmem page (just a leaf page, not a direct/indirect block) can be on
        - the radix-tree of shmem's inode.
        - SwapCache.
        - both the radix-tree and SwapCache. This happens at swap-in and
          swap-out.

    It is charged when
    - a new page is added to shmem's radix-tree.
    - a swapped-out page is read back in. (a charge is moved from swap_cgroup
      to page_cgroup)
    It is uncharged when
    - a page is removed from the radix-tree and is not SwapCache.
    - SwapCache is removed; then the charge is moved to swap_cgroup.
    - the swp_entry's refcnt goes down to 0; then the charge in swap_cgroup
      disappears.

7. Page Migration
    One of the most complicated functions is the page-migration handler.
    Memcg has 2 routines for it. Assume that we are migrating a page's
    contents from OLDPAGE to NEWPAGE.

    The usual migration logic is:
    (a) remove the page from LRU.
    (b) allocate NEWPAGE (migration target).
    (c) lock by lock_page().
    (d) unmap all mappings.
    (e-1) if necessary, replace the entry in the radix-tree.
    (e-2) move the contents of the page.
    (f) map all mappings again.
    (g) push the page back to the LRU.
    (-) OLDPAGE will be freed.

    Before (g), memcg should complete all necessary charges/uncharges to
    NEWPAGE/OLDPAGE.

    The point is:
    - If OLDPAGE is anonymous, all charges will be dropped at (d) because
      try_to_unmap() drops all mapcounts and the page will not be
      SwapCache.

    - If OLDPAGE is SwapCache, charges will be kept until (g) because
      __delete_from_swap_cache() isn't called at (e-1).

    - If OLDPAGE is page-cache, charges will be kept until (g) because
      __remove_from_page_cache() isn't called at (e-1).

    memcg provides the following hooks.

    - mem_cgroup_prepare_migration(OLDPAGE)
      Called after (b) to account a charge (usage += PAGE_SIZE) against the
      memcg to which OLDPAGE belongs.

    - mem_cgroup_end_migration(OLDPAGE, NEWPAGE)
      Called after (f), before (g).
      If OLDPAGE is still in use, commit OLDPAGE again. If OLDPAGE is already
      charged, the charge taken by prepare_migration() is automatically
      canceled.
      If NEWPAGE is in use, commit NEWPAGE and uncharge OLDPAGE.

      But zap_pte() (by exit or munmap) can be called during migration, so
      we have to check whether OLDPAGE/NEWPAGE is still a valid page after
      commit().

8. LRU
    Each memcg has its own private LRU. For now, its handling is under the
    global VM's control (meaning it is handled under the global
    zone->lru_lock). Almost all routines around memcg's LRU are called by the
    global LRU's list management functions under zone->lru_lock.

    A special function is mem_cgroup_isolate_pages(). This scans the memcg's
    private LRU and calls __isolate_lru_page() to extract a page from the
    LRU. (Via __isolate_lru_page(), the page is removed from both the global
    and the private LRU.)


9. Typical Tests.

    Tests for racy cases.

9.1 Small limit to memcg.
    When testing racy cases, it is better to set memcg's limit very small,
    rather than in GB. Many races were found in tests under limits of a few
    KB or tens of MB.
    (Memory behavior under a GB-sized limit and under an MB-sized limit shows
    a very different situation.)
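
    For example (a minimal sketch; the mount point, group name and 4M limit
    are just assumptions, pick whatever suits your setup):

    --
    #!/bin/sh
    # Run a page-cache-heavy workload under a deliberately tiny limit.
    mkdir -p /opt/cgroup/memory
    mount -t cgroup -o memory none /opt/cgroup/memory 2>/dev/null
    mkdir /opt/cgroup/memory/tiny
    echo 4M > /opt/cgroup/memory/tiny/memory.limit_in_bytes
    echo $$ > /opt/cgroup/memory/tiny/tasks
    # Any memory-hungry job will do; dd through the page cache is an easy one.
    dd if=/dev/zero of=/tmp/memcg-test-file bs=1M count=64
    rm -f /tmp/memcg-test-file
    --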

9.2 Shmem
    Historically, memcg's shmem handling was poor and we saw a fair number of
    troubles here. This is because shmem is page cache but can also be
    SwapCache. Testing with shmem/tmpfs is always a good test.
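
    For example (again only a sketch; paths and sizes are assumptions, and an
    active swap device is needed for the SwapCache paths to be exercised):

    --
    #!/bin/sh
    # Exercise shmem pages under a small memcg limit so they get pushed to
    # swap and pulled back in (radix-tree <-> SwapCache transitions).
    mkdir /opt/cgroup/memory/shmem-test
    echo 8M > /opt/cgroup/memory/shmem-test/memory.limit_in_bytes
    echo $$ > /opt/cgroup/memory/shmem-test/tasks

    mkdir -p /mnt/tmpfs-test
    mount -t tmpfs -o size=64M none /mnt/tmpfs-test
    dd if=/dev/zero of=/mnt/tmpfs-test/file bs=1M count=32  # charge shmem pages
    cat /mnt/tmpfs-test/file > /dev/null                    # swap some back in
    umount /mnt/tmpfs-test
    --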

9.3 Migration
    For NUMA, migration is another special case. For easy testing, cpuset is
    useful. The following is a sample setup for migration.

    mount -t cgroup -o cpuset none /opt/cpuset

    mkdir /opt/cpuset/01
    echo 1 > /opt/cpuset/01/cpuset.cpus
    echo 0 > /opt/cpuset/01/cpuset.mems
    echo 1 > /opt/cpuset/01/cpuset.memory_migrate
    mkdir /opt/cpuset/02
    echo 1 > /opt/cpuset/02/cpuset.cpus
    echo 1 > /opt/cpuset/02/cpuset.mems
    echo 1 > /opt/cpuset/02/cpuset.memory_migrate

    With the setup above, when you move a task from 01 to 02, page migration
    from node 0 to node 1 will occur. The following is a script to migrate
    all tasks under a cpuset.
    --
    move_task()
    {
        for pid in $1
        do
            /bin/echo $pid > $2/tasks 2>/dev/null
            echo -n "$pid "
        done
        echo END
    }

    # G1/G2 are the source and destination cpusets created above.
    G1=/opt/cpuset/01
    G2=/opt/cpuset/02
    G1_TASK=`cat ${G1}/tasks`
    G2_TASK=`cat ${G2}/tasks`
    move_task "${G1_TASK}" ${G2} &
    --

9.4 Memory hotplug.
    The memory hotplug test is another good test.
    To offline memory, do the following:
    # echo offline > /sys/devices/system/memory/memoryXXX/state
    (XXX is the number of the memory block)
    This is an easy way to test page migration, too; see the sketch below.
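
    A simple loop over all memory blocks (a sketch; many blocks will refuse
    to offline, which is expected and ignored here):

    --
    #!/bin/sh
    # Repeatedly offline and re-online every memory block that allows it,
    # forcing migration of any memcg-charged pages on those blocks.
    for mem in /sys/devices/system/memory/memory*
    do
        echo offline > $mem/state 2>/dev/null
        echo online  > $mem/state 2>/dev/null
    done
    --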

9.5 mkdir/rmdir
    When using hierarchy, a mkdir/rmdir test should be done.
    Use tests like the following.

    echo 1 > /opt/cgroup/01/memory.use_hierarchy
    mkdir /opt/cgroup/01/child_a
    mkdir /opt/cgroup/01/child_b

    set a limit on 01.
    add a limit on 01/child_b.
    run jobs under child_a and child_b.

    Create/delete the following groups at random while the jobs are running
    (see the sketch after this list):
    /opt/cgroup/01/child_a/child_aa
    /opt/cgroup/01/child_b/child_bb
    /opt/cgroup/01/child_c

    Running new jobs in the new groups is also good.
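
    One way to script the random create/delete (a sketch; the paths match the
    layout above):

    --
    #!/bin/bash
    # Randomly create and remove child groups while other jobs keep running.
    while true
    do
        for g in /opt/cgroup/01/child_a/child_aa \
                 /opt/cgroup/01/child_b/child_bb \
                 /opt/cgroup/01/child_c
        do
            if [ $((RANDOM % 2)) -eq 0 ]
            then
                mkdir $g 2>/dev/null
            else
                rmdir $g 2>/dev/null
            fi
        done
        sleep 1
    done
    --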

9.6 Mount with other subsystems.
    Mounting together with other subsystems is a good test because there are
    races and lock dependencies with the other cgroup subsystems.

    example)
    # mount -t cgroup -o cpuset,memory,cpu,devices none /cgroup

    and do task moves, mkdir, rmdir, etc. under this mount; one possible loop
    is sketched below.
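
    For example (a sketch; the group names A and B are arbitrary, and the
    cpuset files must be populated before tasks can be attached):

    --
    #!/bin/sh
    # In a co-mounted hierarchy every mkdir/rmdir/task-move goes through all
    # four subsystems at once, which is what exposes lock dependencies.
    mkdir /cgroup/A /cgroup/B
    for g in /cgroup/A /cgroup/B
    do
        cat /cgroup/cpuset.cpus > $g/cpuset.cpus
        cat /cgroup/cpuset.mems > $g/cpuset.mems
    done
    while true
    do
        echo $$ > /cgroup/A/tasks
        echo $$ > /cgroup/B/tasks
        mkdir /cgroup/A/sub 2>/dev/null
        rmdir /cgroup/A/sub 2>/dev/null
    done
    --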