author	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>	2009-01-07 21:08:35 -0500
committer	Linus Torvalds <torvalds@linux-foundation.org>	2009-01-08 11:31:10 -0500
commit	b5a84319a4343a0db753436fd8147e61eaafa7ea (patch)
tree	5faae671b431b50a32a2d8c7a57cc9361d8f336d /include/linux/swap.h
parent	544122e5e0ee27d5aac4a441f7746712afbf248c (diff)
memcg: fix shmem's swap accounting
Currently, the following can be observed even when swap accounting is enabled:

1. Create groups 01 and 02.
2. Allocate a "file" on tmpfs from a task under group 01.
3. Swap out the "file" (by memory pressure).
4. Read the "file" from a task in group 02.
5. The charge for the "file" is moved to group 02.

This is not ideal behavior. It happens because SwapCache pages loaded
by read-ahead are not taken into account.
This patch fixes shmem's swapcache behavior:

- Remove mem_cgroup_cache_charge_swapin().
- Add a SwapCache handler routine to mem_cgroup_cache_charge().
  With this, shmem's file cache is charged at add_to_page_cache()
  with GFP_NOWAIT.
- Pass the swapcache page to shrink_mem_cgroup.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Paul Menage <menage@google.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include/linux/swap.h')
-rw-r--r--	include/linux/swap.h	8
1 file changed, 0 insertions(+), 8 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 4ccca25d0f05..d30215578877 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -335,16 +335,8 @@ static inline void disable_swap_token(void)
 }
 
 #ifdef CONFIG_CGROUP_MEM_RES_CTLR
-extern int mem_cgroup_cache_charge_swapin(struct page *page,
-			struct mm_struct *mm, gfp_t mask, bool locked);
 extern void mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent);
 #else
-static inline
-int mem_cgroup_cache_charge_swapin(struct page *page,
-			struct mm_struct *mm, gfp_t mask, bool locked)
-{
-	return 0;
-}
 static inline void
 mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent)
 {