author		Hugh Dickins <hughd@google.com>			2012-05-29 18:06:38 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2012-05-29 19:22:22 -0400
commit		bde05d1ccd512696b09db9dd2e5f33ad19152605 (patch)
tree		affa2c836136cac6ec0e503ce8996670d385ebbb /mm/memcontrol.c
parent		5ceb9ce6fe9462a298bb2cd5c9f1ca6cb80a0199 (diff)
shmem: replace page if mapping excludes its zone
The GMA500 GPU driver uses GEM shmem objects, but with a new twist: the
backing RAM has to be below 4GB. That was not a problem while the boards
supported only 4GB; but now Intel's D2700MUD boards support 8GB, and
their GMA3600 is managed by the GMA500 driver.
shmem/tmpfs has never pretended to support hardware restrictions on the
backing memory, but it might have appeared to do so before v3.1, and
even now it works fine until a page is swapped out then back in. When
read_cache_page_gfp() supplied a freshly allocated page for copy, that
compensated for whatever choice might have been made by earlier swapin
readahead; but swapoff was likely to destroy the illusion.
We'd like to continue to support GMA500, so now add a new
shmem_should_replace_page() check on the zone when about to move a page
from swapcache to filecache (in swapin and swapoff cases), with
shmem_replace_page() to allocate and substitute a suitable page (given
gma500/gem.c's mapping_set_gfp_mask GFP_KERNEL | __GFP_DMA32).
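For reference, the zone test itself is tiny; a sketch of the pair (the
driver line is what gma500/gem.c does, and the check just compares the
page's zone against the highest zone the mapping's gfp mask allows):

	/* driver side: restrict shmem backing store to DMA32-capable RAM */
	mapping_set_gfp_mask(inode->i_mapping, GFP_KERNEL | __GFP_DMA32);

	/* shmem side: does this swapcache page sit above the allowed zones? */
	static bool shmem_should_replace_page(struct page *page, gfp_t gfp)
	{
		return page_zonenum(page) > gfp_zone(gfp);
	}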
This does involve a minor extension to mem_cgroup_replace_page_cache()
(the page may or may not have already been charged); and I've removed a
comment and call to mem_cgroup_uncharge_cache_page(), which in fact is
always a no-op while PageSwapCache.
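To show where that lands, a condensed sketch of the shmem_replace_page()
sequence (page-flag setup and error handling omitted): the new page takes
over the old page's swapcache slot first, and only then does memcg move
across whatever charge, if any, the old page carried.

	oldpage = *pagep;
	swap_index = page_private(oldpage);
	swap_mapping = page_mapping(oldpage);

	newpage = shmem_alloc_page(gfp, info, index);	/* suitably zoned */
	if (!newpage)
		return -ENOMEM;
	copy_highpage(newpage, oldpage);
	/* ... mark newpage locked, uptodate, swapbacked and swapcache ... */

	spin_lock_irq(&swap_mapping->tree_lock);
	error = shmem_radix_tree_replace(swap_mapping, swap_index,
					 oldpage, newpage);
	spin_unlock_irq(&swap_mapping->tree_lock);

	mem_cgroup_replace_page_cache(oldpage, newpage);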
Also removed the optimization of an unlikely path in shmem_getpage_gfp(),
now that we need to check PageSwapCache more carefully (a racing caller
might already have made the copy). And at one point shmem_unuse_inode()
needs to use the hitherto private page_swapcount(), to guard against
racing with inode eviction.
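That guard is roughly this shape (a sketch of the swapoff path):
shmem_swaplist_mutex has to be dropped for the restrictive allocation,
after which the inode may already be freed, so the only thing still safe
to inspect is the locked swapcache page itself.

	if (shmem_should_replace_page(*pagep, gfp)) {
		mutex_unlock(&shmem_swaplist_mutex);
		error = shmem_replace_page(pagep, gfp, info, index);
		mutex_lock(&shmem_swaplist_mutex);
		/*
		 * Cannot refer to inode, mapping or info here: the inode
		 * might be gone.  But we hold the page lock on the
		 * PageSwapCache page, so page_swapcount() tells whether
		 * our swap entry is still alive.
		 */
		if (!page_swapcount(*pagep))
			error = -ENOENT;
	}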
It would make sense to extend shmem_should_replace_page() to cover
cpuset and NUMA mempolicy restrictions too, but that is set aside for
now: it needs a cleanup of shmem mempolicy handling and more testing,
and it ought to handle swap faults in do_swap_page() as well as in shmem.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Christoph Hellwig <hch@infradead.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Stephane Marchesin <marcheu@chromium.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Dave Airlie <airlied@gmail.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Rob Clark <rob.clark@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/memcontrol.c')
-rw-r--r--	mm/memcontrol.c	17
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 4f71219cc53e..d7ce417cae7c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3373,7 +3373,7 @@ void mem_cgroup_end_migration(struct mem_cgroup *memcg,
 void mem_cgroup_replace_page_cache(struct page *oldpage,
 				  struct page *newpage)
 {
-	struct mem_cgroup *memcg;
+	struct mem_cgroup *memcg = NULL;
 	struct page_cgroup *pc;
 	enum charge_type type = MEM_CGROUP_CHARGE_TYPE_CACHE;
 
@@ -3383,11 +3383,20 @@ void mem_cgroup_replace_page_cache(struct page *oldpage,
 	pc = lookup_page_cgroup(oldpage);
 	/* fix accounting on old pages */
 	lock_page_cgroup(pc);
-	memcg = pc->mem_cgroup;
-	mem_cgroup_charge_statistics(memcg, false, -1);
-	ClearPageCgroupUsed(pc);
+	if (PageCgroupUsed(pc)) {
+		memcg = pc->mem_cgroup;
+		mem_cgroup_charge_statistics(memcg, false, -1);
+		ClearPageCgroupUsed(pc);
+	}
 	unlock_page_cgroup(pc);
 
+	/*
+	 * When called from shmem_replace_page(), in some cases the
+	 * oldpage has already been charged, and in some cases not.
+	 */
+	if (!memcg)
+		return;
+
 	if (PageSwapBacked(oldpage))
 		type = MEM_CGROUP_CHARGE_TYPE_SHMEM;
 