author     Vladimir Davydov <vdavydov@parallels.com>      2014-12-12 19:55:13 -0500
committer  Linus Torvalds <torvalds@linux-foundation.org> 2014-12-13 15:42:47 -0500
commit     4e701d7b37789d1aeb0015210b373912e5d30733 (patch)
tree       38257203476057e32f32de241f0d6dc2b7e11891 /mm/memcontrol.c
parent     900a38f027b37b55ebe157a0c31de351b91e4e2 (diff)
memcg: only check memcg_kmem_skip_account in __memcg_kmem_get_cache
__memcg_kmem_get_cache can recurse if it calls kmalloc (which it does when
the cgroup's kmem cache doesn't exist yet), because kmalloc may in turn call
__memcg_kmem_get_cache again. To break the recursion, we use the
task_struct->memcg_kmem_skip_account flag.
However, there is no need to check the flag in memcg_kmem_newpage_charge,
because there is no way this function can lead to recursion when called
from memcg_kmem_get_cache. So let's remove the redundant check.
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/memcontrol.c')
 mm/memcontrol.c | 28 ----------------------------
 1 file changed, 0 insertions(+), 28 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index bb8c237026cc..d9fab72da52e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2905,34 +2905,6 @@ __memcg_kmem_newpage_charge(gfp_t gfp, struct mem_cgroup **_memcg, int order)
 
 	*_memcg = NULL;
 
-	/*
-	 * Disabling accounting is only relevant for some specific memcg
-	 * internal allocations. Therefore we would initially not have such
-	 * check here, since direct calls to the page allocator that are
-	 * accounted to kmemcg (alloc_kmem_pages and friends) only happen
-	 * outside memcg core. We are mostly concerned with cache allocations,
-	 * and by having this test at memcg_kmem_get_cache, we are already able
-	 * to relay the allocation to the root cache and bypass the memcg cache
-	 * altogether.
-	 *
-	 * There is one exception, though: the SLUB allocator does not create
-	 * large order caches, but rather service large kmallocs directly from
-	 * the page allocator. Therefore, the following sequence when backed by
-	 * the SLUB allocator:
-	 *
-	 * memcg_stop_kmem_account();
-	 * kmalloc(<large_number>)
-	 * memcg_resume_kmem_account();
-	 *
-	 * would effectively ignore the fact that we should skip accounting,
-	 * since it will drive us directly to this function without passing
-	 * through the cache selector memcg_kmem_get_cache. Such large
-	 * allocations are extremely rare but can happen, for instance, for the
-	 * cache arrays. We bring this test here.
-	 */
-	if (current->memcg_kmem_skip_account)
-		return true;
-
 	memcg = get_mem_cgroup_from_mm(current->mm);
 
 	if (!memcg_kmem_is_active(memcg)) {