author	Joonsoo Kim <iamjoonsoo.kim@lge.com>	2016-10-27 20:46:18 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2016-10-27 21:43:42 -0400
commit	86d9f48534e800e4d62cdc1b5aaf539f4c1d47d6 (patch)
tree	0be877fe7dced4128efce07c15709deac322ca68 /mm/slab.c
parent	18c2152d526e7956457fcdcbdf6d77ae2c663a26 (diff)
mm/slab: fix kmemcg cache creation delayed issue
There is a bug report that SLAB causes an extreme load average due to over
2000 kworker threads:

https://bugzilla.kernel.org/show_bug.cgi?id=172981

This issue is caused by the kmemcg feature, which tries to create a new
set of kmem_caches for each memcg. Recently, kmem_cache creation has been
slowed by synchronize_sched(), and further kmem_cache creation is also
delayed, since kmem_cache creation is serialized by the global slab_mutex
lock. So the number of kworkers trying to create kmem_caches grows
quickly.

synchronize_sched() is needed for lockless access to a node's shared
array, but it is not needed when a new kmem_cache is being created, since
there is no old shared array that readers could still hold. This patch
rules out that case.
Fixes: 801faf0db894 ("mm/slab: lockless decision to grow cache")
Link: http://lkml.kernel.org/r/1475734855-4837-1-git-send-email-iamjoonsoo.kim@lge.com
Reported-by: Doug Smythies <dsmythies@telus.net>
Tested-by: Doug Smythies <dsmythies@telus.net>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/slab.c')

 mm/slab.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
@@ -966,7 +966,7 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
 	 * guaranteed to be valid until irq is re-enabled, because it will be
 	 * freed after synchronize_sched().
 	 */
-	if (force_change)
+	if (old_shared && force_change)
 		synchronize_sched();

 fail: