author:    Dave Hansen <dave.hansen@linux.intel.com>  2014-06-04 19:06:37 -0400
committer: Linus Torvalds <torvalds@linux-foundation.org>  2014-06-04 19:53:56 -0400
commit:    8eae1492675d0ffc12189f8db573624413232e15 (patch)
tree:      922b6d5b48a6bdbfc8d9d6a1a780147618fbb6e0  /mm/slub.c
parent:    9a02d699935c9acdfefe431bbc33771d1d87da7f (diff)
mm: slub: fix ALLOC_SLOWPATH stat
There used to be only one path out of __slab_alloc(), and ALLOC_SLOWPATH
got bumped in that exit path. Now there are two, and a bunch of gotos.
ALLOC_SLOWPATH can now get set more than once during a single call to
__slab_alloc() which is pretty bogus. Here's the sequence:
1. Enter __slab_alloc(), fall through all the way to the
   stat(s, ALLOC_SLOWPATH);
2. hit 'if (!freelist)', and bump DEACTIVATE_BYPASS, jump to
   new_slab (goto #1)
3. Hit 'if (c->partial)', bump CPU_PARTIAL_ALLOC, goto redo
   (goto #2)
4. Fall through in the same path we did before, all the way to
   stat(s, ALLOC_SLOWPATH) again
5. bump ALLOC_REFILL stat, then return
Doing this is obviously bogus. It keeps us from being able to
accurately compare ALLOC_SLOWPATH vs. ALLOC_FASTPATH. It also means
that the total number of allocs always exceeds the total number of
frees.
This patch moves the stat(s, ALLOC_SLOWPATH) call out of __slab_alloc()
and into its caller, so the stat is bumped exactly once per slow-path
invocation. This makes it much less likely that ALLOC_SLOWPATH will get
botched again in the spaghetti code inside __slab_alloc().
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Christoph Lameter <cl@linux.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/slub.c')
-rw-r--r--  mm/slub.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
@@ -2326,8 +2326,6 @@ redo:
 	if (freelist)
 		goto load_freelist;
 
-	stat(s, ALLOC_SLOWPATH);
-
 	freelist = get_freelist(s, page);
 
 	if (!freelist) {
@@ -2432,10 +2430,10 @@ redo:
 
 	object = c->freelist;
 	page = c->page;
-	if (unlikely(!object || !node_match(page, node)))
+	if (unlikely(!object || !node_match(page, node))) {
 		object = __slab_alloc(s, gfpflags, node, addr, c);
-
-	else {
+		stat(s, ALLOC_SLOWPATH);
+	} else {
 		void *next_object = get_freepointer_safe(s, object);
 
 		/*