mm: slub: fix ALLOC_SLOWPATH stat
There used to be only one path out of __slab_alloc(), and ALLOC_SLOWPATH
got bumped in that exit path.  Now there are two, and a bunch of gotos.
ALLOC_SLOWPATH can now get bumped more than once during a single call to
__slab_alloc(), which is pretty bogus.  Here's the sequence:

1. Enter __slab_alloc(), fall through all the way to the
   stat(s, ALLOC_SLOWPATH);
2. Hit 'if (!freelist)', bump DEACTIVATE_BYPASS, and jump to
   new_slab (goto #1)
3. Hit 'if (c->partial)', bump CPU_PARTIAL_ALLOC, goto redo
   (goto #2)
4. Fall through in the same path we did before all the way to
   stat(s, ALLOC_SLOWPATH)
5. Bump the ALLOC_REFILL stat, then return

Doing this is obviously bogus.  It keeps us from being able to
accurately compare ALLOC_SLOWPATH vs.  ALLOC_FASTPATH.  It also means
that the total number of allocs always exceeds the total number of
frees.
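
The double count is easy to reproduce in isolation.  Below is a small,
self-contained C model of the goto sequence above; the counter names mirror
the SLUB stats, but the slab machinery is replaced by simple flags, so this
is an illustration rather than kernel code:

#include <stdio.h>
#include <stdbool.h>

static unsigned int alloc_slowpath, deactivate_bypass, cpu_partial_alloc;

/* One simulated call that takes the goto path described above. */
static void *model_slab_alloc(void)
{
	static int object;		/* stand-in for a real slab object */
	bool freelist_empty = true;	/* forces the 'if (!freelist)' branch */
	bool have_cpu_partial = true;	/* forces the 'if (c->partial)' branch */

redo:
	alloc_slowpath++;		/* steps 1 and 4: bumped on every pass */

	if (freelist_empty) {
		freelist_empty = false;
		deactivate_bypass++;	/* step 2 */
		goto new_slab;		/* goto #1 */
	}
	return &object;			/* step 5: one object handed back */

new_slab:
	if (have_cpu_partial) {
		have_cpu_partial = false;
		cpu_partial_alloc++;	/* step 3 */
		goto redo;		/* goto #2: back through the slowpath stat */
	}
	return &object;
}

int main(void)
{
	model_slab_alloc();
	/* One allocation, but the slowpath counter reads 2. */
	printf("allocations=1 alloc_slowpath=%u\n", alloc_slowpath);
	return 0;
}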

This patch moves the stat(s, ALLOC_SLOWPATH) call out to the caller, the
same place that __slab_alloc() itself is called from.  This makes it much
less likely that ALLOC_SLOWPATH will get botched again in the spaghetti
code inside __slab_alloc().
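
With the counter in the caller, each slow-path entry is counted exactly
once no matter how many internal gotos it takes.  A corresponding sketch of
the fixed accounting (stand-in names, not the real slab_alloc_node()):

#include <stdio.h>

static unsigned int alloc_fastpath, alloc_slowpath;

/* Stand-in for __slab_alloc(): may retry internally, never touches stats. */
static void *slow_path(void)
{
	static int object;
	return &object;
}

/* Stand-in for slab_alloc_node(): the caller does the counting. */
static void *alloc(int freelist_empty)
{
	static int object;

	if (freelist_empty) {
		alloc_slowpath++;	/* bumped once per slow-path entry */
		return slow_path();
	}
	alloc_fastpath++;
	return &object;
}

int main(void)
{
	alloc(1);
	alloc(0);
	/* Two allocations, and fastpath + slowpath adds up to two. */
	printf("fastpath=%u slowpath=%u\n", alloc_fastpath, alloc_slowpath);
	return 0;
}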

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Christoph Lameter <cl@linux.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
hansendc authored and torvalds committed Jun 4, 2014
1 parent 9a02d69 commit 8eae149
8 changes: 3 additions & 5 deletions mm/slub.c
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2326,8 +2326,6 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	if (freelist)
 		goto load_freelist;
 
-	stat(s, ALLOC_SLOWPATH);
-
 	freelist = get_freelist(s, page);
 
 	if (!freelist) {
@@ -2432,10 +2430,10 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 
 	object = c->freelist;
 	page = c->page;
-	if (unlikely(!object || !node_match(page, node)))
+	if (unlikely(!object || !node_match(page, node))) {
 		object = __slab_alloc(s, gfpflags, node, addr, c);
-
-	else {
+		stat(s, ALLOC_SLOWPATH);
+	} else {
 		void *next_object = get_freepointer_safe(s, object);
 
 		/*
