Commit 9f986d99 authored by Abel Wu, committed by Linus Torvalds

mm/slub: fix missing ALLOC_SLOWPATH stat when bulk alloc

The ALLOC_SLOWPATH statistic is currently missing for bulk allocations.  Fix it
by accounting the statistic in the allocation slow path itself.
Signed-off-by: Abel Wu <wuyun.wu@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Hewenliang <hewenliang4@huawei.com>
Cc: Hu Shiyuan <hushiyuan@huawei.com>
Link: http://lkml.kernel.org/r/20200811022427.1363-1-wuyun.wu@huawei.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent c270cf30
@@ -2661,6 +2661,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	void *freelist;
 	struct page *page;
 
+	stat(s, ALLOC_SLOWPATH);
+
 	page = c->page;
 	if (!page) {
 		/*
@@ -2850,7 +2852,6 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	object = c->freelist;
 	page = c->page;
 	if (unlikely(!object || !node_match(page, node))) {
 		object = __slab_alloc(s, gfpflags, node, addr, c);
-		stat(s, ALLOC_SLOWPATH);
 	} else {
 		void *next_object = get_freepointer_safe(s, object);