Commit de55c8b2 authored by Andrey Ryabinin, committed by Linus Torvalds

mm/mempolicy: fix NUMA_INTERLEAVE_HIT counter

Commit 3a321d2a ("mm: change the call sites of numa statistics
items") separated NUMA counters from zone counters, but the
NUMA_INTERLEAVE_HIT call site wasn't updated to use the new interface.
As a result, alloc_page_interleave() still passes the NUMA item to the
zone-stat helper, where its numeric value is interpreted as a zone stat
index, so it actually increments NR_ZONE_INACTIVE_FILE instead of
NUMA_INTERLEAVE_HIT.

Fix this by using the __inc_numa_state() interface to increment
NUMA_INTERLEAVE_HIT.

Link: http://lkml.kernel.org/r/20171003191003.8573-1-aryabinin@virtuozzo.com
Fixes: 3a321d2a ("mm: change the call sites of numa statistics items")
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Kemi Wang <kemi.wang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 8a1ac5dc
@@ -1920,8 +1920,11 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
 	struct page *page;
 
 	page = __alloc_pages(gfp, order, nid);
-	if (page && page_to_nid(page) == nid)
-		inc_zone_page_state(page, NUMA_INTERLEAVE_HIT);
+	if (page && page_to_nid(page) == nid) {
+		preempt_disable();
+		__inc_numa_state(page_zone(page), NUMA_INTERLEAVE_HIT);
+		preempt_enable();
+	}
 	return page;
 }