Commit 998d39cb authored by Cody P Schafer, committed by Linus Torvalds

mm/page_alloc: protect pcp->batch accesses with ACCESS_ONCE

pcp->batch could change at any point; avoid relying on it being a stable
value.
Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 8d7a8fa9
@@ -1182,10 +1182,12 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
 {
 	unsigned long flags;
 	int to_drain;
+	unsigned long batch;
 
 	local_irq_save(flags);
-	if (pcp->count >= pcp->batch)
-		to_drain = pcp->batch;
+	batch = ACCESS_ONCE(pcp->batch);
+	if (pcp->count >= batch)
+		to_drain = batch;
 	else
 		to_drain = pcp->count;
 	if (to_drain > 0) {
@@ -1353,8 +1355,9 @@ void free_hot_cold_page(struct page *page, int cold)
 	list_add(&page->lru, &pcp->lists[migratetype]);
 	pcp->count++;
 	if (pcp->count >= pcp->high) {
-		free_pcppages_bulk(zone, pcp->batch, pcp);
-		pcp->count -= pcp->batch;
+		unsigned long batch = ACCESS_ONCE(pcp->batch);
+		free_pcppages_bulk(zone, batch, pcp);
+		pcp->count -= batch;
 	}
 
 out: