Commit 90b3b976 authored by Andrew Morton, committed by Jaroslav Kysela

[PATCH] Fix off-by-one in the page allocator

From Hugh.

Be consistent in deciding when we are below the zone allocation
thresholds.
parent 36aed1f9
@@ -464,7 +464,7 @@ __alloc_pages(unsigned int gfp_mask, unsigned int order,
 		struct zone *z = zones[i];
 
 		min += z->pages_low;
-		if (z->free_pages > min ||
+		if (z->free_pages >= min ||
 				(!wait && z->free_pages >= z->pages_high)) {
 			page = buffered_rmqueue(z, order, cold);
 			if (page)
@@ -487,7 +487,7 @@ __alloc_pages(unsigned int gfp_mask, unsigned int order,
 		if (gfp_mask & __GFP_HIGH)
 			local_min >>= 2;
 		min += local_min;
-		if (z->free_pages > min ||
+		if (z->free_pages >= min ||
 				(!wait && z->free_pages >= z->pages_high)) {
 			page = buffered_rmqueue(z, order, cold);
 			if (page)
@@ -525,7 +525,7 @@ __alloc_pages(unsigned int gfp_mask, unsigned int order,
 		struct zone *z = zones[i];
 
 		min += z->pages_min;
-		if (z->free_pages > min ||
+		if (z->free_pages >= min ||
 				(!wait && z->free_pages >= z->pages_high)) {
 			page = buffered_rmqueue(z, order, cold);
 			if (page)
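For illustration, the following is a minimal user-space sketch (not kernel code) of the boundary case the patch addresses: a zone whose free_pages count sits exactly at the computed watermark. The fake_zone struct, the zone_ok_old/zone_ok_new helpers, and the numbers are invented for this example; only the two comparisons mirror the old and new tests in __alloc_pages.

/*
 * Minimal sketch, not kernel code: shows how a zone sitting exactly at
 * the watermark is treated before and after the patch.  Everything here
 * except the comparison operators is made up for the example.
 */
#include <stdio.h>
#include <stdbool.h>

struct fake_zone {
	unsigned long free_pages;
	unsigned long pages_low;
	unsigned long pages_high;
};

/* Old test: a zone exactly at the watermark is rejected ('>' is strict). */
static bool zone_ok_old(const struct fake_zone *z, unsigned long min, bool wait)
{
	return z->free_pages > min ||
		(!wait && z->free_pages >= z->pages_high);
}

/* New test: 'at the watermark' counts as enough, consistent with the
 * '>= pages_high' comparison already used on the next line. */
static bool zone_ok_new(const struct fake_zone *z, unsigned long min, bool wait)
{
	return z->free_pages >= min ||
		(!wait && z->free_pages >= z->pages_high);
}

int main(void)
{
	struct fake_zone z = { .free_pages = 64, .pages_low = 64, .pages_high = 96 };
	unsigned long min = z.pages_low;	/* exactly at the threshold */

	/* Prints "old: 0  new: 1": only the fixed test accepts the zone. */
	printf("old: %d  new: %d\n",
	       zone_ok_old(&z, min, true),
	       zone_ok_new(&z, min, true));
	return 0;
}

With free_pages equal to min, the old test falls through to the slower allocation paths even though the pages_high comparison on the following line already treats "exactly at the watermark" as sufficient; changing all three call sites to ">=" handles the boundary case consistently.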