Commit bf3f607a authored by Andrew Morton, committed by Russell King

[PATCH] separation of direct-reclaim and kswapd functions

There is some lack of clarity in what kswapd does and what
direct-reclaim tasks do; try_to_free_pages() tries to serve both
roles, and they are different.

- kswapd's role is to keep all zones on its node at

	zone->free_pages >= zone->pages_high.

  and never to stop while any zone fails to meet that condition.

- A direct reclaimer's role is to try to free some pages from the
  zones which are suitable for this particular allocation request, and
  to return when that has been achieved, or when all the relevant zones
  are at

	zone->free_pages >= zone->pages_high.

The patch explicitly separates these two code paths; kswapd does not
run try_to_free_pages() any more.  kswapd should not be aware of zone
fallbacks.
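
The two loops can be sketched as below.  This is a simplified
userspace illustration only, not the real mm/vmscan.c code: the zone
fields are pared down to the two watermark members, and shrink_zone(),
kswapd_balance_node() and direct_reclaim() are made-up stand-ins whose
only purpose is to show the two different termination conditions.

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	struct zone {
		unsigned long free_pages;
		unsigned long pages_high;
	};

	/* Stand-in for real reclaim work against a single zone. */
	static void shrink_zone(struct zone *z)
	{
		z->free_pages += 32;	/* pretend some pages were freed */
	}

	/*
	 * kswapd's loop: keep scanning every zone on its node and do not
	 * stop until all of them satisfy free_pages >= pages_high.  No
	 * knowledge of zone fallback lists is needed here.
	 */
	static void kswapd_balance_node(struct zone *node_zones[], size_t nr_zones)
	{
		bool all_high;

		do {
			all_high = true;
			for (size_t i = 0; i < nr_zones; i++) {
				if (node_zones[i]->free_pages < node_zones[i]->pages_high) {
					shrink_zone(node_zones[i]);
					all_high = false;
				}
			}
		} while (!all_high);
	}

	/*
	 * A direct reclaimer's loop: walk only the zones suitable for this
	 * allocation (the caller's NULL-terminated fallback list) and
	 * return as soon as some pages have been freed, or when every
	 * relevant zone is already at pages_high.
	 */
	static bool direct_reclaim(struct zone *zones[])
	{
		for (size_t i = 0; zones[i] != NULL; i++) {
			if (zones[i]->free_pages < zones[i]->pages_high) {
				shrink_zone(zones[i]);
				return true;	/* made progress for this request */
			}
		}
		return false;	/* all relevant zones already at pages_high */
	}

	int main(void)
	{
		struct zone dma  = { .free_pages = 10,  .pages_high = 64 };
		struct zone norm = { .free_pages = 200, .pages_high = 128 };
		struct zone *node_zones[] = { &dma, &norm };
		struct zone *fallback[]   = { &norm, &dma, NULL };

		kswapd_balance_node(node_zones, 2);
		printf("after kswapd: dma=%lu normal=%lu\n",
		       dma.free_pages, norm.free_pages);
		printf("direct reclaim made progress: %d\n",
		       direct_reclaim(fallback));
		return 0;
	}

The point of the separation is the termination test: kswapd's loop is
bounded only by the node-wide pages_high condition, while the direct
reclaimer is bounded by the specific fallback list it was handed and
may return as soon as it has made progress.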
parent fe66ad33
@@ -62,7 +62,6 @@ struct zone {
 	spinlock_t lock;
 	unsigned long free_pages;
 	unsigned long pages_min, pages_low, pages_high;
-	int need_balance;
 	ZONE_PADDING(_pad1_)
@@ -346,8 +346,6 @@ __alloc_pages(unsigned int gfp_mask, unsigned int order,
 		}
 	}
-	classzone->need_balance = 1;
-	mb();
 	/* we're somewhat low on memory, failed to find what we needed */
 	for (i = 0; zones[i] != NULL; i++) {
 		struct zone *z = zones[i];
@@ -873,7 +871,6 @@ void __init free_area_init_core(pg_data_t *pgdat,
 		spin_lock_init(&zone->lru_lock);
 		zone->zone_pgdat = pgdat;
 		zone->free_pages = 0;
-		zone->need_balance = 0;
 		INIT_LIST_HEAD(&zone->active_list);
 		INIT_LIST_HEAD(&zone->inactive_list);
 		atomic_set(&zone->refill_counter, 0);