Commit 2a16e3f4 authored by Christoph Lameter, committed by Linus Torvalds

[PATCH] Reclaim slab during zone reclaim

If large amounts of zone memory are used by empty slabs then zone_reclaim
becomes ineffective.  This patch shakes the slab a bit.

The problem with this patch is that slab reclaim cannot be confined to a
single zone.  Thus slab reclaim may affect the whole system and be extremely
slow.  This also means that we cannot determine how many pages were freed in
this zone, so we need to go off node for at least one allocation.

The functionality is disabled by default.

We could modify the shrinkers to take a zone parameter but that would be quite
invasive.  Better ideas are welcome.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 1b2ffb78
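
As an illustration only (not part of the patch): with the bit values introduced
here, the new slab pass is enabled by writing 9, i.e. zone reclaim (1) ORed with
the global slab reclaim pass (8), to the zone_reclaim_mode sysctl documented in
the hunk below.  The minimal sketch assumes a NUMA kernel carrying this patch
and root privileges; from a shell, echo 9 > /proc/sys/vm/zone_reclaim_mode does
the same.

/*
 * Illustrative sketch, not from the patch: turn on zone reclaim
 * together with the new global slab reclaim pass by writing
 * 1 | 8 = 9 to the sysctl.  Requires root and a NUMA kernel with
 * this patch applied.
 */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/sys/vm/zone_reclaim_mode", "w");

        if (!f) {
                perror("zone_reclaim_mode");
                return 1;
        }
        fprintf(f, "%d\n", 1 | 8);      /* RECLAIM_ZONE | RECLAIM_SLAB */
        return fclose(f) != 0;
}
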
@@ -137,6 +137,7 @@ This is value ORed together of
1 = Zone reclaim on
2 = Zone reclaim writes dirty pages out
4 = Zone reclaim swaps pages
8 = Also do a global slab reclaim pass
zone_reclaim_mode is set during bootup to 1 if it is determined that pages
from remote zones will cause a measurable performance reduction. The
@@ -160,6 +161,11 @@ Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.
It may be advisable to allow slab reclaim if the system makes heavy
use of files and builds up large slab caches. However, the slab
shrink operation is global, may take a long time, and will free slabs
on all nodes of the system.
================================================================
zone_reclaim_interval:
...
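
Related to the advice above about large slab caches: whether the slab actually
occupies a significant amount of memory can be estimated from /proc/slabinfo
before turning the bit on.  The sketch below is an illustration, not part of
the patch; it assumes the "slabinfo - version: 2.x" field layout (name,
active_objs, num_objs, objsize, objperslab, pagesperslab, then a slabdata
section) and that the file is readable, which may require root.  The printed
total is in pages, so multiply by the page size for bytes.

/*
 * Illustrative sketch, not from the patch: rough estimate of total
 * slab pages, summed over all caches listed in /proc/slabinfo.
 * Assumes the slabinfo 2.x layout; other versions may differ.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
        FILE *f = fopen("/proc/slabinfo", "r");
        char line[512];
        unsigned long total_pages = 0;

        if (!f) {
                perror("slabinfo");
                return 1;
        }
        while (fgets(line, sizeof(line), f)) {
                char name[64];
                unsigned long active_objs, num_objs, objsize;
                unsigned long objperslab, pagesperslab;
                unsigned long active_slabs, num_slabs;
                char *sd;

                /* skip the version line and the "# name ..." header */
                if (line[0] == '#' || strncmp(line, "slabinfo", 8) == 0)
                        continue;
                if (sscanf(line, "%63s %lu %lu %lu %lu %lu", name,
                           &active_objs, &num_objs, &objsize,
                           &objperslab, &pagesperslab) != 6)
                        continue;
                sd = strstr(line, "slabdata");
                if (sd && sscanf(sd, "slabdata %lu %lu",
                                 &active_slabs, &num_slabs) == 2)
                        total_pages += num_slabs * pagesperslab;
        }
        fclose(f);
        printf("slab pages: %lu\n", total_pages);
        return 0;
}
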
@@ -1596,6 +1596,7 @@ int zone_reclaim_mode __read_mostly;
#define RECLAIM_ZONE (1<<0) /* Run shrink_cache on the zone */
#define RECLAIM_WRITE (1<<1) /* Writeout pages during reclaim */
#define RECLAIM_SWAP (1<<2) /* Swap pages out during reclaim */
#define RECLAIM_SLAB (1<<3) /* Do a global slab shrink if the zone is out of memory */
/*
 * Minimum time between zone reclaim scans
@@ -1666,6 +1667,19 @@ int zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
} while (sc.nr_reclaimed < nr_pages && sc.priority > 0);
if (sc.nr_reclaimed < nr_pages && (zone_reclaim_mode & RECLAIM_SLAB)) {
/*
* shrink_slab does not currently allow us to determine
* how many pages were freed in the zone. So we just
* shake the slab and then go offnode for a single allocation.
*
* shrink_slab will free memory on all zones and may take
* a long time.
*/
shrink_slab(sc.nr_scanned, gfp_mask, order);
sc.nr_reclaimed = 1; /* Avoid getting the off node timeout */
}
p->reclaim_state = NULL;
current->flags &= ~PF_MEMALLOC;
...