Commit 8913d55b authored by Ingo Molnar, committed by Linus Torvalds

[PATCH] i386 virtual memory layout rework

  Rework the i386 mm layout to allow applications to allocate more virtual
  memory, and larger contiguous chunks.


  - the patch is compatible with existing architectures that either make
    use of HAVE_ARCH_UNMAPPED_AREA or use the default mmap() allocator - there
    is no change in behavior.

  - 64-bit architectures can use the same mechanism to clean up 32-bit
    compatibility layouts: by defining HAVE_ARCH_PICK_MMAP_LAYOUT and
    providing an arch_pick_mmap_layout() function, which can then decide
    between various mmap() layout functions (a hedged sketch of this
    follows the list below).

  - I also introduced a new personality bit (ADDR_COMPAT_LAYOUT) to signal
    older binaries that don't have PT_GNU_STACK.  x86 uses this to revert
    to the stock layout.  I also changed x86 to not clear the personality
    bits upon exec(), like x86-64 already does.

  - once every architecture that uses HAVE_ARCH_UNMAPPED_AREA has defined
    its arch_pick_mmap_layout() function, we can get rid of
    HAVE_ARCH_UNMAPPED_AREA altogether, as a final cleanup.
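
  As referenced above, here is a hedged sketch of how a 64-bit architecture
  might pick a compat layout for 32-bit tasks from arch_pick_mmap_layout();
  the is_compat_task() helper and COMPAT_TASK_UNMAPPED_BASE constant are
  assumed names for this illustration and are not part of the patch:

	/* hypothetical sketch - helper/constant names are assumptions */
	void arch_pick_mmap_layout(struct mm_struct *mm)
	{
		if (is_compat_task()) {
			/* 32-bit compat task: use a 32-bit friendly base */
			mm->mmap_base = COMPAT_TASK_UNMAPPED_BASE;
		} else {
			/* native 64-bit task: plenty of VM, keep the default */
			mm->mmap_base = TASK_UNMAPPED_BASE;
		}
		mm->get_unmapped_area = arch_get_unmapped_area;
		mm->unmap_area = arch_unmap_area;
	}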

  the new layout generation function (__get_unmapped_area()) got significant
  testing in FC1/2, so I'm pretty confident it's robust.


  It also compiles and boots fine on both an 'old' and a 'new' x86 distro.

  The two known breakages were:

     http://www.redhatconfig.com/msg/67248.html

     [ 'cyzload' third-party utility broke. ]

     http://www.zipworld.com/au/~akpm/dde.tar.gz

     [ your editor broke :-) ]

  both were caused by application bugs that did:

	int ret = malloc();

	if (ret <= 0)
		failure;

  such bugs are easy to spot if they happen, and when one does it's possible
  to work around it immediately, without having to change the binary, via the
  setarch patch.
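
  The breakage happens because the top-down layout hands out mappings from
  just below the stack, i.e. at addresses above the 2 GB mark, so a pointer
  squeezed into a signed int looks negative.  The application-side fix - a
  sketch only, not part of this patch - is to keep the pointer type and test
  for NULL:

	void *ret = malloc(size);

	if (!ret)
		failure;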

  No other application has been found to be affected, and this particular
  change has already had pretty wide coverage via RHEL3 and exec-shield,
  where it has been in use for more than a year.


  The setarch utility can be used to trigger the compatibility layout on
  x86; the following version has been patched to take the `-L' option:

 	http://people.redhat.com/mingo/flexible-mmap/setarch-1.4-2.tar.gz

  "setarch -L i386 <command>" will run the command with the old layout.

From: Hugh Dickins <hugh@veritas.com>

  The problem is in the flexible mmap patch: arch_get_unmapped_area_topdown
  is liable to give your mmap a vm_start above TASK_SIZE with vm_end wrapped,
  which is confusing, and ends up as that BUG_ON(mm->map_count).

  The patch below stops that behaviour, but it's not the full solution:
  wilson_mmap_test -s 1000 then simply cannot allocate memory for the large
  mmap, whereas it works fine non-top-down.

  I think it's wrong to interpret a large or rlim_infinite stack rlimit as
  an inviolable request to reserve that much for the stack: it makes much less
  VM available than bottom up, not what was intended.  Perhaps top down should
  go bottom up (instead of belly up) when it fails - but I'd probably better
  leave that to Ingo.

  Or perhaps the default should place the stack below the text (as WLI
  suggested and ELF intended, with its text defaulting to 0x08048000, and
  small progs sharing a page table between stack, text and data), with a
  further personality for those needing a bigger stack.

From: Ingo Molnar <mingo@elte.hu>

  - fall back to the bottom-up layout if the stack can grow unlimited (if
  the stack ulimit has been set to RLIM_INFINITY)

  - try the bottom-up allocator if the top-down allocator fails - this can
  utilize the hole between the true bottom of the stack and its ulimit, as a
  last resort.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent af6e050a
@@ -2,7 +2,7 @@
 # Makefile for the linux i386-specific parts of the memory manager.
 #
-obj-y	:= init.o pgtable.o fault.o ioremap.o extable.o pageattr.o
+obj-y	:= init.o pgtable.o fault.o ioremap.o extable.o pageattr.o mmap.o
 obj-$(CONFIG_DISCONTIGMEM)	+= discontig.o
 obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
/*
 * linux/arch/i386/mm/mmap.c
 *
 * flexible mmap layout support
 *
 * Copyright 2003-2004 Red Hat Inc., Durham, North Carolina.
 * All Rights Reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 *
 *
 * Started by Ingo Molnar <mingo@elte.hu>
 */

#include <linux/personality.h>
#include <linux/mm.h>

/*
 * Top of mmap area (just below the process stack).
 *
 * Leave an at least ~128 MB hole.
 */
#define MIN_GAP (128*1024*1024)
#define MAX_GAP (TASK_SIZE/6*5)

static inline unsigned long mmap_base(struct mm_struct *mm)
{
	unsigned long gap = current->rlim[RLIMIT_STACK].rlim_cur;

	if (gap < MIN_GAP)
		gap = MIN_GAP;
	else if (gap > MAX_GAP)
		gap = MAX_GAP;

	return TASK_SIZE - (gap & PAGE_MASK);
}

/*
 * This function, called very early during the creation of a new
 * process VM image, sets up which VM layout function to use:
 */
void arch_pick_mmap_layout(struct mm_struct *mm)
{
	/*
	 * Fall back to the standard layout if the personality
	 * bit is set, or if the expected stack growth is unlimited:
	 */
	if ((current->personality & ADDR_COMPAT_LAYOUT) ||
			current->rlim[RLIMIT_STACK].rlim_cur == RLIM_INFINITY) {
		mm->mmap_base = TASK_UNMAPPED_BASE;
		mm->get_unmapped_area = arch_get_unmapped_area;
		mm->unmap_area = arch_unmap_area;
	} else {
		mm->mmap_base = mmap_base(mm);
		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
		mm->unmap_area = arch_unmap_area_topdown;
	}
}
@@ -307,7 +307,7 @@ static int load_aout_binary(struct linux_binprm * bprm, struct pt_regs * regs)
 				(current->mm->start_data = N_DATADDR(ex));
 	current->mm->brk = ex.a_bss +
 		(current->mm->start_brk = N_BSSADDR(ex));
-	current->mm->free_area_cache = TASK_UNMAPPED_BASE;
+	current->mm->free_area_cache = current->mm->mmap_base;
 	current->mm->rss = 0;
 	current->mm->mmap = NULL;
@@ -706,7 +706,7 @@ static int load_elf_binary(struct linux_binprm * bprm, struct pt_regs * regs)
 	/* Do this so that we can load the interpreter, if need be.  We will
 	   change some of these later */
 	current->mm->rss = 0;
-	current->mm->free_area_cache = TASK_UNMAPPED_BASE;
+	current->mm->free_area_cache = current->mm->mmap_base;
 	retval = setup_arg_pages(bprm, executable_stack);
 	if (retval < 0) {
 		send_sig(SIGKILL, current, 0);
@@ -546,6 +546,7 @@ static int exec_mmap(struct mm_struct *mm)
 	tsk->active_mm = mm;
 	activate_mm(active_mm, mm);
 	task_unlock(tsk);
+	arch_pick_mmap_layout(mm);
 	if (old_mm) {
 		if (active_mm != old_mm) BUG();
 		mmput(old_mm);
@@ -297,6 +297,8 @@ extern unsigned int mca_pentium_flag;
  */
 #define TASK_UNMAPPED_BASE	(PAGE_ALIGN(TASK_SIZE / 3))
 
+#define HAVE_ARCH_PICK_MMAP_LAYOUT
+
 /*
  * Size of io_bitmap.
  */
@@ -30,6 +30,7 @@ extern int abi_fake_utsname;
  */
 enum {
 	MMAP_PAGE_ZERO = 0x0100000,
+	ADDR_COMPAT_LAYOUT = 0x0200000,
 	READ_IMPLIES_EXEC = 0x0400000,
 	ADDR_LIMIT_32BIT = 0x0800000,
 	SHORT_INODE = 0x1000000,
@@ -189,10 +189,26 @@ extern int sysctl_max_map_count;
 
 #include <linux/aio.h>
 
+extern unsigned long
+arch_get_unmapped_area(struct file *, unsigned long, unsigned long,
+		       unsigned long, unsigned long);
+extern unsigned long
+arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
+			  unsigned long len, unsigned long pgoff,
+			  unsigned long flags);
+extern void arch_unmap_area(struct vm_area_struct *area);
+extern void arch_unmap_area_topdown(struct vm_area_struct *area);
+
 struct mm_struct {
 	struct vm_area_struct * mmap;		/* list of VMAs */
 	struct rb_root mm_rb;
 	struct vm_area_struct * mmap_cache;	/* last find_vma result */
+	unsigned long (*get_unmapped_area) (struct file *filp,
+				unsigned long addr, unsigned long len,
+				unsigned long pgoff, unsigned long flags);
+	void (*unmap_area) (struct vm_area_struct *area);
+	unsigned long mmap_base;		/* base of mmap area */
 	unsigned long free_area_cache;		/* first hole */
 	pgd_t * pgd;
 	atomic_t mm_users;			/* How many users with user space? */
@@ -999,6 +1015,17 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
 
 #endif /* CONFIG_SMP */
 
+#ifdef HAVE_ARCH_PICK_MMAP_LAYOUT
+extern void arch_pick_mmap_layout(struct mm_struct *mm);
+#else
+static inline void arch_pick_mmap_layout(struct mm_struct *mm)
+{
+	mm->mmap_base = TASK_UNMAPPED_BASE;
+	mm->get_unmapped_area = arch_get_unmapped_area;
+	mm->unmap_area = arch_unmap_area;
+}
+#endif
+
 #endif /* __KERNEL__ */
 
 #endif
@@ -280,7 +280,7 @@ static inline int dup_mmap(struct mm_struct * mm, struct mm_struct * oldmm)
 	mm->locked_vm = 0;
 	mm->mmap = NULL;
 	mm->mmap_cache = NULL;
-	mm->free_area_cache = TASK_UNMAPPED_BASE;
+	mm->free_area_cache = oldmm->mmap_base;
 	mm->map_count = 0;
 	mm->rss = 0;
 	cpus_clear(mm->cpu_vm_mask);
@@ -1018,7 +1018,7 @@ EXPORT_SYMBOL(do_mmap_pgoff);
  * This function "knows" that -ENOMEM has the bits set.
  */
 #ifndef HAVE_ARCH_UNMAPPED_AREA
-static inline unsigned long
+unsigned long
 arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags)
 {
@@ -1062,12 +1062,116 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 			addr = vma->vm_end;
 	}
 }
-#else
-extern unsigned long
-arch_get_unmapped_area(struct file *, unsigned long, unsigned long,
-			unsigned long, unsigned long);
 #endif
+
+void arch_unmap_area(struct vm_area_struct *area)
+{
+	/*
+	 * Is this a new hole at the lowest possible address?
+	 */
+	if (area->vm_start >= TASK_UNMAPPED_BASE &&
+			area->vm_start < area->vm_mm->free_area_cache)
+		area->vm_mm->free_area_cache = area->vm_start;
+}
+
+/*
+ * This mmap-allocator allocates new areas top-down from below the
+ * stack's low limit (the base):
+ */
+unsigned long
+arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
+			  const unsigned long len, const unsigned long pgoff,
+			  const unsigned long flags)
+{
+	struct vm_area_struct *vma, *prev_vma;
+	struct mm_struct *mm = current->mm;
+	unsigned long base = mm->mmap_base, addr = addr0;
+	int first_time = 1;
+
+	/* requested length too big for entire address space */
+	if (len > TASK_SIZE)
+		return -ENOMEM;
+
+	/* dont allow allocations above current base */
+	if (mm->free_area_cache > base)
+		mm->free_area_cache = base;
+
+	/* requesting a specific address */
+	if (addr) {
+		addr = PAGE_ALIGN(addr);
+		vma = find_vma(mm, addr);
+		if (TASK_SIZE - len >= addr &&
+				(!vma || addr + len <= vma->vm_start))
+			return addr;
+	}
+
+try_again:
+	/* make sure it can fit in the remaining address space */
+	if (mm->free_area_cache < len)
+		goto fail;
+
+	/* either no address requested or cant fit in requested address hole */
+	addr = (mm->free_area_cache - len) & PAGE_MASK;
+	do {
+		/*
+		 * Lookup failure means no vma is above this address,
+		 * i.e. return with success:
+		 */
+		if (!(vma = find_vma_prev(mm, addr, &prev_vma)))
+			return addr;
+
+		/*
+		 * new region fits between prev_vma->vm_end and
+		 * vma->vm_start, use it:
+		 */
+		if (addr+len <= vma->vm_start &&
+				(!prev_vma || (addr >= prev_vma->vm_end)))
+			/* remember the address as a hint for next time */
+			return (mm->free_area_cache = addr);
+		else
+			/* pull free_area_cache down to the first hole */
+			if (mm->free_area_cache == vma->vm_end)
+				mm->free_area_cache = vma->vm_start;
+
+		/* try just below the current vma->vm_start */
+		addr = vma->vm_start-len;
+	} while (len <= vma->vm_start);
+
+fail:
+	/*
+	 * if hint left us with no space for the requested
+	 * mapping then try again:
+	 */
+	if (first_time) {
+		mm->free_area_cache = base;
+		first_time = 0;
+		goto try_again;
+	}
+
+	/*
+	 * A failed mmap() very likely causes application failure,
+	 * so fall back to the bottom-up function here. This scenario
+	 * can happen with large stack limits and large mmap()
+	 * allocations.
+	 */
+	mm->free_area_cache = TASK_UNMAPPED_BASE;
+	addr = arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
+
+	/*
+	 * Restore the topdown base:
+	 */
+	mm->free_area_cache = base;
+
+	return addr;
+}
+
+void arch_unmap_area_topdown(struct vm_area_struct *area)
+{
+	/*
+	 * Is this a new hole at the highest possible address?
+	 */
+	if (area->vm_end > area->vm_mm->free_area_cache)
+		area->vm_mm->free_area_cache = area->vm_end;
+}
+
 unsigned long
 get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 		unsigned long pgoff, unsigned long flags)
@@ -1102,7 +1206,7 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 		return file->f_op->get_unmapped_area(file, addr, len,
 					pgoff, flags);
 
-	return arch_get_unmapped_area(file, addr, len, pgoff, flags);
+	return current->mm->get_unmapped_area(file, addr, len, pgoff, flags);
 }
 
 EXPORT_SYMBOL(get_unmapped_area);
@@ -1392,13 +1496,7 @@ static void unmap_vma(struct mm_struct *mm, struct vm_area_struct *area)
 	area->vm_mm->total_vm -= len >> PAGE_SHIFT;
 	if (area->vm_flags & VM_LOCKED)
 		area->vm_mm->locked_vm -= len >> PAGE_SHIFT;
-	/*
-	 * Is this a new hole at the lowest possible address?
-	 */
-	if (area->vm_start >= TASK_UNMAPPED_BASE &&
-	    area->vm_start < area->vm_mm->free_area_cache)
-		area->vm_mm->free_area_cache = area->vm_start;
+	area->vm_mm->unmap_area(area);
 	remove_vm_struct(area);
 }