30 Oct, 2005 (40 commits)
    • [PATCH] hugetlbfs: move free_inodes accounting · 96527980
      Christoph Hellwig authored
      Move hugetlbfs accounting into ->alloc_inode / ->destroy_inode.  This keeps
      the code simpler, fixes a leak where a failing inode allocation wouldn't
      decrement the counter and moves hugetlbfs_delete_inode and
      hugetlbfs_forget_inode closer to their generic counterparts.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      96527980
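
      A minimal userspace sketch of the point above: once the reservation and the
      allocation live in the same alloc/destroy pair, a failed allocation can never
      leave the counter off by one.  All names here are illustrative, not the
      hugetlbfs code itself.

      #include <stdio.h>
      #include <stdlib.h>

      static long free_inodes = 16;             /* model of the per-sb free_inodes count */

      struct inode { int dummy; };

      /* Reserve the count and allocate in one place, as ->alloc_inode now does. */
      static struct inode *hugetlb_alloc_inode_model(void)
      {
          if (free_inodes <= 0)
              return NULL;
          free_inodes--;
          struct inode *inode = malloc(sizeof(*inode));
          if (!inode) {
              free_inodes++;                    /* failure path returns the count */
              return NULL;
          }
          return inode;
      }

      /* ->destroy_inode is the one place the count comes back. */
      static void hugetlb_destroy_inode_model(struct inode *inode)
      {
          free(inode);
          free_inodes++;
      }

      int main(void)
      {
          struct inode *inode = hugetlb_alloc_inode_model();
          printf("allocated=%d free_inodes=%ld\n", inode != NULL, free_inodes);
          hugetlb_destroy_inode_model(inode);
          printf("after destroy free_inodes=%ld\n", free_inodes);
          return 0;
      }
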
    • [PATCH] mm: update comments to pte lock · b8072f09
      Hugh Dickins authored
      Updated several references to page_table_lock in common code comments.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b8072f09
    • [PATCH] mm: fix rss and mmlist locking · f412ac08
      Hugh Dickins authored
      A couple of oddities were guarded by page_table_lock, and are no longer
      properly guarded once that lock is split.
      
      The mm_counters of file_rss and anon_rss: make those an atomic_t, or an
      atomic64_t if the architecture supports it, in such a case.  Definitions by
      courtesy of Christoph Lameter: who spent considerable effort on more scalable
      ways of counting, but found insufficient benefit in practice.
      
      And adding an mm with swap to the mmlist for swapoff: the list is well-
      guarded by its own lock, but the list_empty check now has to be repeated
      inside it.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      f412ac08
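
      A userspace sketch of the mmlist point above: the unlocked emptiness test is
      only a hint, so it is repeated once the list's own lock is held.  The struct
      and names are stand-ins, not the kernel's list_head or mmlist_lock.

      #include <pthread.h>
      #include <stdio.h>

      static pthread_mutex_t mmlist_lock = PTHREAD_MUTEX_INITIALIZER;
      static int mmlist_len;

      struct mm_model { int on_mmlist; };       /* stand-in for list_empty(&mm->mmlist) */

      static void add_to_mmlist(struct mm_model *mm)
      {
          if (mm->on_mmlist)                    /* cheap unlocked check: just a hint */
              return;
          pthread_mutex_lock(&mmlist_lock);
          if (!mm->on_mmlist) {                 /* repeat the check under the list's lock */
              mm->on_mmlist = 1;
              mmlist_len++;
          }
          pthread_mutex_unlock(&mmlist_lock);
      }

      int main(void)
      {
          struct mm_model mm = { 0 };
          add_to_mmlist(&mm);
          add_to_mmlist(&mm);                   /* a duplicate add is a no-op */
          printf("mmlist_len=%d\n", mmlist_len);
          return 0;
      }
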
    • [PATCH] mm: split page table lock · 4c21e2f2
      Hugh Dickins authored
      Christoph Lameter demonstrated very poor scalability on the SGI 512-way, with
      a many-threaded application which concurrently initializes different parts of
      a large anonymous area.
      
      This patch corrects that, by using a separate spinlock per page table page, to
      guard the page table entries in that page, instead of using the mm's single
      page_table_lock.  (But even then, page_table_lock is still used to guard page
      table allocation, and anon_vma allocation.)
      
      In this implementation, the spinlock is tucked inside the struct page of the
      page table page: with a BUILD_BUG_ON in case it overflows - which it would in
      the case of 32-bit PA-RISC with spinlock debugging enabled.
      
      Splitting the lock is not quite for free: another cacheline access.  Ideally,
      I suppose we would use split ptlock only for multi-threaded processes on
      multi-cpu machines; but deciding that dynamically would have its own costs.
      So for now enable it by config, at some number of cpus - since the Kconfig
      language doesn't support inequalities, let preprocessor compare that with
      NR_CPUS.  But I don't think it's worth being user-configurable: for good
      testing of both split and unsplit configs, split now at 4 cpus, and perhaps
      change that to 8 later.
      
      There is a benefit even for singly threaded processes: kswapd can be attacking
      one part of the mm while another part is busy faulting.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      4c21e2f2
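
      A userspace model of the central idea in 4c21e2f2: one lock per page-table
      page instead of the single mm-wide lock, enabled only above a CPU threshold by
      a preprocessor comparison, since Kconfig cannot express inequalities.  The
      structures and constant names below are illustrative, not the kernel's.

      #include <pthread.h>
      #include <stdio.h>

      #define NR_CPUS_MODEL        8
      #define SPLIT_PTLOCK_CPUS    4            /* split at 4 cpus, as the message says */

      struct page_model {                       /* the ptl is tucked inside struct page */
          pthread_mutex_t ptl;
      };

      struct mm_model {
          pthread_mutex_t page_table_lock;      /* still guards table/anon_vma allocation */
          struct page_model pt_page[4];         /* pretend page-table pages */
      };

      static pthread_mutex_t *pte_lockptr_model(struct mm_model *mm, int pt)
      {
      #if NR_CPUS_MODEL >= SPLIT_PTLOCK_CPUS
          return &mm->pt_page[pt].ptl;          /* per page-table page lock */
      #else
          return &mm->page_table_lock;          /* unsplit: fall back to the mm-wide lock */
      #endif
      }

      int main(void)
      {
          struct mm_model mm;
          pthread_mutex_init(&mm.page_table_lock, NULL);
          for (int i = 0; i < 4; i++)
              pthread_mutex_init(&mm.pt_page[i].ptl, NULL);

          /* Faults in different page tables take different locks, so threads
           * initializing different parts of a large area no longer serialize. */
          pthread_mutex_lock(pte_lockptr_model(&mm, 0));
          pthread_mutex_lock(pte_lockptr_model(&mm, 3));
          pthread_mutex_unlock(pte_lockptr_model(&mm, 3));
          pthread_mutex_unlock(pte_lockptr_model(&mm, 0));
          printf("split=%d\n", NR_CPUS_MODEL >= SPLIT_PTLOCK_CPUS);
          return 0;
      }
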
    • [PATCH] mm: uml kill unused · b38c6845
      Hugh Dickins authored
      In worrying over the various pte operations in different architectures, I came
      across some unused functions in UML: remove mprotect_kernel_vm,
      protect_vm_page and addr_pte.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b38c6845
    • [PATCH] mm: uml pte atomicity · 8f5cd76c
      Hugh Dickins authored
      There's usually a good reason when a pte is examined without the lock; but it
      makes me nervous when the pointer is dereferenced more than once.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      8f5cd76c
    • [PATCH] mm: cris v32 mmu_context_lock · a7e4705b
      Hugh Dickins authored
      The cris v32 switch_mm guards get_mmu_context with next->page_table_lock:
      good that it's not really SMP yet, since get_mmu_context messes with global
      variables affecting other mms.  Replace it by a global mmu_context_lock.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a7e4705b
    • [PATCH] mm: parisc pte atomicity · 92dc6fcc
      Hugh Dickins authored
      There's a worrying function translation_exists in parisc cacheflush.h,
      unaffected by split ptlock since flush_dcache_page is using it on some other
      mm, without any relevant lock.  Oh well, make it slightly more robust by
      factoring the pfn check within it.  And it looked liable to confuse a
      camouflaged swap or file entry with a good pte: fix that too.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      92dc6fcc
    • [PATCH] mm: arm ready for split ptlock · 69b04754
      Hugh Dickins authored
      Prepare arm for the split page_table_lock: three issues.
      
      Signal handling's preserve and restore of iwmmxt context currently involves
      reading and writing that context to and from user space, while holding
      page_table_lock to secure the user page(s) against kswapd.  If we split the
      lock, the structure might span two pages, secured by different locks: so
      instead read into and write from a kernel stack buffer, copying that out and
      in without locking (the structure is 160 bytes in size, and here we're near
      the top of the kernel stack).  Or would the overhead be noticeable?
      
      arm_syscall's cmpxchg emulation uses pte_offset_map_lock, instead of
      pte_offset_map and the mm-wide page_table_lock; and strictly, it should now
      also take mmap_sem before descending to pmd, to guard against another thread
      munmapping and the page table being pulled out beneath this thread.
      
      Updated two comments in fault-armv.c.  adjust_pte is interesting, since its
      modification of a pte in one part of the mm depends on the lock held when
      calling update_mmu_cache for a pte in some other part of that mm.  This can't
      be done with a split page_table_lock (and we've already taken the lowest lock
      in the hierarchy here): so we'll have to disable split on arm, unless
      CONFIG_CPU_CACHE_VIPT ensures that adjust_pte is never used.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      69b04754
    • [PATCH] mm: i386 sh sh64 ready for split ptlock · 60ec5585
      Hugh Dickins authored
      Use pte_offset_map_lock, instead of pte_offset_map (or inappropriate
      pte_offset_kernel) and mm-wide page_table_lock, in sundry arch places.
      
      The i386 vm86 mark_screen_rdonly: yes, there was and is an assumption that the
      screen fits inside the one page table, as indeed it does.
      
      The sh __do_page_fault: which handles both kernel faults (without lock) and
      user mm faults (locked - though it set_pte without locking before).
      
      The sh64 flush_cache_range and helpers: which wrongly thought callers held
      page_table_lock before (only its tlb_start_vma did, and no longer does so);
      moved the flush loop down, and adjusted the large versus small range decision
      to consider a range which spans page tables as large.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      60ec5585
    • [PATCH] mm: follow_page with inner ptlock · deceb6cd
      Hugh Dickins authored
      Final step in pushing down common core's page_table_lock.  follow_page no
      longer wants caller to hold page_table_lock, uses pte_offset_map_lock itself;
      and so no page_table_lock is taken in get_user_pages itself.
      
      But get_user_pages (and get_futex_key) do then need follow_page to pin the
      page for them: take Daniel's suggestion of bitflags to follow_page.
      
      Need one for WRITE, another for TOUCH (it was the accessed flag before:
      vanished along with check_user_page_readable, but surely get_numa_maps is
      wrong to mark every page it finds as accessed), another for GET.
      
      And another, ANON to dispose of untouched_anonymous_page: it seems silly for
      that to descend a second time, let follow_page observe if there was no page
      table and return ZERO_PAGE if so.  Fix minor bug in that: check VM_LOCKED -
      make_pages_present ought to make readonly anonymous present.
      
      Give get_numa_maps a cond_resched while we're there.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      deceb6cd
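
      A sketch of the bitflag interface described above: one flag each for write,
      touch and get, combined by the caller.  The toy pte/page types and
      follow_page_model() are purely illustrative; only the flag idea comes from the
      patch.

      #include <stdbool.h>
      #include <stdio.h>

      #define FOLL_WRITE_MODEL  0x01            /* caller needs a writable mapping */
      #define FOLL_TOUCH_MODEL  0x02            /* mark the page accessed */
      #define FOLL_GET_MODEL    0x04            /* take a reference for the caller */

      struct toy_pte  { bool present, writable, accessed; };
      struct toy_page { int refcount; };

      static struct toy_page *follow_page_model(struct toy_pte *pte,
                                                struct toy_page *page,
                                                unsigned int flags)
      {
          if (!pte->present)
              return NULL;
          if ((flags & FOLL_WRITE_MODEL) && !pte->writable)
              return NULL;                      /* let the caller fault it in instead */
          if (flags & FOLL_TOUCH_MODEL)
              pte->accessed = true;
          if (flags & FOLL_GET_MODEL)
              page->refcount++;                 /* pinned, as get_user_pages needs */
          return page;
      }

      int main(void)
      {
          struct toy_pte pte = { .present = true, .writable = false };
          struct toy_page page = { .refcount = 1 };
          unsigned int gup = FOLL_GET_MODEL | FOLL_TOUCH_MODEL;
          printf("read ok=%d\n",  follow_page_model(&pte, &page, gup) != NULL);
          printf("write ok=%d\n", follow_page_model(&pte, &page, gup | FOLL_WRITE_MODEL) != NULL);
          printf("refcount=%d accessed=%d\n", page.refcount, (int)pte.accessed);
          return 0;
      }
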
    • [PATCH] mm: kill check_user_page_readable · c34d1b4d
      Hugh Dickins authored
      check_user_page_readable is a problematic variant of follow_page.  It's used
      only by oprofile's i386 and arm backtrace code, at interrupt time, to
      establish whether a userspace stackframe is currently readable.
      
      This is problematic, because we want to push the page_table_lock down inside
      follow_page, and later split it; whereas oprofile is doing a spin_trylock on
      it (in the i386 case, forgotten in the arm case), and needs that to pin
      perhaps two pages spanned by the stackframe (which might be covered by
      different locks when we split).
      
      I think oprofile is going about this in the wrong way: it doesn't need to know
      the area is readable (neither i386 nor arm uses read protection of user
      pages), it doesn't need to pin the memory, it should simply
      __copy_from_user_inatomic, and see if that succeeds or not.  Sorry, but I've
      not got around to devising the sparse __user annotations for this.
      
      Then we can eliminate check_user_page_readable, and return to a single
      follow_page without the __follow_page variants.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      c34d1b4d
    • [PATCH] mm: rmap with inner ptlock · c0718806
      Hugh Dickins authored
      rmap's page_check_address descends without page_table_lock.  First just
      pte_offset_map in case there's no pte present worth locking for, then take
      page_table_lock for the full check, and pass ptl back to caller in the same
      style as pte_offset_map_lock.  __xip_unmap, page_referenced_one and
      try_to_unmap_one use pte_unmap_unlock.  try_to_unmap_cluster also.
      
      page_check_address reformatted to avoid progressive indentation.  No use is
      made of its one error code, return NULL when it fails.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      c0718806
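
      A userspace sketch of the calling convention above: peek cheaply without the
      lock, then take the per-table lock for the real check and hand both the entry
      and the still-held lock back to the caller, who finishes with the
      pte_unmap_unlock-style pair.  The toy types and lookup_locked() are a model,
      not kernel code.

      #include <pthread.h>
      #include <stdio.h>

      struct toy_pte { int present; long pfn; };

      struct toy_ptable {
          pthread_mutex_t ptl;                  /* per-table lock, as after the split */
          struct toy_pte  pte[4];
      };

      /* Model of page_check_address: NULL on failure, otherwise return the entry
       * with *ptlp pointing at the lock we are still holding. */
      static struct toy_pte *lookup_locked(struct toy_ptable *pt, int idx, long pfn,
                                           pthread_mutex_t **ptlp)
      {
          if (!pt->pte[idx].present)            /* unlocked peek: nothing worth locking for */
              return NULL;
          pthread_mutex_lock(&pt->ptl);
          if (pt->pte[idx].present && pt->pte[idx].pfn == pfn) {
              *ptlp = &pt->ptl;                 /* the caller now owns the lock */
              return &pt->pte[idx];
          }
          pthread_mutex_unlock(&pt->ptl);
          return NULL;
      }

      int main(void)
      {
          struct toy_ptable pt = { .pte = { [1] = { 1, 42 } } };
          pthread_mutex_init(&pt.ptl, NULL);

          pthread_mutex_t *ptl;
          struct toy_pte *pte = lookup_locked(&pt, 1, 42, &ptl);
          if (pte) {
              pte->present = 0;                 /* e.g. try_to_unmap_one clears the entry */
              pthread_mutex_unlock(ptl);        /* the pte_unmap_unlock() step */
          }
          printf("found=%d\n", pte != NULL);
          return 0;
      }
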
    • [PATCH] mm: xip_unmap ZERO_PAGE fix · 67b02f11
      Hugh Dickins authored
      Small fix to the PageReserved patch: the mips ZERO_PAGE(address) depends on
      address, so __xip_unmap is wrong to initialize page with that before address
      is initialized; and in fact must re-evaluate it each iteration.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      67b02f11
    • [PATCH] mm: unmap_vmas with inner ptlock · 508034a3
      Hugh Dickins authored
      Remove the page_table_lock from around the calls to unmap_vmas, and replace
      the pte_offset_map in zap_pte_range by pte_offset_map_lock: all callers are
      now safe to descend without page_table_lock.
      
      Don't attempt fancy locking for hugepages, just take page_table_lock in
      unmap_hugepage_range.  Which makes zap_hugepage_range, and the hugetlb test in
      zap_page_range, redundant: unmap_vmas calls unmap_hugepage_range anyway.  Nor
      does unmap_vmas have much use for its mm arg now.
      
      The tlb_start_vma and tlb_end_vma in unmap_page_range are now called without
      page_table_lock: if they're implemented at all, they typically come down to
      flush_cache_range (usually done outside page_table_lock) and flush_tlb_range
      (which we already audited for the mprotect case).
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      508034a3
    • [PATCH] mm: unlink vma before pagetables · 8f4f8c16
      Hugh Dickins authored
      In most places the descent from pgd to pud to pmd to pte holds mmap_sem
      (exclusively or not), which ensures that free_pgtables cannot be freeing page
      tables from any level at the same time.  But truncation and reverse mapping
      descend without mmap_sem.
      
      No problem: just make sure that a vma is unlinked from its prio_tree (or
      nonlinear list) and from its anon_vma list, after zapping the vma, but before
      freeing its page tables.  Then neither vmtruncate nor rmap can reach that vma
      whose page tables are now volatile (nor do they need to reach it, since all
      its page entries have been zapped by this stage).
      
      The i_mmap_lock and anon_vma->lock already serialize this correctly; but the
      locking hierarchy is such that we cannot take them while holding
      page_table_lock.  Well, we're trying to push that down anyway.  So in this
      patch, move anon_vma_unlink and unlink_file_vma into free_pgtables, at the
      same time as moving page_table_lock around calls to unmap_vmas.
      
      tlb_gather_mmu and tlb_finish_mmu then fall outside the page_table_lock, but
      we made them preempt_disable and preempt_enable earlier; and a long source
      audit of all the architectures has shown no problem with removing
      page_table_lock from them.  free_pgtables doesn't need page_table_lock for
      itself, nor for what it calls; tlb->mm->nr_ptes is usually protected by
      page_table_lock, but partly by non-exclusive mmap_sem - here it's decremented
      with exclusive mmap_sem, or mm_users 0.  update_hiwater_rss and
      vm_unacct_memory don't need page_table_lock either.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      8f4f8c16
    • [PATCH] mm: flush_tlb_range outside ptlock · 663b97f7
      Hugh Dickins authored
      There was one small but very significant change in the previous patch:
      mprotect's flush_tlb_range fell outside the page_table_lock: as it is in 2.4,
      but that doesn't prove it safe in 2.6.
      
      On some architectures flush_tlb_range comes to the same as flush_tlb_mm, which
      has always been called from outside page_table_lock in dup_mmap, and is so
      proved safe.  Others required a deeper audit: I could find no reliance on
      page_table_lock in any; but in ia64 and parisc found some code which looks a
      bit as if it might want preemption disabled.  That won't do any actual harm,
      so pending a decision from the maintainers, disable preemption there.
      
      Remove comments on page_table_lock from flush_tlb_mm, flush_tlb_range and
      flush_tlb_page entries in cachetlb.txt: they were rather misleading (what
      generic code does is different from what usually happens), the rules are now
      changing, and it's not yet clear where we'll end up (will the generic
      tlb_flush_mmu happen always under lock?  never under lock?  or sometimes under
      and sometimes not?).
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      663b97f7
    • [PATCH] mm: pte_offset_map_lock loops · 705e87c0
      Hugh Dickins authored
      Convert those common loops using page_table_lock on the outside and
      pte_offset_map within to use just pte_offset_map_lock within instead.
      
      These all hold mmap_sem (some exclusively, some not), so at no level can a
      page table be whipped away from beneath them.  But whereas pte_alloc loops
      tested with the "atomic" pmd_present, these loops are testing with pmd_none,
      which on i386 PAE tests both lower and upper halves.
      
      That's now unsafe, so add a cast into pmd_none to test only the vital lower
      half: we lose a little sensitivity to a corrupt middle directory, but not
      enough to worry about.  It appears that i386 and UML were the only
      architectures vulnerable in this way, and pgd and pud no problem.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      705e87c0
    • [PATCH] mm: page fault handler locking · 8f4e2101
      Hugh Dickins authored
      On the page fault path, the patch before last pushed acquiring the
      page_table_lock down to the head of handle_pte_fault (though it's also taken
      and dropped earlier when a new page table has to be allocated).
      
      Now delete that line, read "entry = *pte" without it, and go off to this or
      that page fault handler on the basis of this unlocked peek.  Usually the
      handler can proceed without the lock, relying on the subsequent locked
      pte_same or pte_none test to back out when necessary; though do_wp_page needs
      the lock immediately, and do_file_page doesn't check (if there's a race,
      install_page just zaps the entry and reinstalls it).
      
      But on those architectures (notably i386 with PAE) whose pte is too big to be
      read atomically, if SMP or preemption is enabled, do_swap_page and
      do_file_page might cause irretrievable damage if passed a Frankenstein entry
      stitched together from unrelated parts.  In those configs, "pte_unmap_same"
      has to take page_table_lock, validate orig_pte still the same, and drop
      page_table_lock before unmapping, before proceeding.
      
      Use pte_offset_map_lock and pte_unmap_unlock throughout the handlers; but lock
      avoidance leaves more lone maps and unmaps than elsewhere.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      8f4e2101
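
      A userspace sketch of the pattern above: read a snapshot of the entry without
      the lock, do the slow work, then retake the lock and back out unless the entry
      still matches the snapshot (the pte_same test).  The toy types are
      illustrative only.

      #include <pthread.h>
      #include <stdio.h>

      struct toy_pte { long val; };

      static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER;
      static struct toy_pte pte;                /* the entry guarded by ptl */

      static int pte_same_model(struct toy_pte a, struct toy_pte b)
      {
          return a.val == b.val;
      }

      /* Model of a fault handler: decide what to do from an unlocked peek, then
       * recheck under the lock and back out if another thread raced with us. */
      static int handle_fault_model(long new_val)
      {
          struct toy_pte entry = pte;           /* the unlocked "entry = *pte" peek */

          /* ...slow work here: read from swap, allocate and fill a page... */

          pthread_mutex_lock(&ptl);
          if (!pte_same_model(pte, entry)) {    /* raced: someone else serviced it */
              pthread_mutex_unlock(&ptl);
              return 0;
          }
          pte.val = new_val;                    /* commit while the entry is stable */
          pthread_mutex_unlock(&ptl);
          return 1;
      }

      int main(void)
      {
          printf("installed=%d pte=%ld\n", handle_fault_model(99), pte.val);
          return 0;
      }
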
    • [PATCH] mm: arches skip ptlock · b462705a
      Hugh Dickins authored
      Convert those few architectures which are calling pud_alloc, pmd_alloc,
      pte_alloc_map on a user mm, not to take the page_table_lock first, nor drop it
      after.  Each of these can continue to use pte_alloc_map, no need to change
      over to pte_alloc_map_lock, they're neither racy nor swappable.
      
      In the sparc64 io_remap_pfn_range, flush_tlb_range then falls outside of the
      page_table_lock: that's okay, on sparc64 it's like flush_tlb_mm, and that has
      always been called from outside of page_table_lock in dup_mmap.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b462705a
    • [PATCH] mm: ptd_alloc take ptlock · c74df32c
      Hugh Dickins authored
      Second step in pushing down the page_table_lock.  Remove the temporary
      bridging hack from __pud_alloc, __pmd_alloc, __pte_alloc: expect callers not
      to hold page_table_lock, whether it's on init_mm or a user mm; take
      page_table_lock internally to check if a racing task already allocated.
      
      Convert their callers from common code.  But avoid coming back to change them
      again later: instead of moving the spin_lock(&mm->page_table_lock) down,
      switch over to new macros pte_alloc_map_lock and pte_unmap_unlock, which
      encapsulate the mapping+locking and unlocking+unmapping together, and in the
      end may use alternatives to the mm page_table_lock itself.
      
      These callers all hold mmap_sem (some exclusively, some not), so at no level
      can a page table be whipped away from beneath them; and pte_alloc uses the
      "atomic" pmd_present to test whether it needs to allocate.  It appears that on
      all arches we can safely descend without page_table_lock.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      c74df32c
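
      A userspace model of the rule these allocators now follow: the caller holds no
      page_table_lock; the allocator takes it internally only to publish the new
      table, and tolerates losing the race to another task.  Names here are
      illustrative.

      #include <pthread.h>
      #include <stdio.h>
      #include <stdlib.h>

      static pthread_mutex_t page_table_lock = PTHREAD_MUTEX_INITIALIZER;
      static void *pmd_slot;                    /* stand-in for the entry being filled */

      static void *pmd_alloc_model(void)
      {
          if (pmd_slot)                         /* fast path: already populated */
              return pmd_slot;

          void *fresh = calloc(1, 4096);        /* allocate outside the lock */
          if (!fresh)
              return NULL;

          pthread_mutex_lock(&page_table_lock);
          if (pmd_slot)                         /* a racing task already allocated */
              free(fresh);
          else
              pmd_slot = fresh;
          pthread_mutex_unlock(&page_table_lock);
          return pmd_slot;
      }

      int main(void)
      {
          void *a = pmd_alloc_model();
          void *b = pmd_alloc_model();          /* second call reuses the first table */
          printf("allocated=%d same=%d\n", a != NULL, a == b);
          return 0;
      }
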
    • [PATCH] mm: ptd_alloc inline and out · 1bb3630e
      Hugh Dickins authored
      It seems odd to me that, whereas pud_alloc and pmd_alloc test inline, only
      calling out-of-line __pud_alloc __pmd_alloc if allocation needed,
      pte_alloc_map and pte_alloc_kernel are entirely out-of-line.  Though it does
      add a little to kernel size, change them to macros testing inline, calling
      __pte_alloc or __pte_alloc_kernel to allocate out-of-line.  Mark none of them
      as fastcalls, leave that to CONFIG_REGPARM or not.
      
      It also seems more natural for the out-of-line functions to leave the offset
      calculation and map to the inline, which has to do it anyway for the common
      case.  At least mremap move wants __pte_alloc without _map.
      
      Macros rather than inline functions, certainly to avoid the header file issues
      which arise from CONFIG_HIGHPTE needing kmap_types.h, but also in case any
      architectures I haven't built would have other such problems.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      1bb3630e
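
      A small model of the arrangement above: a macro tests inline, only calling the
      out-of-line allocator when the table is missing, and the offset calculation
      stays on the inline side.  All names are illustrative.

      #include <stdio.h>
      #include <stdlib.h>

      struct table_model { long entry[512]; };

      static struct table_model *slot;          /* stand-in for the pmd entry */

      /* Out of line: only reached when allocation is actually needed. */
      static int __pte_alloc_model(void)
      {
          slot = calloc(1, sizeof(*slot));
          return slot ? 0 : -1;
      }

      /* Inline test first; the offset calculation remains with the macro. */
      #define pte_alloc_model(idx) \
          ((!slot && __pte_alloc_model()) ? NULL : &slot->entry[(idx)])

      int main(void)
      {
          long *p = pte_alloc_model(7);         /* allocates on first use */
          long *q = pte_alloc_model(9);         /* common case: inline test only */
          printf("allocated=%d distinct=%d\n", p != NULL, p != q);
          return 0;
      }
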
    • [PATCH] mm: init_mm without ptlock · 872fec16
      Hugh Dickins authored
      First step in pushing down the page_table_lock.  init_mm.page_table_lock has
      been used throughout the architectures (usually for ioremap): not to serialize
      kernel address space allocation (that's usually vmlist_lock), but because
      pud_alloc,pmd_alloc,pte_alloc_kernel expect caller holds it.
      
      Reverse that: don't lock or unlock init_mm.page_table_lock in any of the
      architectures; instead rely on pud_alloc,pmd_alloc,pte_alloc_kernel to take
      and drop it when allocating a new one, to check lest a racing task already
      did.  Similarly no page_table_lock in vmalloc's map_vm_area.
      
      Some temporary ugliness in __pud_alloc and __pmd_alloc: since they also handle
      user mms, which are converted only by a later patch, for now they have to lock
      differently according to whether or not it's init_mm.
      
      If sources get muddled, there's a danger that an arch source taking
      init_mm.page_table_lock will be mixed with common source also taking it (or
      neither take it).  So break the rules and make another change, which should
      break the build for such a mismatch: remove the redundant mm arg from
      pte_alloc_kernel (ppc64 scrapped its distinct ioremap_mm in 2.6.13).
      
      Exceptions: arm26 used pte_alloc_kernel on user mm, now pte_alloc_map; ia64
      used pte_alloc_map on init_mm, now pte_alloc_kernel; parisc had bad args to
      pmd_alloc and pte_alloc_kernel in unused USE_HPPA_IOREMAP code; ppc64
      map_io_page forgot to unlock on failure; ppc mmu_mapin_ram and ppc64 im_free
      took page_table_lock for no good reason.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      872fec16
    • [PATCH] mm: ia64 use expand_upwards · 46dea3d0
      Hugh Dickins authored
      ia64 has expand_backing_store function for growing its Register Backing Store
      vma upwards.  But more complete code for this purpose is found in the
      CONFIG_STACK_GROWSUP part of mm/mmap.c.  Uglify its #ifdefs further to provide
      expand_upwards for ia64 as well as expand_stack for parisc.
      
      The Register Backing Store vma should be marked VM_ACCOUNT.  Implement the
      intention of growing it only a page at a time, instead of passing an address
      outside of the vma to handle_mm_fault, with unknown consequences.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      46dea3d0
    • [PATCH] mm: mm_struct hiwaters moved · f449952b
      Hugh Dickins authored
      Slight and timid rearrangement of mm_struct: hiwater_rss and hiwater_vm were
      tacked on the end, but it seems better to keep them near _file_rss, _anon_rss
      and total_vm, in the same cacheline on those arches verified.
      
      There are likely to be more profitable rearrangements, but less obvious (is it
      good or bad that saved_auxv[AT_VECTOR_SIZE] isolates cpu_vm_mask and context
      from many others?), needing serious instrumentation.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      f449952b
    • [PATCH] mm: update_hiwaters just in time · 365e9c87
      Hugh Dickins authored
      update_mem_hiwater has attracted various criticisms, in particular from those
      concerned with mm scalability.  Originally it was called whenever rss or
      total_vm got raised.  Then many of those callsites were replaced by a timer
      tick call from account_system_time.  Now Frank van Maarseveen reports that to
      be found inadequate.  How about this?  Works for Frank.
      
      Replace update_mem_hiwater, a poor combination of two unrelated ops, by macros
      update_hiwater_rss and update_hiwater_vm.  Don't attempt to keep
      mm->hiwater_rss up to date at timer tick, nor every time we raise rss (usually
      by 1): those are hot paths.  Do the opposite, update only when about to lower
      rss (usually by many), or just before final accounting in do_exit.  Handle
      mm->hiwater_vm in the same way, though it's much less of an issue.  Demand
      that whoever collects these hiwater statistics do the work of taking the
      maximum with rss or total_vm.
      
      And there has been no collector of these hiwater statistics in the tree.  The
      new convention needs an example, so match Frank's usage by adding a VmPeak
      line above VmSize to /proc/<pid>/status, and also a VmHWM line above VmRSS
      (High-Water-Mark or High-Water-Memory).
      
      There was a particular anomaly during mremap move, that hiwater_vm might be
      captured too high.  A fleeting such anomaly remains, but it's quickly
      corrected now, whereas before it would stick.
      
      What locking?  None: if the app is racy then these statistics will be racy,
      it's not worth any overhead to make them exact.  But whenever it suits,
      hiwater_vm is updated under exclusive mmap_sem, and hiwater_rss under
      page_table_lock (for now) or with preemption disabled (later on): without
      going to any trouble, minimize the time between reading current values and
      updating, to minimize those occasions when a racing thread bumps a count up
      and back down in between.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      365e9c87
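
      A sketch of the convention above: the high-water mark is refreshed only when
      rss is about to be lowered, and whoever reads it takes the max with the
      current value.  update_hiwater_rss and get_mm_rss are the names the patches
      use; the plain longs below are a userspace stand-in.

      #include <stdio.h>

      static long rss, hiwater_rss;

      static long get_mm_rss_model(void) { return rss; }

      /* Refresh the mark only on the cold path, just before rss drops by many. */
      static void update_hiwater_rss_model(void)
      {
          if (hiwater_rss < rss)
              hiwater_rss = rss;
      }

      static void zap_pages_model(long n)
      {
          update_hiwater_rss_model();
          rss -= n;
      }

      int main(void)
      {
          rss += 1000;                          /* hot path: no hiwater work at all */
          zap_pages_model(600);
          rss += 50;
          /* The collector (e.g. the VmHWM line) takes the maximum itself. */
          long rss_now = get_mm_rss_model();
          long peak = hiwater_rss > rss_now ? hiwater_rss : rss_now;
          printf("VmRSS-model=%ld VmHWM-model=%ld\n", rss_now, peak);
          return 0;
      }
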
    • [PATCH] mm: zap_pte out of line · 861f2fb8
      Hugh Dickins authored
      There used to be just one call to zap_pte, but it shouldn't be inline now
      there are two.  Check for the common case pte_none before calling, and move
      its rss accounting up into install_page or install_file_pte - which helps the
      next patch.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      861f2fb8
    • [PATCH] mm: do_mremap current mm · d0de32d9
      Hugh Dickins authored
      Cleanup: relieve do_mremap from its surfeit of current->mms.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      d0de32d9
    • [PATCH] mm: do_swap_page race major · 9e9bef07
      Hugh Dickins authored
      Small adjustment: do_swap_page should report its !pte_same race as a major
      fault if it had to read into swap cache, because whatever raced with it will
      have found page already in cache and reported minor fault.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      9e9bef07
    • [PATCH] mm: zap_pte_range dec rss · 86d912f4
      Hugh Dickins authored
      Small adjustment: zap_pte_range now decrements its rss counts down from 0
      and finally adds them, avoiding negations - we don't have or need a
      sub_mm_rss.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      86d912f4
    • [PATCH] mm: copy_one_pte inc rss · 8c103762
      Hugh Dickins authored
      Small adjustment, following Nick's suggestion: it's more straightforward for
      copy_pte_range to let copy_one_pte do the rss incrementation, than use an
      index it passed back.  Saves a #define, and 16 bytes of .text.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      8c103762
    • [PATCH] core remove PageReserved · b5810039
      Nick Piggin authored
      Remove PageReserved() calls from core code by tightening VM_RESERVED
      handling in mm/ to cover PageReserved functionality.
      
      PageReserved special casing is removed from get_page and put_page.
      
      All setting and clearing of PageReserved is retained, and it is now flagged
      in the page_alloc checks to help ensure we don't introduce any refcount
      based freeing of Reserved pages.
      
      MAP_PRIVATE, PROT_WRITE of VM_RESERVED regions is tentatively being
      deprecated.  We never completely handled it correctly anyway, and it can be
      reintroduced in future if required (Hugh has a proof of concept).
      
      Once PageReserved() calls are removed from kernel/power/swsusp.c, and all
      arch/ and driver code, the Set and Clear calls, and the PG_reserved bit can
      be trivially removed.
      
      Last real user of PageReserved is swsusp, which uses PageReserved to
      determine whether a struct page points to valid memory or not.  This still
      needs to be addressed (a generic page_is_ram() should work).
      
      A last caveat: the ZERO_PAGE is now refcounted and managed with rmap (and
      thus mapcounted and counted towards shared rss).  These writes to the struct
      page could cause excessive cacheline bouncing on big systems.  There are a
      number of ways this could be addressed if it is an issue.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      
      Refcount bug fix for filemap_xip.c
      Signed-off-by: Carsten Otte <cotte@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b5810039
    • [PATCH] mm: m68k kill stram swap · f9c98d02
      Hugh Dickins authored
      Please, please now delete the Atari CONFIG_STRAM_SWAP code.  It may be
      excellent and ingenious code, but its reference to swap_vfsmnt betrays that it
      hasn't been built since 2.5.1 (four years old come December), it's delving
      deep into matters which are the preserve of core mm code, its only purpose is
      to give the more conscientious mm guys an anxiety attack from time to time;
      yet we keep on breaking it more and more.
      
      If you want to use RAM for swap, then if the MTD driver does not already
      provide just what you need, I'm sure David could be persuaded to add the
      extra.  But you'd also like to be able to allocate extents of that swap for
      other use: we can give you a core interface for that if you need.  But unbuilt
      for four years suggests to me that there's no need at all.
      
      I cannot swear the patch below won't break your build, but believe so.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      f9c98d02
    • [PATCH] mm: sh64 hugetlbpage.c · 147efea8
      Hugh Dickins authored
      The sh64 hugetlbpage.c seems to be erroneous, left over from a bygone age,
      clashing with the common hugetlb.c.  Replace it by a copy of the sh
      hugetlbpage.c.  Except, delete that mk_pte_huge macro neither uses.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      147efea8
    • [PATCH] mm: dup_mmap down new mmap_sem · 7ee78232
      Hugh Dickins authored
      One anomaly remains from when Andrea rationalized the responsibilities of
      mmap_sem and page_table_lock: in dup_mmap we add vmas to the child holding its
      page_table_lock, but not the mmap_sem which normally guards the vma list and
      rbtree.  Which could be an issue for unuse_mm: though since it just walks down
      the list (today with page_table_lock, tomorrow not), it's probably okay.  Will
      need a memory barrier?  Oh, keep it simple, Nick and I agreed, no harm in
      taking child's mmap_sem here.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      7ee78232
    • [PATCH] mm: dup_mmap use oldmm more · fd3e42fc
      Hugh Dickins authored
      Use the parent's oldmm throughout dup_mmap, instead of perversely going back
      to current->mm.  (Can you hear the sigh of relief from those mpnts?  Usually I
      squash them, but not today.)
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fd3e42fc
    • [PATCH] mm: batch updating mm_counters · ae859762
      Hugh Dickins authored
      tlb_finish_mmu used to batch zap_pte_range's update of mm rss, which may be
      worthwhile if the mm is contended, and would reduce atomic operations if the
      counts were atomic.  Let zap_pte_range now batch its updates to file_rss and
      anon_rss, per page-table in case we drop the lock outside; and copy_pte_range
      batch them too.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      ae859762
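
      A sketch of the batching described above: accumulate per-pte counts in plain
      locals while walking one page table, counting down from 0, then fold them into
      the shared counters once - which is what matters if those counters become
      atomic.  The C11 atomics and names below are stand-ins for the mm_counters.

      #include <stdatomic.h>
      #include <stdio.h>

      static atomic_long file_rss, anon_rss;    /* shared mm counters */

      static void add_mm_rss_model(long file, long anon)
      {
          if (file)
              atomic_fetch_add(&file_rss, file);
          if (anon)
              atomic_fetch_add(&anon_rss, anon);
      }

      /* Model of zap_pte_range: local counts per page table, one update at the end. */
      static void zap_table_model(const int *is_anon, int nptes)
      {
          long file = 0, anon = 0;
          for (int i = 0; i < nptes; i++) {
              if (is_anon[i])
                  anon--;                       /* counting down from 0, no negations */
              else
                  file--;
          }
          add_mm_rss_model(file, anon);         /* one update, not one per pte */
      }

      int main(void)
      {
          atomic_store(&file_rss, 5);
          atomic_store(&anon_rss, 5);
          int ptes[6] = { 1, 1, 0, 1, 0, 0 };
          zap_table_model(ptes, 6);
          printf("file_rss=%ld anon_rss=%ld\n",
                 atomic_load(&file_rss), atomic_load(&anon_rss));
          return 0;
      }
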
    • [PATCH] mm: rss = file_rss + anon_rss · 4294621f
      Hugh Dickins authored
      I was lazy when we added anon_rss, and chose to change as few places as
      possible.  So currently each anonymous page has to be counted twice, in rss
      and in anon_rss.  Which won't be so good if those are atomic counts in some
      configurations.
      
      Change that around: keep file_rss and anon_rss separately, and add them
      together (with get_mm_rss macro) when the total is needed - reading two
      atomics is much cheaper than updating two atomics.  And update anon_rss
      upfront, typically in memory.c, not tucked away in page_add_anon_rmap.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      4294621f
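
      A sketch of the split above: keep file_rss and anon_rss as the only two
      counters and sum them on read, since reading two atomics is cheaper than
      updating two on every fault.  get_mm_rss is the macro name from the patch; the
      C11 atomics are stand-ins for mm_counter_t.

      #include <stdatomic.h>
      #include <stdio.h>

      static atomic_long file_rss, anon_rss;

      #define get_mm_rss_model() \
          (atomic_load(&file_rss) + atomic_load(&anon_rss))

      int main(void)
      {
          atomic_fetch_add(&anon_rss, 1);       /* anonymous fault: one update only */
          atomic_fetch_add(&file_rss, 1);       /* file-backed fault likewise */
          atomic_fetch_add(&anon_rss, 1);
          printf("rss=%ld (file=%ld anon=%ld)\n", (long)get_mm_rss_model(),
                 atomic_load(&file_rss), atomic_load(&anon_rss));
          return 0;
      }
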
    • [PATCH] mm: mm_init set_mm_counters · 404351e6
      Hugh Dickins authored
      How is anon_rss initialized?  In dup_mmap, and by mm_alloc's memset; but
      that's not so good if an mm_counter_t is a special type.  And how is rss
      initialized?  By set_mm_counter, all over the place.  Come on, we just need to
      initialize them both at once by set_mm_counter in mm_init (which follows the
      memcpy when forking).
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      404351e6
    • [PATCH] mm: tlb_finish_mmu forget rss · fc2acab3
      Hugh Dickins authored
      zap_pte_range has been counting the pages it frees in tlb->freed, then
      tlb_finish_mmu has used that to update the mm's rss.  That got stranger when I
      added anon_rss, yet updated it by a different route; and stranger when rss and
      anon_rss became mm_counters with special access macros.  And it would no
      longer be viable if we're relying on page_table_lock to stabilize the
      mm_counter, but calling tlb_finish_mmu outside that lock.
      
      Remove the mmu_gather's freed field, let tlb_finish_mmu stick to its own
      business, just decrement the rss mm_counter in zap_pte_range (yes, there was
      some point to batching the update, and a subsequent patch restores that).  And
      forget the anal paranoia of first reading the counter to avoid going negative
      - if rss does go negative, just fix that bug.
      
      Remove the mmu_gather's flushes and avoided_flushes from arm and arm26: no use
      was being made of them.  But arm26 alone was actually using the freed, in the
      way some others use need_flush: give it a need_flush.  arm26 seems to prefer
      spaces to tabs here: respect that.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fc2acab3