03 Jul, 2013 (40 commits)
    • Chen Gang's avatar
      mm/page_alloc.c: add additional checking and return value for the 'table->data' · dacbde09
      Chen Gang authored
      - check the length of the procfs data before copying it into a
        fixed-size array (see the sketch after this list).

      - when __parse_numa_zonelist_order() fails, save the error code for
        return.
      
      - 'char*' --> 'char *' coding style fix
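      As an illustration, a minimal sketch of the first two checks (the buffer
      size, parser name and accepted values are illustrative stand-ins, not the
      exact mm/page_alloc.c code):

      #include <linux/string.h>
      #include <linux/errno.h>

      #define ORDER_BUF_LEN 16        /* illustrative fixed buffer size */

      static char numa_zonelist_order[ORDER_BUF_LEN];

      /* stand-in for __parse_numa_zonelist_order() */
      static int parse_order(const char *s)
      {
              if (strcmp(s, "node") && strcmp(s, "zone"))
                      return -EINVAL;
              return 0;
      }

      static int set_zonelist_order(const char *data, size_t len)
      {
              char saved[ORDER_BUF_LEN];
              int ret;

              if (len >= sizeof(saved))       /* would overflow the fixed-size array */
                      return -EINVAL;
              strncpy(saved, data, len);
              saved[len] = '\0';

              ret = parse_order(saved);
              if (ret)
                      return ret;             /* propagate the parser's error code */

              strcpy(numa_zonelist_order, saved);
              return 0;
      }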
      Signed-off-by: Chen Gang <gang.chen@asianux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dacbde09
    • Mel Gorman's avatar
      mm: remove lru parameter from __lru_cache_add and lru_cache_add_lru · c53954a0
      Mel Gorman authored
      Similar to __pagevec_lru_add, this patch removes the LRU parameter from
      __lru_cache_add and lru_cache_add_lru, as the caller does not control the
      exact LRU the page gets added to.  lru_cache_add_lru gets renamed to
      lru_cache_add, as the name is silly without the lru parameter.  With the
      parameter removed, the caller must indicate whether they want the page
      added to the active or inactive list by setting or clearing PageActive
      respectively.
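      A minimal sketch of the new calling convention (the two callers are
      illustrative; the flag and LRU helpers are the ones named above):

      #include <linux/mm.h>
      #include <linux/swap.h>

      /* Caller asks for the active list by setting PageActive first. */
      static void add_page_active(struct page *page)
      {
              SetPageActive(page);
              lru_cache_add(page);    /* the final LRU is picked at drain time */
      }

      /* Caller asks for the inactive list by clearing PageActive. */
      static void add_page_inactive(struct page *page)
      {
              ClearPageActive(page);
              lru_cache_add(page);
      }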
      
      [akpm@linux-foundation.org: Suggested the patch]
      [gang.chen@asianux.com: fix used-uninitialized warning]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Chen Gang <gang.chen@asianux.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Alexey Lyahkov <alexey.lyashkov@gmail.com>
      Cc: Andrew Perepechko <anserper@ya.ru>
      Cc: Robin Dong <sanbai@taobao.com>
      Cc: Theodore Tso <tytso@mit.edu>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Bernd Schubert <bernd.schubert@fastmail.fm>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c53954a0
    • Mel Gorman's avatar
      mm: remove lru parameter from __pagevec_lru_add and remove parts of pagevec API · a0b8cab3
      Mel Gorman authored
      Now that the LRU to add a page to is decided at LRU-add time, remove the
      misleading lru parameter from __pagevec_lru_add.  A consequence of this
      is that the pagevec_lru_add_file, pagevec_lru_add_anon and similar
      helpers are misleading as the caller no longer has direct control over
      what LRU the page is added to.  Unused helpers are removed by this patch
      and existing users of pagevec_lru_add_file() are converted to use
      lru_cache_add_file() directly and use the per-cpu pagevecs instead of
      creating their own pagevec.
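      A sketch of the conversion pattern described above (the caller is
      illustrative):

      #include <linux/swap.h>

      /* Old style: fill a private pagevec, then pagevec_lru_add_file(&pvec).
       * New style: hand each page to lru_cache_add_file(), which batches on
       * the per-cpu pagevecs internally. */
      static void add_file_page_to_lru(struct page *page)
      {
              lru_cache_add_file(page);
      }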
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Alexey Lyahkov <alexey.lyashkov@gmail.com>
      Cc: Andrew Perepechko <anserper@ya.ru>
      Cc: Robin Dong <sanbai@taobao.com>
      Cc: Theodore Tso <tytso@mit.edu>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Bernd Schubert <bernd.schubert@fastmail.fm>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a0b8cab3
    • Mel Gorman's avatar
      mm: activate !PageLRU pages on mark_page_accessed if page is on local pagevec · 059285a2
      Mel Gorman authored
      If a page is on a pagevec then it is !PageLRU and mark_page_accessed()
      may fail to move the page to the active list as expected.  Now that the
      LRU is selected at LRU drain time, mark such pages PageActive if they are
      on the local pagevec so they get moved to the correct list at LRU drain
      time.  Using a debugging patch it was found that for a simple git
      checkout-based workload, pages were never added to the active file list
      in practice, but with this patch applied they are.
      
      				before   after
      LRU Add Active File                  0      750583
      LRU Add Active Anon            2640587     2702818
      LRU Add Inactive File          8833662     8068353
      LRU Add Inactive Anon              207         200
      
      Note that only pages on the local pagevec are considered on purpose.  A
      !PageLRU page could be in the process of being released, reclaimed,
      migrated or on a remote pagevec that is currently being drained.
      Marking it PageActive is vulnerable to races where PageLRU and Active
      bits are checked at the wrong time.  Page reclaim will trigger
      VM_BUG_ONs but depending on when the race hits, it could also free a
      PageActive page to the page allocator and trigger a bad_page warning.
      Similarly a potential race exists between a per-cpu drain on a pagevec
      list and an activation on a remote CPU.
      
      				lru_add_drain_cpu
      				__pagevec_lru_add
      				  lru = page_lru(page);
      mark_page_accessed
        if (PageLRU(page))
          activate_page
        else
          SetPageActive
      				  SetPageLRU(page);
      				  add_page_to_lru_list(page, lruvec, lru);
      
      In this case a PageActive page is added to the inactive list and later
      the inactive/active stats will get skewed.  While the PageActive checks
      in vmscan could be removed and potentially dealt with, a skew in the
      statistics would be very difficult to detect.  Hence this patch deals
      just with the common case where a page being marked accessed has just
      been added to the local pagevec.
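      A condensed sketch of that common case (the per-cpu pagevec name follows
      this series and is an assumption; this is not the verbatim
      mark_page_accessed() code):

      #include <linux/mm.h>
      #include <linux/pagevec.h>
      #include <linux/percpu.h>

      static DEFINE_PER_CPU(struct pagevec, lru_add_pvec);

      /* Activate a !PageLRU page only if it sits on this CPU's LRU-add
       * pagevec; the drain will then move it to the active list. */
      static void activate_if_on_local_pagevec(struct page *page)
      {
              struct pagevec *pvec = &get_cpu_var(lru_add_pvec);
              int i;

              /* search backwards: recently added pages are the likely match */
              for (i = pagevec_count(pvec) - 1; i >= 0; i--) {
                      if (pvec->pages[i] == page) {
                              SetPageActive(page);
                              break;
                      }
              }
              put_cpu_var(lru_add_pvec);
      }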
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Alexey Lyahkov <alexey.lyashkov@gmail.com>
      Cc: Andrew Perepechko <anserper@ya.ru>
      Cc: Robin Dong <sanbai@taobao.com>
      Cc: Theodore Tso <tytso@mit.edu>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Bernd Schubert <bernd.schubert@fastmail.fm>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      059285a2
    • Mel Gorman's avatar
      mm: pagevec: defer deciding which LRU to add a page to until pagevec drain time · 13f7f789
      Mel Gorman authored
      mark_page_accessed() cannot activate an inactive page that is located on
      an inactive LRU pagevec.  Hints from filesystems may be ignored as a
      result.  In preparation for fixing that problem, this patch removes the
      per-LRU pagevecs and leaves just one pagevec.  The final LRU the page is
      added to is deferred until the pagevec is drained.
      
      This means that fewer pagevecs are available and potentially there is
      greater contention on the LRU lock.  However, this only applies in the
      case where there is an almost perfect mix of file, anon, active and
      inactive pages being added to the LRU.  In practice I expect that we are
      adding a stream of pages of a particular type and that the changes in
      contention will barely be measurable.
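      A sketch of the drain-time decision (simplified from the per-page drain
      callback; names other than page_lru/add_page_to_lru_list are
      illustrative):

      #include <linux/mm.h>
      #include <linux/mm_inline.h>

      /* Called for each page when the pagevec is drained: only now is the
       * target LRU derived from the page's own flags. */
      static void lru_add_page_fn(struct page *page, struct lruvec *lruvec)
      {
              enum lru_list lru = page_lru(page);     /* anon/file, active/inactive */

              SetPageLRU(page);
              add_page_to_lru_list(page, lruvec, lru);
      }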
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Alexey Lyahkov <alexey.lyashkov@gmail.com>
      Cc: Andrew Perepechko <anserper@ya.ru>
      Cc: Robin Dong <sanbai@taobao.com>
      Cc: Theodore Tso <tytso@mit.edu>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Bernd Schubert <bernd.schubert@fastmail.fm>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      13f7f789
    • Mel Gorman's avatar
      mm: add tracepoints for LRU activation and insertions · c6286c98
      Mel Gorman authored
      Andrew Perepechko reported a problem whereby pages are being prematurely
      evicted as the mark_page_accessed() hint is ignored for pages that are
      currently on a pagevec --
      http://www.spinics.net/lists/linux-ext4/msg37340.html .
      
      Alexey Lyahkov and Robin Dong have also reported problems recently that
      could be due to hot pages reaching the end of the inactive list too
      quickly and being reclaimed.
      
      Rather than addressing this on a per-filesystem basis, this series aims
      to fix the mark_page_accessed() interface by deferring the decision on
      which LRU a page is added to until pagevec drain time and by allowing
      mark_page_accessed() to call SetPageActive on a pagevec page.
      
      Patch 1 adds two tracepoints for LRU page activation and insertion. Using
              these tracepoints it's possible to build a model of pages in the
              LRU that can be processed offline.
      
      Patch 2 defers making the decision on what LRU to add a page to until when
      	the pagevec is drained.
      
      Patch 3 searches the local pagevec for pages to mark PageActive on
      	mark_page_accessed. The changelog explains why only the local
      	pagevec is examined.
      
      Patches 4 and 5 tidy up the API.
      
      postmark, a dd-based test, and fs-mark in both single and threaded modes
      were run, but none of them showed any performance degradation or gain as
      a result of the patch.
      
      Using patch 1, I built a *very* basic model of the LRU to examine
      offline what the average age of different page types on the LRU was in
      milliseconds.  Of course, capturing the trace distorts the test as it's
      written to local disk but it does not matter for the purposes of this
      test.  The average age of pages in milliseconds was
      
      				    vanilla deferdrain
      Average age mapped anon:               1454       1250
      Average age mapped file:             127841     155552
      Average age unmapped anon:               85        235
      Average age unmapped file:            73633      38884
      Average age unmapped buffers:         74054     116155
      
      The LRU activity was mostly files which you'd expect for a dd-based
      workload.  Note that the average age of buffer pages is increased by the
      series and it is expected this is due to the fact that the buffer pages
      are now getting added to the active list when drained from the pagevecs.
      Note that the average age of the unmapped file data is decreased as they
      are still added to the inactive list and are reclaimed before the
      buffers.
      
      There is no guarantee this is a universal win for all workloads and it
      would be nice if the filesystem people gave some thought as to whether
      this decision is generally a win or a loss.
      
      This patch:
      
      Using these tracepoints it is possible to model LRU activity and the
      average residency of pages of different types.  This can be used to
      debug problems related to premature reclaim of pages of particular
      types.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Alexey Lyahkov <alexey.lyashkov@gmail.com>
      Cc: Andrew Perepechko <anserper@ya.ru>
      Cc: Robin Dong <sanbai@taobao.com>
      Cc: Theodore Tso <tytso@mit.edu>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Bernd Schubert <bernd.schubert@fastmail.fm>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c6286c98
    • Li Zefan's avatar
      memcg: update TODO list in Documentation · f968ef1c
      Li Zefan authored
      hugetlb cgroup has already been implemented.
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Rob Landley <rob@landley.net>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f968ef1c
    • HATAYAMA Daisuke's avatar
      vmcore: support mmap() on /proc/vmcore · 83086978
      HATAYAMA Daisuke authored
      This patch introduces mmap_vmcore().
      
      Don't permit writable or executable mappings, even via mprotect(),
      because this mmap() is aimed at reading crash dump memory.  A
      non-writable mapping is also a requirement of remap_pfn_range() when
      mapping linear pages onto non-consecutive physical pages; see
      is_cow_mapping().

      Set the VM_MIXEDMAP flag so that memory can be remapped by
      remap_pfn_range and by remap_vmalloc_range_partial at the same time for
      a single vma.  do_munmap() can then correctly clean up a vma that was
      partially remapped by the two functions in the abnormal case.  See
      zap_pte_range(), vm_normal_page() and their comments for details.
      
      On x86-32 PAE kernels, mmap() supports at most 16TB of memory.  This
      limitation comes from the fact that the third argument of
      remap_pfn_range(), pfn, is a 32-bit unsigned long on x86-32.
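      A simplified sketch of the mapping strategy (not the real mmap_vmcore();
      the chunk parameters are illustrative and error handling is
      abbreviated):

      #include <linux/mm.h>
      #include <linux/vmalloc.h>

      static int vmcore_mmap_sketch(struct vm_area_struct *vma, void *notes_buf,
                                    size_t notes_sz, unsigned long chunk_pfn,
                                    size_t chunk_sz)
      {
              unsigned long uaddr = vma->vm_start;

              if (vma->vm_flags & (VM_WRITE | VM_EXEC))
                      return -EPERM;                  /* read-only dump access */
              vma->vm_flags &= ~(VM_MAYWRITE | VM_MAYEXEC);   /* no mprotect() upgrade */
              vma->vm_flags |= VM_MIXEDMAP;           /* pfn-mapped + vmalloc-backed */

              /* ELF note segment buffer lives in 2nd-kernel vmalloc memory */
              if (remap_vmalloc_range_partial(vma, uaddr, notes_buf, notes_sz))
                      goto fail;
              uaddr += notes_sz;

              /* old-kernel memory chunk is remapped by pfn */
              if (remap_pfn_range(vma, uaddr, chunk_pfn, chunk_sz,
                                  vma->vm_page_prot))
                      goto fail;
              return 0;
      fail:
              do_munmap(vma->vm_mm, vma->vm_start, vma->vm_end - vma->vm_start);
              return -EAGAIN;
      }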
      
      [akpm@linux-foundation.org: use min(), switch to conventional error-unwinding approach]
      Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Lisa Mitchell <lisa.mitchell@hp.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Tested-by: Maxim Uvarov <muvarov@gmail.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      83086978
    • HATAYAMA Daisuke's avatar
      vmcore: calculate vmcore file size from buffer size and total size of vmcore objects · 591ff716
      HATAYAMA Daisuke authored
      The previous patches newly added holes before each chunk of memory and
      the holes need to be counted in the vmcore file size.  There are two
      ways to calculate the file size:

      1) suppose m is a pointer to the last vmcore object in vmcore_list.
         Then the file size is (m->offset + m->size), or

      2) calculate the sum of the sizes of the buffers for the ELF header,
         program headers, ELF note segments and the objects in vmcore_list.
      
      Although 1) is more direct and simpler than 2), 2) seems better in that
      it reflects the internal object structure of /proc/vmcore.  Thus, this
      patch changes get_vmcore_size_elf{64,32} so that they calculate the size
      in the way of 2).
      
      As a result, both get_vmcore_size_elf{64, 32} have the same definition.
      Merge them as get_vmcore_size.
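      A sketch of approach 2) (the variable names follow this series'
      changelogs):

      #include <linux/types.h>
      #include <linux/list.h>
      #include <linux/kcore.h>        /* struct vmcore */

      /* vmcore file size = ELF headers + ELF note segment + memory chunks */
      static u64 get_vmcore_size_sketch(size_t elfcorebuf_sz, size_t elfnotes_sz,
                                        struct list_head *vc_list)
      {
              u64 size = elfcorebuf_sz + elfnotes_sz;
              struct vmcore *m;

              list_for_each_entry(m, vc_list, list)
                      size += m->size;
              return size;
      }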
      Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Lisa Mitchell <lisa.mitchell@hp.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      591ff716
    • HATAYAMA Daisuke's avatar
      vmcore: allow user process to remap ELF note segment buffer · ef9e78fd
      HATAYAMA Daisuke authored
      Now the ELF note segment has been copied into a buffer in vmalloc
      memory.  To allow a user process to remap the ELF note segment buffer
      with remap_vmalloc_range_partial, the corresponding VM area object has
      to have the VM_USERMAP flag set.
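      A sketch of the flag manipulation described above (the
      lookup-and-flag pattern, not the exact vmcore code):

      #include <linux/vmalloc.h>
      #include <linux/errno.h>

      static int mark_user_mappable(void *vmalloc_buf)
      {
              struct vm_struct *vm = find_vm_area(vmalloc_buf);

              if (!vm)
                      return -EINVAL;
              vm->flags |= VM_USERMAP;        /* allow remapping to user-space */
              return 0;
      }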
      
      [akpm@linux-foundation.org: use the conventional comment layout]
      Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Lisa Mitchell <lisa.mitchell@hp.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ef9e78fd
    • HATAYAMA Daisuke's avatar
      vmcore: allocate ELF note segment in the 2nd kernel vmalloc memory · 087350c9
      HATAYAMA Daisuke authored
      The reasons why we don't allocate the ELF note segment in the 1st kernel
      (old memory) on a page boundary are to keep backward compatibility for
      old kernels, and that doing so would waste a fair amount of memory due
      to the round-up operation needed to fit the memory to a page boundary,
      since most of the buffers are in the per-cpu area.

      ELF notes are per-cpu, so the total size of the ELF note segments
      depends on the number of CPUs.  The current maximum number of CPUs on
      x86_64 is 5192, and there's already a system with 4192 CPUs from SGI,
      where the total size amounts to 1MB.  This can become larger in the near
      future, or possibly even now on another architecture that has a larger
      note size per CPU.  Thus, to avoid the case where memory allocation for
      a large block fails, we allocate the vmcore objects in vmalloc memory.
      
      This patch adds the elfnotes_buf and elfnotes_sz variables to keep a
      pointer to the ELF note segment buffer and its size.  There is no longer
      a vmcore object corresponding to the ELF note segment in vmcore_list.
      Accordingly, read_vmcore() gets a new case for the ELF note segment, and
      set_vmcore_list_offsets_elf{64,32}() and other helper functions start
      calculating offsets from the sum of the size of the ELF headers and the
      size of the ELF note segment.
      
      [akpm@linux-foundation.org: use min(), fix error-path vzalloc() leaks]
      Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Lisa Mitchell <lisa.mitchell@hp.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      087350c9
    • HATAYAMA Daisuke's avatar
      vmalloc: introduce remap_vmalloc_range_partial · e69e9d4a
      HATAYAMA Daisuke authored
      We want to allocate the ELF note segment buffer in the 2nd kernel in
      vmalloc space and remap it to user-space, in order to reduce the risk
      that memory allocation fails on a system with a huge number of CPUs and
      hence a huge ELF note segment that exceeds the order-11 block size.

      Although there's already remap_vmalloc_range for the purpose of
      remapping vmalloc memory to user-space, its interface requires the
      user-space range to be specified via a whole vma.  mmap() on
      /proc/vmcore needs to remap a range across multiple objects, so an
      interface that requires the vma to cover the full range is problematic.

      This patch introduces remap_vmalloc_range_partial, which receives the
      user-space range as a pair of base address and size and can be used for
      the mmap-on-/proc/vmcore case.
      
      remap_vmalloc_range is rewritten using remap_vmalloc_range_partial.
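      An outline of that relationship, assuming the interface described above
      (treat it as a sketch rather than the exact final code):

      #include <linux/mm.h>
      #include <linux/vmalloc.h>

      /* The full-vma helper becomes a thin wrapper: the user-space range is
       * simply the whole vma. */
      int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
                              unsigned long pgoff)
      {
              return remap_vmalloc_range_partial(vma, vma->vm_start,
                                                 addr + (pgoff << PAGE_SHIFT),
                                                 vma->vm_end - vma->vm_start);
      }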
      
      [akpm@linux-foundation.org: use PAGE_ALIGNED()]
      Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Lisa Mitchell <lisa.mitchell@hp.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e69e9d4a
    • HATAYAMA Daisuke's avatar
      vmalloc: make find_vm_area check in range · cef2ac3f
      HATAYAMA Daisuke authored
      Currently, __find_vmap_area searches for the kernel VM area starting at
      a given address.  This patch changes this behavior so that it searches
      for the kernel VM area to which the address belongs.  This change is
      needed by remap_vmalloc_range_partial, to be introduced in a later
      patch, which can receive any position within a kernel VM area as its
      target address.

      This patch changes the search condition from (addr > va->va_start) to
      (addr >= va->va_end), which is equivalent for this search because each
      kernel VM area is non-overlapping.
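      A sketch of the resulting search (mirroring the internal rb-tree walk;
      treat the details as illustrative):

      /* returns the kernel VM area containing addr, not the one starting at it */
      static struct vmap_area *find_area_containing(unsigned long addr)
      {
              struct rb_node *n = vmap_area_root.rb_node;

              while (n) {
                      struct vmap_area *va = rb_entry(n, struct vmap_area, rb_node);

                      if (addr < va->va_start)
                              n = n->rb_left;
                      else if (addr >= va->va_end)    /* was: addr > va->va_start */
                              n = n->rb_right;
                      else
                              return va;              /* va_start <= addr < va_end */
              }
              return NULL;
      }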
      Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Lisa Mitchell <lisa.mitchell@hp.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cef2ac3f
    • HATAYAMA Daisuke's avatar
      vmcore: treat memory chunks referenced by PT_LOAD program header entries in... · 7f614cd1
      HATAYAMA Daisuke authored
      vmcore: treat memory chunks referenced by PT_LOAD program header entries in page-size boundary in vmcore_list
      
      Treat memory chunks referenced by PT_LOAD program header entries on
      page-size boundaries in vmcore_list.  Formally, for each range [start,
      end], we set up the corresponding vmcore object in vmcore_list to cover
      [rounddown(start, PAGE_SIZE), roundup(end, PAGE_SIZE)].

      This change affects the layout of /proc/vmcore.  The gaps generated by
      the rearrangement are newly made visible to applications as holes.
      Concretely, they are the two ranges [rounddown(start, PAGE_SIZE), start]
      and [end, roundup(end, PAGE_SIZE)].
      
      Suppose variable m points at a vmcore object in vmcore_list, and
      variable phdr points at the program header of PT_LOAD type that the variable
      m corresponds to.  Then, pictorially:
      
        m->offset                    +---------------+
                                     | hole          |
      phdr->p_offset =               +---------------+
        m->offset + (paddr - start)  |               |\
                                     | kernel memory | phdr->p_memsz
                                     |               |/
                                     +---------------+
                                     | hole          |
        m->offset + m->size          +---------------+
      
      where m->offset and m->offset + m->size are always page-size aligned.
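      The rounding itself is just rounddown()/roundup() against PAGE_SIZE; a
      minimal sketch:

      #include <linux/kernel.h>       /* rounddown(), roundup() */
      #include <linux/mm.h>           /* PAGE_SIZE */

      /* [start, end] comes from the PT_LOAD entry; the vmcore object covers
       * the page-aligned superset of it. */
      static void chunk_to_vmcore_range(u64 start, u64 end,
                                        u64 *obj_start, u64 *obj_size)
      {
              *obj_start = rounddown(start, PAGE_SIZE);
              *obj_size  = roundup(end, PAGE_SIZE) - *obj_start;
      }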
      Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Lisa Mitchell <lisa.mitchell@hp.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7f614cd1
    • HATAYAMA Daisuke's avatar
      vmcore: allocate buffer for ELF headers on page-size alignment · f2bdacdd
      HATAYAMA Daisuke authored
      Allocate ELF headers on page-size boundary using __get_free_pages()
      instead of kmalloc().
      
      A later patch will merge the PT_NOTE entries into a single unique one
      and decrease the buffer size actually used.  Keep the original buffer
      size in the variable elfcorebuf_sz_orig so the buffer can be freed
      later, and keep the actually used buffer size, rounded up to a page-size
      boundary, separately in the variable elfcorebuf_sz.

      The part of the ELF buffer exported via /proc/vmcore has size
      elfcorebuf_sz.

      The range left behind by the merged and removed PT_NOTE entries, i.e.
      [elfcorebuf_sz, elfcorebuf_sz_orig], is filled with 0.
      
      Use size of the ELF headers as an initial offset value in
      set_vmcore_list_offsets_elf{64,32} and
      process_ptload_program_headers_elf{64,32} in order to indicate that the
      offset includes the holes towards the page boundary.
      
      As a result, both set_vmcore_list_offsets_elf{64,32} have the same
      definition.  Merge them as set_vmcore_list_offsets.
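      A sketch of the allocation and teardown described above (bookkeeping
      simplified):

      #include <linux/mm.h>
      #include <linux/gfp.h>

      static char *elfcorebuf;
      static size_t elfcorebuf_sz;            /* size exported via /proc/vmcore */
      static size_t elfcorebuf_sz_orig;       /* allocated size, used for freeing */

      static int alloc_elfcorebuf(size_t elf_headers_sz)
      {
              elfcorebuf_sz = elfcorebuf_sz_orig = elf_headers_sz;
              elfcorebuf = (char *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
                                                    get_order(elfcorebuf_sz_orig));
              return elfcorebuf ? 0 : -ENOMEM;        /* elfcorebuf_sz shrinks later */
      }

      static void free_elfcorebuf(void)
      {
              free_pages((unsigned long)elfcorebuf, get_order(elfcorebuf_sz_orig));
              elfcorebuf = NULL;
      }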
      
      [akpm@linux-foundation.org: add free_elfcorebuf(), cleanups]
      Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Lisa Mitchell <lisa.mitchell@hp.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f2bdacdd
    • HATAYAMA Daisuke's avatar
      vmcore: clean up read_vmcore() · b27eb186
      HATAYAMA Daisuke authored
      Rewrite the part of read_vmcore() that reads objects in vmcore_list in
      the same way as the part that reads the ELF headers, which removes some
      duplicated and redundant code.
      Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Lisa Mitchell <lisa.mitchell@hp.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b27eb186
    • Andrew Morton's avatar
      include/linux/mm.h: add PAGE_ALIGNED() helper · 0fa73b86
      Andrew Morton authored
      To test whether an address is aligned to PAGE_SIZE.
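      The helper amounts to an IS_ALIGNED() check against PAGE_SIZE; a
      plausible sketch of the definition and a typical use:

      #include <linux/kernel.h>       /* IS_ALIGNED() */
      #include <linux/mm.h>           /* PAGE_SIZE */

      #define PAGE_ALIGNED(addr)      IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)

      static bool buffer_is_page_aligned(const void *buf)
      {
              return PAGE_ALIGNED(buf);       /* true when buf sits on a page boundary */
      }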
      
      Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>,
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0fa73b86
    • Cody P Schafer's avatar
      memory_hotplug: use pgdat_resize_lock() in __offline_pages() · d702909f
      Cody P Schafer authored
      mmzone.h documents node_size_lock (which pgdat_resize_lock() locks) as
      follows:
      
              * Must be held any time you expect node_start_pfn, node_present_pages
              * or node_spanned_pages stay constant.  [...]
      
      So actually hold it when we update node_present_pages in __offline_pages().
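      A sketch of the locking pattern this applies (the same pattern appears
      in the online_pages() companion patch):

      #include <linux/mmzone.h>
      #include <linux/memory_hotplug.h>       /* pgdat_resize_lock()/unlock() */

      static void account_offlined_pages(struct zone *zone,
                                         unsigned long offlined_pages)
      {
              struct pglist_data *pgdat = zone->zone_pgdat;
              unsigned long flags;

              pgdat_resize_lock(pgdat, &flags);
              pgdat->node_present_pages -= offlined_pages;
              pgdat_resize_unlock(pgdat, &flags);
      }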
      
      [akpm@linux-foundation.org: fix build]
      Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d702909f
    • Cody P Schafer's avatar
      memory_hotplug: use pgdat_resize_lock() in online_pages() · aa47228a
      Cody P Schafer authored
      mmzone.h documents node_size_lock (which pgdat_resize_lock() locks) as
      follows:
      
              * Must be held any time you expect node_start_pfn, node_present_pages
              * or node_spanned_pages stay constant.  [...]
      
      So actually hold it when we update node_present_pages in online_pages().
      Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      aa47228a
    • Cody P Schafer's avatar
      114d4b79
    • Mel Gorman's avatar
      fs: nfs: inform the VM about pages being committed or unstable · f919b196
      Mel Gorman authored
      VM page reclaim uses dirty and writeback page states to determine if
      flushers are cleaning pages too slowly and whether page reclaim should
      stall waiting on flushers to catch up.  Page state in NFS is a bit more
      complex: a clean page can be unreclaimable because it is unstable, which
      is effectively "dirty" from the perspective of the VM in reclaim
      context.  Similarly, if the inode is currently being committed then it's
      similar to being under writeback.

      This patch adds an is_dirty_writeback() handler for NFS that checks
      whether a page's backing inode is being committed and should be
      accounted as writeback, and whether a page has private state indicating
      that it is effectively dirty.
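      A sketch of such a handler (the hook signature matches
      is_dirty_writeback(); the two tests are paraphrased stand-ins for the
      NFS-internal checks, not verified NFS code):

      #include <linux/fs.h>
      #include <linux/mm.h>

      /* stand-in for the NFS-internal "inode-wide commit in flight?" check */
      static bool inode_commit_in_progress(struct inode *inode)
      {
              return false;
      }

      static void nfs_check_dirty_writeback_sketch(struct page *page,
                                                   bool *dirty, bool *writeback)
      {
              /* unstable/uncommitted private state counts as dirty for reclaim */
              if (PagePrivate(page))
                      *dirty = true;

              /* a commit in progress behaves like writeback */
              if (inode_commit_in_progress(page->mapping->host))
                      *writeback = true;
      }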
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Cc: Zlatko Calusic <zcalusic@bitsync.net>
      Cc: dormando <dormando@rydia.net>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f919b196
    • Mel Gorman's avatar
      mm: vmscan: take page buffers dirty and locked state into account · b4597226
      Mel Gorman authored
      Page reclaim keeps track of dirty and under-writeback pages and uses
      this information to determine if wait_iff_congested() should stall or if
      kswapd should begin writing back pages.  This fails to account for
      buffer pages that can be under writeback but not PageWriteback, which is
      the case for filesystems like ext3 in ordered mode.  Furthermore,
      PageDirty buffer pages can have all their buffers clean, in which case
      writepage does no IO and the page should not be accounted as congested.

      This patch adds an address_space operation that filesystems may
      optionally use to check if a page is really dirty or really under
      writeback.  An implementation is provided for buffer_heads and is used
      for block operations and ext3 in ordered mode.  By default the page
      flags are obeyed.
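      A sketch of the buffer_head-based check (close in spirit to the
      buffer_heads implementation mentioned above):

      #include <linux/buffer_head.h>

      static void buffers_dirty_writeback(struct page *page,
                                          bool *dirty, bool *writeback)
      {
              struct buffer_head *head, *bh;

              *dirty = false;
              *writeback = false;

              if (!page_has_buffers(page))
                      return;

              bh = head = page_buffers(page);
              do {
                      if (buffer_locked(bh))
                              *writeback = true;      /* buffer under IO */
                      if (buffer_dirty(bh))
                              *dirty = true;
                      bh = bh->b_this_page;
              } while (bh != head);
      }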
      
      Credit goes to Jan Kara for identifying that the page flags alone are
      not sufficient for ext3 and sanity checking a number of ideas on how the
      problem could be addressed.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Cc: Zlatko Calusic <zcalusic@bitsync.net>
      Cc: dormando <dormando@rydia.net>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b4597226
    • Mel Gorman's avatar
      mm: vmscan: treat pages marked for immediate reclaim as zone congestion · d04e8acd
      Mel Gorman authored
      Currently a zone will only be marked congested if the underlying BDI is
      congested but if dirty pages are spread across zones it is possible that
      an individual zone is full of dirty pages without being congested.  The
      impact is that the zone gets scanned very quickly, potentially reclaiming
      really clean pages.  This patch treats pages marked for immediate
      reclaim as congested for the purposes of marking a zone ZONE_CONGESTED
      and stalling in wait_iff_congested.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Cc: Zlatko Calusic <zcalusic@bitsync.net>
      Cc: dormando <dormando@rydia.net>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d04e8acd
    • Mel Gorman's avatar
      mm: vmscan: move direct reclaim wait_iff_congested into shrink_list · 8e950282
      Mel Gorman authored
      shrink_inactive_list makes decisions on whether to stall based on the
      number of dirty pages encountered.  The wait_iff_congested() call in
      shrink_page_list does no such thing and it's arbitrary.
      
      This patch moves the decision on whether to set ZONE_CONGESTED and the
      wait_iff_congested call into shrink_page_list.  This keeps all the
      decisions on whether to stall or not in the one place.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Cc: Zlatko Calusic <zcalusic@bitsync.net>
      Cc: dormando <dormando@rydia.net>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8e950282
    • Mel Gorman's avatar
      mm: vmscan: set zone flags before blocking · f7ab8db7
      Mel Gorman authored
      In shrink_page_list a decision may be made to stall and flag a zone as
      ZONE_WRITEBACK so that if a large number of unqueued dirty pages are
      encountered later then the reclaimer will stall.  Set ZONE_WRITEBACK
      before potentially going to sleep so it is noticed sooner.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Cc: Zlatko Calusic <zcalusic@bitsync.net>
      Cc: dormando <dormando@rydia.net>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f7ab8db7
    • Mel Gorman's avatar
      mm: vmscan: stall page reclaim after a list of pages have been processed · b1a6f21e
      Mel Gorman authored
      Commit "mm: vmscan: Block kswapd if it is encountering pages under
      writeback" blocks page reclaim if it encounters pages under writeback
      marked for immediate reclaim.  It blocks while pages are still isolated
      from the LRU which is unnecessary.  This patch defers the blocking until
      after the isolated pages have been processed and tidies up some of the
      comments.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Cc: Zlatko Calusic <zcalusic@bitsync.net>
      Cc: dormando <dormando@rydia.net>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b1a6f21e
    • Mel Gorman's avatar
      mm: vmscan: stall page reclaim and writeback pages based on dirty/writepage pages encountered · e2be15f6
      Mel Gorman authored
      Further testing of the "Reduce system disruption due to kswapd"
      discovered a few problems.  First and foremost, it's possible for pages
      under writeback to be freed which will lead to badness.  Second, as
      pages were not being swapped the file LRU was being scanned faster and
      clean file pages were being reclaimed.  In some cases this results in
      increased read IO to re-read data from disk.  Third, more pages were
      being written from kswapd context which can adversely affect IO
      performance.  Lastly, it was observed that PageDirty pages are not
      necessarily dirty on all filesystems (buffers can be clean while
      PageDirty is set and ->writepage generates no IO) and not all
      filesystems set PageWriteback when the page is being written (e.g.
      ext3).  This disconnect confuses the reclaim stalling logic.  This
      follow-up series is aimed at these problems.
      
      The tests were based on three kernels
      
      vanilla:	kernel 3.9 as that is what the current mmotm uses as a baseline
      mmotm-20130522	is mmotm as of 22nd May with "Reduce system disruption due to
      		kswapd" applied on top as per what should be in Andrew's tree
      		right now
      lessdisrupt-v7r10 is this follow-up series on top of the mmotm kernel
      
      The first test used memcached+memcachetest while some background IO was
      in progress as implemented by the parallel IO tests implement in MM
      Tests.  memcachetest benchmarks how many operations/second memcached can
      service.  It starts with no background IO on a freshly created ext4
      filesystem and then re-runs the test with larger amounts of IO in the
      background to roughly simulate a large copy in progress.  The
      expectation is that the IO should have little or no impact on
      memcachetest which is running entirely in memory.
      
      parallelio
                                                   3.9.0                       3.9.0                       3.9.0
                                                 vanilla          mm1-mmotm-20130522       mm1-lessdisrupt-v7r10
      Ops memcachetest-0M             23117.00 (  0.00%)          22780.00 ( -1.46%)          22763.00 ( -1.53%)
      Ops memcachetest-715M           23774.00 (  0.00%)          23299.00 ( -2.00%)          22934.00 ( -3.53%)
      Ops memcachetest-2385M           4208.00 (  0.00%)          24154.00 (474.00%)          23765.00 (464.76%)
      Ops memcachetest-4055M           4104.00 (  0.00%)          25130.00 (512.33%)          24614.00 (499.76%)
      Ops io-duration-0M                  0.00 (  0.00%)              0.00 (  0.00%)              0.00 (  0.00%)
      Ops io-duration-715M               12.00 (  0.00%)              7.00 ( 41.67%)              6.00 ( 50.00%)
      Ops io-duration-2385M             116.00 (  0.00%)             21.00 ( 81.90%)             21.00 ( 81.90%)
      Ops io-duration-4055M             160.00 (  0.00%)             36.00 ( 77.50%)             35.00 ( 78.12%)
      Ops swaptotal-0M                    0.00 (  0.00%)              0.00 (  0.00%)              0.00 (  0.00%)
      Ops swaptotal-715M             140138.00 (  0.00%)             18.00 ( 99.99%)             18.00 ( 99.99%)
      Ops swaptotal-2385M            385682.00 (  0.00%)              0.00 (  0.00%)              0.00 (  0.00%)
      Ops swaptotal-4055M            418029.00 (  0.00%)              0.00 (  0.00%)              0.00 (  0.00%)
      Ops swapin-0M                       0.00 (  0.00%)              0.00 (  0.00%)              0.00 (  0.00%)
      Ops swapin-715M                   144.00 (  0.00%)              0.00 (  0.00%)              0.00 (  0.00%)
      Ops swapin-2385M               134227.00 (  0.00%)              0.00 (  0.00%)              0.00 (  0.00%)
      Ops swapin-4055M               125618.00 (  0.00%)              0.00 (  0.00%)              0.00 (  0.00%)
      Ops minorfaults-0M            1536429.00 (  0.00%)        1531632.00 (  0.31%)        1533541.00 (  0.19%)
      Ops minorfaults-715M          1786996.00 (  0.00%)        1612148.00 (  9.78%)        1608832.00 (  9.97%)
      Ops minorfaults-2385M         1757952.00 (  0.00%)        1614874.00 (  8.14%)        1613541.00 (  8.21%)
      Ops minorfaults-4055M         1774460.00 (  0.00%)        1633400.00 (  7.95%)        1630881.00 (  8.09%)
      Ops majorfaults-0M                  1.00 (  0.00%)              0.00 (  0.00%)              0.00 (  0.00%)
      Ops majorfaults-715M              184.00 (  0.00%)            167.00 (  9.24%)            166.00 (  9.78%)
      Ops majorfaults-2385M           24444.00 (  0.00%)            155.00 ( 99.37%)             93.00 ( 99.62%)
      Ops majorfaults-4055M           21357.00 (  0.00%)            147.00 ( 99.31%)            134.00 ( 99.37%)
      
      memcachetest is the transactions/second reported by memcachetest. In
              the vanilla kernel note that performance drops from around
              23K/sec to just over 4K/second when there is 2385M of IO going
              on in the background. With current mmotm, there is no collapse
      	in performance and with this follow-up series there is little
      	change.
      
      swaptotal is the total amount of swap traffic. With mmotm and the follow-up
      	series, the total amount of swapping is much reduced.
      
                                       3.9.0       3.9.0       3.9.0
                               vanilla   mm1-mmotm-20130522   mm1-lessdisrupt-v7r10
      Minor Faults                  11160152    10706748    10622316
      Major Faults                     46305         755         678
      Swap Ins                        260249           0           0
      Swap Outs                       683860          18          18
      Direct pages scanned                 0         678        2520
      Kswapd pages scanned           6046108     8814900     1639279
      Kswapd pages reclaimed         1081954     1172267     1094635
      Direct pages reclaimed               0         566        2304
      Kswapd efficiency                  17%         13%         66%
      Kswapd velocity               5217.560    7618.953    1414.879
      Direct efficiency                 100%         83%         91%
      Direct velocity                  0.000       0.586       2.175
      Percentage direct scans             0%          0%          0%
      Zone normal velocity          5105.086    6824.681     671.158
      Zone dma32 velocity            112.473     794.858     745.896
      Zone dma velocity                0.000       0.000       0.000
      Page writes by reclaim     1929612.000 6861768.000   32821.000
      Page writes file               1245752     6861750       32803
      Page writes anon                683860          18          18
      Page reclaim immediate            7484          40         239
      Sector Reads                   1130320       93996       86900
      Sector Writes                 13508052    10823500    11804436
      Page rescued immediate               0           0           0
      Slabs scanned                    33536       27136       18560
      Direct inode steals                  0           0           0
      Kswapd inode steals               8641        1035           0
      Kswapd skipped wait                  0           0           0
      THP fault alloc                      8          37          33
      THP collapse alloc                 508         552         515
      THP splits                          24           1           1
      THP fault fallback                   0           0           0
      THP collapse fail                    0           0           0
      
      There are a number of observations to make here
      
      1. Swap outs are almost eliminated. Swap ins are 0 indicating that the
         pages swapped were really unused anonymous pages. Related to that,
         major faults are much reduced.
      
      2. kswapd efficiency was impacted by the initial series but with these
         follow-up patches, the efficiency is now at 66% indicating that far
         fewer pages were skipped during scanning due to dirty or writeback
         pages.
      
      3. kswapd velocity is reduced indicating that fewer pages are being scanned
         with the follow-up series as kswapd now stalls when the tail of the
         LRU queue is full of unqueued dirty pages. The stall gives flushers a
         chance to catch-up so kswapd can reclaim clean pages when it wakes
      
      4. In light of Zlatko's recent reports about zone scanning imbalances,
         mmtests now reports scanning velocity on a per-zone basis. With mainline,
         you can see that the scanning activity is dominated by the Normal
         zone with over 45 times more scanning in Normal than the DMA32 zone.
         With the series currently in mmotm, the ratio is slightly better but it
         is still the case that the bulk of scanning is in the highest zone. With
         this follow-up series, the ratio of scanning between the Normal and
         DMA32 zone is roughly equal.
      
      5. As Dave Chinner observed, the current patches in mmotm increased the
         number of pages written from kswapd context which is expected to
         adversely impact IO performance. With the follow-up patches, far fewer
         pages are written from kswapd context than the mainline kernel.
      
      6. With the series in mmotm, fewer inodes were reclaimed by kswapd. With
         the follow-up series, there is less slab shrinking activity and no inodes
         were reclaimed.
      
      7. Note that "Sectors Read" is drastically reduced implying that the source
         data being used for the IO is not being aggressively discarded due to
         page reclaim skipping over dirty pages and reclaiming clean pages. Note
         that the reducion in reads could also be due to inode data not being
         re-read from disk after a slab shrink.
      
                             3.9.0       3.9.0       3.9.0
                     vanilla   mm1-mmotm-20130522   mm1-lessdisrupt-v7r10
      Mean sda-avgqz        166.99       32.09       33.44
      Mean sda-await        853.64      192.76      185.43
      Mean sda-r_await        6.31        9.24        5.97
      Mean sda-w_await     2992.81      202.65      192.43
      Max  sda-avgqz       1409.91      718.75      698.98
      Max  sda-await       6665.74     3538.00     3124.23
      Max  sda-r_await       58.96      111.95       58.00
      Max  sda-w_await    28458.94     3977.29     3148.61
      
      In light of the changes in writes from reclaim context, the number of
      reads and Dave Chinner's concerns about IO performance, I took a closer
      look at the IO stats for the test disk.  A few observations:
      
      1. The average queue size is reduced by the initial series and roughly
         the same with this follow up.
      
      2. Average wait times for writes are reduced and as the IO
         is completing faster it at least implies that the gain is because
         flushers are writing the files efficiently instead of page reclaim
         getting in the way.
      
      3. The reduction in maximum write latency is staggering. 28 seconds down
         to 3 seconds.
      
      Jan Kara asked how NFS is affected by all of this.  Unstable pages can
      be taken into account, as one of the patches in the series shows, but it
      is still the case that filesystems with unusual handling of dirty or
      writeback pages could still be treated better.
      
      Tests like postmark, fsmark and largedd showed up nothing useful. On my test
      setup, pages are simply not being written back from reclaim context with or
      without the patches and there are no changes in performance. My test setup
      probably is just not strong enough network-wise to be really interesting.
      
      I ran a longer-lived memcached test with IO going to NFS instead of a local disk
      
      parallelio
                                                   3.9.0                       3.9.0                       3.9.0
                                                 vanilla          mm1-mmotm-20130522       mm1-lessdisrupt-v7r10
      Ops memcachetest-0M             23323.00 (  0.00%)          23241.00 ( -0.35%)          23321.00 ( -0.01%)
      Ops memcachetest-715M           25526.00 (  0.00%)          24763.00 ( -2.99%)          23242.00 ( -8.95%)
      Ops memcachetest-2385M           8814.00 (  0.00%)          26924.00 (205.47%)          23521.00 (166.86%)
      Ops memcachetest-4055M           5835.00 (  0.00%)          26827.00 (359.76%)          25560.00 (338.05%)
      Ops io-duration-0M                  0.00 (  0.00%)              0.00 (  0.00%)              0.00 (  0.00%)
      Ops io-duration-715M               65.00 (  0.00%)             71.00 ( -9.23%)             11.00 ( 83.08%)
      Ops io-duration-2385M             129.00 (  0.00%)             94.00 ( 27.13%)             53.00 ( 58.91%)
      Ops io-duration-4055M             301.00 (  0.00%)            100.00 ( 66.78%)            108.00 ( 64.12%)
      Ops swaptotal-0M                    0.00 (  0.00%)              0.00 (  0.00%)              0.00 (  0.00%)
      Ops swaptotal-715M              14394.00 (  0.00%)            949.00 ( 93.41%)             63.00 ( 99.56%)
      Ops swaptotal-2385M            401483.00 (  0.00%)          24437.00 ( 93.91%)          30118.00 ( 92.50%)
      Ops swaptotal-4055M            554123.00 (  0.00%)          35688.00 ( 93.56%)          63082.00 ( 88.62%)
      Ops swapin-0M                       0.00 (  0.00%)              0.00 (  0.00%)              0.00 (  0.00%)
      Ops swapin-715M                  4522.00 (  0.00%)            560.00 ( 87.62%)             63.00 ( 98.61%)
      Ops swapin-2385M               169861.00 (  0.00%)           5026.00 ( 97.04%)          13917.00 ( 91.81%)
      Ops swapin-4055M               192374.00 (  0.00%)          10056.00 ( 94.77%)          25729.00 ( 86.63%)
      Ops minorfaults-0M            1445969.00 (  0.00%)        1520878.00 ( -5.18%)        1454024.00 ( -0.56%)
      Ops minorfaults-715M          1557288.00 (  0.00%)        1528482.00 (  1.85%)        1535776.00 (  1.38%)
      Ops minorfaults-2385M         1692896.00 (  0.00%)        1570523.00 (  7.23%)        1559622.00 (  7.87%)
      Ops minorfaults-4055M         1654985.00 (  0.00%)        1581456.00 (  4.44%)        1596713.00 (  3.52%)
      Ops majorfaults-0M                  0.00 (  0.00%)              1.00 (-99.00%)              0.00 (  0.00%)
      Ops majorfaults-715M              763.00 (  0.00%)            265.00 ( 65.27%)             75.00 ( 90.17%)
      Ops majorfaults-2385M           23861.00 (  0.00%)            894.00 ( 96.25%)           2189.00 ( 90.83%)
      Ops majorfaults-4055M           27210.00 (  0.00%)           1569.00 ( 94.23%)           4088.00 ( 84.98%)
      
      1. Performance does not collapse due to IO which is good. IO is also completing
         faster. Note with mmotm, IO completes in a third of the time and faster again
         with this series applied
      
      2. Swapping is reduced, although not eliminated. The figures for the follow-up
         look bad but it does vary a bit as the stalling is not perfect for nfs
         or filesystems like ext3 with unusual handling of dirty and writeback
         pages
      
      3. There are swapins, particularly with larger amounts of IO, indicating
         that active pages are being reclaimed. However, their number is much
         reduced.
      
                                       3.9.0       3.9.0       3.9.0
                     vanilla   mm1-mmotm-20130522   mm1-lessdisrupt-v7r10
      Minor Faults                  36339175    35025445    35219699
      Major Faults                    310964       27108       51887
      Swap Ins                       2176399      173069      333316
      Swap Outs                      3344050      357228      504824
      Direct pages scanned              8972       77283       43242
      Kswapd pages scanned          20899983     8939566    14772851
      Kswapd pages reclaimed         6193156     5172605     5231026
      Direct pages reclaimed            8450       73802       39514
      Kswapd efficiency                  29%         57%         35%
      Kswapd velocity               3929.743    1847.499    3058.840
      Direct efficiency                  94%         95%         91%
      Direct velocity                  1.687      15.972       8.954
      Percentage direct scans             0%          0%          0%
      Zone normal velocity          3721.907     939.103    2185.142
      Zone dma32 velocity            209.522     924.368     882.651
      Zone dma velocity                0.000       0.000       0.000
      Page writes by reclaim     4082185.000  526319.000  537114.000
      Page writes file                738135      169091       32290
      Page writes anon               3344050      357228      504824
      Page reclaim immediate            9524         170     5595843
      Sector Reads                   8909900      861192     1483680
      Sector Writes                 13428980     1488744     2076800
      Page rescued immediate               0           0           0
      Slabs scanned                    38016       31744       28672
      Direct inode steals                  0           0           0
      Kswapd inode steals                424           0           0
      Kswapd skipped wait                  0           0           0
      THP fault alloc                     14          15         119
      THP collapse alloc                1767        1569        1618
      THP splits                          30          29          25
      THP fault fallback                   0           0           0
      THP collapse fail                    8           5           0
      Compaction stalls                   17          41         100
      Compaction success                   7          31          95
      Compaction failures                 10          10           5
      Page migrate success              7083       22157       62217
      Page migrate failure                 0           0           0
      Compaction pages isolated        14847       48758      135830
      Compaction migrate scanned       18328       48398      138929
      Compaction free scanned        2000255      355827     1720269
      Compaction cost                      7          24          68
      
      I guess the main takeaway again is the much reduced page writes
      from reclaim context and reduced reads.
      
                             3.9.0       3.9.0       3.9.0
                     vanilla   mm1-mmotm-20130522   mm1-lessdisrupt-v7r10
      Mean sda-avgqz         23.58        0.35        0.44
      Mean sda-await        133.47       15.72       15.46
      Mean sda-r_await        4.72        4.69        3.95
      Mean sda-w_await      507.69       28.40       33.68
      Max  sda-avgqz        680.60       12.25       23.14
      Max  sda-await       3958.89      221.83      286.22
      Max  sda-r_await       63.86       61.23       67.29
      Max  sda-w_await    11710.38      883.57     1767.28
      
      And as before, write wait times are much reduced.
      
      This patch:
      
      The patch "mm: vmscan: Have kswapd writeback pages based on dirty pages
      encountered, not priority" decides whether to writeback pages from reclaim
      context based on the number of dirty pages encountered.  This situation is
      flagged too easily and flushers are not given the chance to catch up
      resulting in more pages being written from reclaim context and potentially
      impacting IO performance.  The check for PageWriteback is also misplaced
      as it happens within a PageDirty check which is nonsense as the dirty may
      have been cleared for IO.  The accounting is updated very late and pages
      that are already under writeback, were reactivated, could not unmapped or
      could not be released are all missed.  Similarly, a page is considered
      congested for reasons other than being congested and pages that cannot be
      written out in the correct context are skipped.  Finally, it considers
      stalling and writing back filesystem pages due to encountering dirty
      anonymous pages at the tail of the LRU which is dumb.
      
      This patch causes kswapd to begin writing filesystem pages from reclaim
      context only if page reclaim found that all filesystem pages at the tail
      of the LRU were unqueued dirty pages.  Before it starts writing filesystem
       pages, it will stall to give flushers a chance to catch up.  The decision
       on whether to call wait_iff_congested() is now also determined only by
       dirty filesystem pages.  Whether pages are treated as congested is based on
       whether the underlying BDI is congested, regardless of the context of the
       reclaiming process.
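
       As a rough illustration of that decision (a self-contained sketch with
       invented names, not the kernel source): the shrink pass counts what it
       found at the tail of the LRU, and kswapd only stalls for the flushers and
       then writes file pages itself when every file page taken was dirty and
       not yet queued for IO.

       #include <stdbool.h>
       #include <stdio.h>

       struct tail_stats {                      /* hypothetical bookkeeping */
               unsigned long nr_taken;          /* file pages taken off the LRU tail */
               unsigned long nr_unqueued_dirty; /* of those, dirty but not queued for IO */
       };

       /* kswapd may write file pages only if *all* pages at the tail were
        * unqueued dirty pages; it stalls first so flushers can catch up */
       static bool kswapd_may_write_file_pages(const struct tail_stats *ts)
       {
               return ts->nr_taken && ts->nr_unqueued_dirty == ts->nr_taken;
       }

       int main(void)
       {
               struct tail_stats ts = { .nr_taken = 32, .nr_unqueued_dirty = 32 };

               if (kswapd_may_write_file_pages(&ts))
                       printf("stall for flushers, then write from reclaim context\n");
               else
                       printf("leave the writeback to the flusher threads\n");
               return 0;
       }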
      Signed-off-by: default avatarMel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Cc: Zlatko Calusic <zcalusic@bitsync.net>
      Cc: dormando <dormando@rydia.net>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      e2be15f6
    • Mel Gorman's avatar
      mm: vmscan: move logic from balance_pgdat() to kswapd_shrink_zone() · 7c954f6d
      Mel Gorman authored
      balance_pgdat() is very long and some of the logic can and should be
      internal to kswapd_shrink_zone().  Move it so the flow of
      balance_pgdat() is marginally easier to follow.
      Signed-off-by: default avatarMel Gorman <mgorman@suse.de>
      Acked-by: default avatarJohannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: default avatarMichal Hocko <mhocko@suse.cz>
      Acked-by: default avatarRik van Riel <riel@redhat.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Tested-by: default avatarZlatko Calusic <zcalusic@bitsync.net>
      Cc: dormando <dormando@rydia.net>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      7c954f6d
    • Mel Gorman's avatar
      mm: vmscan: check if kswapd should writepage once per pgdat scan · b7ea3c41
      Mel Gorman authored
       Currently kswapd checks whether it should start calling writepage as it
       shrinks each zone, without taking into consideration whether the zone is
       balanced or not.  This is not wrong as such but it does not make much
       sense either.  This patch checks once per pgdat scan whether kswapd
       should be writing pages.
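
       A minimal sketch of the structural change (the names and the exact
       threshold below are illustrative, not lifted from the kernel): the
       may-write decision is computed once before the zone loop instead of once
       per zone.

       #include <stdbool.h>

       #define DEF_PRIORITY 12

       /* decide once per pgdat scan whether kswapd may call writepage */
       static bool may_writepage(int priority)
       {
               return priority < DEF_PRIORITY - 2;     /* illustrative threshold */
       }

       static void scan_pgdat(int priority, int nr_zones)
       {
               bool writepage = may_writepage(priority);       /* hoisted out */

               for (int i = 0; i < nr_zones; i++) {
                       /* shrink zone i; 'writepage' is fixed for the whole pass */
                       (void)writepage;
               }
       }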
      Signed-off-by: default avatarMel Gorman <mgorman@suse.de>
      Reviewed-by: default avatarMichal Hocko <mhocko@suse.cz>
      Acked-by: default avatarRik van Riel <riel@redhat.com>
      Acked-by: default avatarJohannes Weiner <hannes@cmpxchg.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Tested-by: default avatarZlatko Calusic <zcalusic@bitsync.net>
      Cc: dormando <dormando@rydia.net>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      b7ea3c41
    • Mel Gorman's avatar
      mm: vmscan: block kswapd if it is encountering pages under writeback · 283aba9f
      Mel Gorman authored
      Historically, kswapd used to congestion_wait() at higher priorities if
      it was not making forward progress.  This made no sense as the failure
      to make progress could be completely independent of IO.  It was later
      replaced by wait_iff_congested() and removed entirely by commit 258401a6
      (mm: don't wait on congested zones in balance_pgdat()) as it was
      duplicating logic in shrink_inactive_list().
      
       This is problematic.  If kswapd encounters many pages under writeback
       and continues to scan until it reaches the high watermark, then it will
       quickly skip over the pages under writeback and reclaim clean young
       pages or push applications out to swap.
      
      The use of wait_iff_congested() is not suited to kswapd as it will only
      stall if the underlying BDI is really congested or a direct reclaimer
      was unable to write to the underlying BDI.  kswapd bypasses the BDI
       congestion as it sets PF_SWAPWRITE but, even if this were taken into
       account, it would cause direct reclaimers to stall on writeback, which is
       not desirable.
      
      This patch sets a ZONE_WRITEBACK flag if direct reclaim or kswapd is
      encountering too many pages under writeback.  If this flag is set and
      kswapd encounters a PageReclaim page under writeback then it'll assume
      that the LRU lists are being recycled too quickly before IO can complete
      and block waiting for some IO to complete.
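
       The blocking decision can be sketched as below (a self-contained
       illustration with invented names; the real code keys off a zone flag and
       the page's writeback/reclaim bits).

       #include <stdbool.h>

       struct zone_sketch {
               bool writeback_flag;    /* stand-in for a ZONE_WRITEBACK-style flag */
       };

       /* set when reclaim finds the tail of the LRU full of pages under IO */
       static void note_writeback_pressure(struct zone_sketch *z,
                                           unsigned long nr_writeback,
                                           unsigned long nr_taken)
       {
               if (nr_taken && nr_writeback == nr_taken)
                       z->writeback_flag = true;
       }

       /* kswapd blocks on IO when it meets a PageReclaim page still under
        * writeback while the flag is set: the LRU is cycling faster than IO */
       static bool kswapd_should_block(const struct zone_sketch *z,
                                       bool page_writeback, bool page_reclaim)
       {
               return z->writeback_flag && page_writeback && page_reclaim;
       }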
      Signed-off-by: default avatarMel Gorman <mgorman@suse.de>
      Reviewed-by: default avatarMichal Hocko <mhocko@suse.cz>
      Acked-by: default avatarRik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Tested-by: default avatarZlatko Calusic <zcalusic@bitsync.net>
      Cc: dormando <dormando@rydia.net>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      283aba9f
    • Mel Gorman's avatar
      mm: vmscan: have kswapd writeback pages based on dirty pages encountered, not priority · d43006d5
      Mel Gorman authored
       Currently kswapd queues dirty pages for writeback if scanning at an
       elevated priority but the priority kswapd scans at is not related to the
       number of unqueued dirty pages encountered.  Since commit "mm: vmscan:
       Flatten kswapd priority loop", the priority is related to the size of the
       LRU and the zone watermark, which is no indication as to whether kswapd
       should write pages or not.
      
      This patch tracks if an excessive number of unqueued dirty pages are
      being encountered at the end of the LRU.  If so, it indicates that dirty
      pages are being recycled before flusher threads can clean them and flags
      the zone so that kswapd will start writing pages until the zone is
      balanced.
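
       A much-simplified sketch of that tracking (names are invented; the real
       patch uses a zone flag): flag the zone when a whole batch taken from the
       tail of the LRU turns out to be unqueued dirty pages, and let kswapd write
       pages while the flag is set and the zone is still unbalanced.

       #include <stdbool.h>

       struct zone_dirty_sketch {
               bool tail_lru_dirty;    /* stand-in for the zone flag */
       };

       static void account_unqueued_dirty(struct zone_dirty_sketch *z,
                                          unsigned long nr_unqueued_dirty,
                                          unsigned long nr_taken)
       {
               /* dirty pages are being recycled before flushers can clean them */
               if (nr_taken && nr_unqueued_dirty == nr_taken)
                       z->tail_lru_dirty = true;
       }

       static bool kswapd_should_writepage(const struct zone_dirty_sketch *z,
                                           bool zone_balanced)
       {
               return z->tail_lru_dirty && !zone_balanced;
       }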
      Signed-off-by: default avatarMel Gorman <mgorman@suse.de>
      Acked-by: default avatarJohannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: default avatarMichal Hocko <mhocko@suse.cz>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Tested-by: default avatarZlatko Calusic <zcalusic@bitsync.net>
      Cc: dormando <dormando@rydia.net>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      d43006d5
    • Mel Gorman's avatar
      mm: vmscan: do not allow kswapd to scan at maximum priority · 9aa41348
      Mel Gorman authored
      Page reclaim at priority 0 will scan the entire LRU as priority 0 is
      considered to be a near OOM condition.  Kswapd can reach priority 0
      quite easily if it is encountering a large number of pages it cannot
      reclaim such as pages under writeback.  When this happens, kswapd
      reclaims very aggressively even though there may be no real risk of
      allocation failure or OOM.
      
      This patch prevents kswapd reaching priority 0 and trying to reclaim the
      world.  Direct reclaimers will still reach priority 0 in the event of an
      OOM situation.
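
       The change amounts to a loop bound, sketched here with invented names:

       #define DEF_PRIORITY 12

       static void balance_pgdat_sketch(void)
       {
               /* kswapd never reaches the near-OOM priority 0; that remains
                * reserved for direct reclaim */
               for (int priority = DEF_PRIORITY; priority >= 1; priority--) {
                       /* shrink zones at this priority, stop early if balanced */
               }
       }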
      Signed-off-by: default avatarMel Gorman <mgorman@suse.de>
      Acked-by: default avatarRik van Riel <riel@redhat.com>
      Acked-by: default avatarJohannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: default avatarMichal Hocko <mhocko@suse.cz>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Tested-by: default avatarZlatko Calusic <zcalusic@bitsync.net>
      Cc: dormando <dormando@rydia.net>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      9aa41348
    • Mel Gorman's avatar
      mm: vmscan: decide whether to compact the pgdat based on reclaim progress · 2ab44f43
      Mel Gorman authored
       In the past, kswapd made the decision on whether to compact memory after
       the pgdat was considered balanced.  This more or less worked but it is
       late to make such a decision and it does not fit well now that kswapd
       decides whether to exit the zone scanning loop depending on reclaim
       progress.
      
      This patch will compact a pgdat if at least the requested number of
      pages were reclaimed from unbalanced zones for a given priority.  If any
      zone is currently balanced, kswapd will not call compaction as it is
      expected the necessary pages are already available.
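
       Roughly, the compaction decision described above looks like this
       (a self-contained sketch; the parameter names are invented):

       #include <stdbool.h>

       static bool should_compact_pgdat(unsigned long nr_reclaimed,
                                        unsigned long nr_to_reclaim,
                                        bool any_zone_balanced)
       {
               if (any_zone_balanced)
                       return false;   /* free pages should already be available */
               return nr_reclaimed >= nr_to_reclaim;
       }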
      Signed-off-by: default avatarMel Gorman <mgorman@suse.de>
      Acked-by: default avatarJohannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: default avatarMichal Hocko <mhocko@suse.cz>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Tested-by: default avatarZlatko Calusic <zcalusic@bitsync.net>
      Cc: dormando <dormando@rydia.net>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      2ab44f43
    • Mel Gorman's avatar
      mm: vmscan: flatten kswapd priority loop · b8e83b94
      Mel Gorman authored
      kswapd stops raising the scanning priority when at least
      SWAP_CLUSTER_MAX pages have been reclaimed or the pgdat is considered
      balanced.  It then rechecks if it needs to restart at DEF_PRIORITY and
       whether high-order reclaim needs to be reset.  This is not wrong per se
       but it is confusing to follow, and forcing kswapd to stay at DEF_PRIORITY
       may require several restarts before it has scanned enough pages to meet
       the high watermark even at 100% efficiency.  This patch irons out the
       logic a bit by controlling when priority is raised and removing the
       "goto loop_again".
      
      This patch has kswapd raise the scanning priority until it is scanning
      enough pages that it could meet the high watermark in one shrink of the
      LRU lists if it is able to reclaim at 100% efficiency.  It will not
       raise the scanning priority higher unless it is failing to reclaim any
      pages.
      
      To avoid infinite looping for high-order allocation requests kswapd will
      not reclaim for high-order allocations when it has reclaimed at least
      twice the number of pages as the allocation request.
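
       The priority control can be sketched as follows (a simplified,
       self-contained illustration; the struct and helpers are invented):

       #include <stdbool.h>

       struct pass_result {
               unsigned long nr_scanned;
               unsigned long nr_reclaimed;
       };

       /* raise the scanning priority only if the pass could not have met the
        * high watermark even at 100% reclaim efficiency, or reclaimed nothing */
       static bool raise_priority(const struct pass_result *r,
                                  unsigned long high_wmark)
       {
               if (!r->nr_reclaimed)
                       return true;
               return r->nr_scanned < high_wmark;
       }

       /* stop reclaiming for a high-order request once twice the requested
        * number of pages has been reclaimed, to avoid looping forever */
       static bool enough_for_high_order(unsigned long nr_reclaimed, int order)
       {
               return order > 0 && nr_reclaimed >= 2UL * (1UL << order);
       }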
      Signed-off-by: default avatarMel Gorman <mgorman@suse.de>
      Acked-by: default avatarJohannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: default avatarMichal Hocko <mhocko@suse.cz>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Tested-by: default avatarZlatko Calusic <zcalusic@bitsync.net>
      Cc: dormando <dormando@rydia.net>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      b8e83b94
    • Mel Gorman's avatar
      mm: vmscan: obey proportional scanning requirements for kswapd · e82e0561
      Mel Gorman authored
      Simplistically, the anon and file LRU lists are scanned proportionally
      depending on the value of vm.swappiness although there are other factors
      taken into account by get_scan_count().  The patch "mm: vmscan: Limit
      the number of pages kswapd reclaims" limits the number of pages kswapd
      reclaims but it breaks this proportional scanning and may evenly shrink
      anon/file LRUs regardless of vm.swappiness.
      
      This patch preserves the proportional scanning and reclaim.  It does
      mean that kswapd will reclaim more than requested but the number of
      pages will be related to the high watermark.
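
       As a much-simplified illustration of keeping the proportions (the helper
       below is invented and ignores the separate active/inactive lists): once
       the reclaim target has been met, the list that is behind relative to its
       original scan target is the one that keeps being scanned.

       #include <stdbool.h>

       /* true if anon is behind its share and should keep being scanned,
        * i.e. scanned_anon/target_anon < scanned_file/target_file, compared
        * without floating point */
       static bool anon_behind_proportion(unsigned long scanned_anon,
                                          unsigned long target_anon,
                                          unsigned long scanned_file,
                                          unsigned long target_file)
       {
               return scanned_anon * target_file < scanned_file * target_anon;
       }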
      
      [mhocko@suse.cz: Correct proportional reclaim for memcg and simplify]
      [kamezawa.hiroyu@jp.fujitsu.com: Recalculate scan based on target]
      [hannes@cmpxchg.org: Account for already scanned pages properly]
      Signed-off-by: default avatarMel Gorman <mgorman@suse.de>
      Acked-by: default avatarRik van Riel <riel@redhat.com>
      Reviewed-by: default avatarMichal Hocko <mhocko@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: default avatarKAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Tested-by: default avatarZlatko Calusic <zcalusic@bitsync.net>
      Cc: dormando <dormando@rydia.net>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      e82e0561
    • Mel Gorman's avatar
      mm: vmscan: limit the number of pages kswapd reclaims at each priority · 75485363
      Mel Gorman authored
      This series does not fix all the current known problems with reclaim but
      it addresses one important swapping bug when there is background IO.
      
      Changelog since V3
        - Drop the slab shrink changes in light of Glauber's series and
          discussions that highlighted a number of potential
          problems with the patch.					(mel)
       - Rebased to 3.10-rc1
      
      Changelog since V2
       - Preserve ratio properly for proportional scanning		(kamezawa)
      
      Changelog since V1
       - Rename ZONE_DIRTY to ZONE_TAIL_LRU_DIRTY			(andi)
       - Reformat comment in shrink_page_list				(andi)
       - Clarify some comments					(dhillf)
       - Rework how the proportional scanning is preserved
       - Add PageReclaim check before kswapd starts writeback
       - Reset sc.nr_reclaimed on every full zone scan
      
       Kswapd and page reclaim behaviour has been screwy in one way or another
       for a long time.  Very broadly speaking it worked in the far past because
       machines were limited in memory so it did not have that many pages to scan
       and it stalled in congestion_wait() frequently to prevent it from going
       completely nuts.  In recent times it has behaved very unsatisfactorily
       with some of the problems compounded by the removal of stall logic and the
       introduction of transparent hugepage support with high-order reclaims.
      
       There are many variations of bugs that are rooted in this area.  One
       example is reports of large copy operations or backups causing the
       machine to grind to a halt or pushing applications out to swap.  Sometimes
       in low memory situations a large percentage of memory suddenly gets
       reclaimed.  In other cases an application starts and kswapd hits 100%
       CPU usage for prolonged periods of time and so on.  There is now talk of
       introducing features like an extra free kbytes tunable to work around
       aspects of the problem instead of trying to deal with it.  The problem is
       compounded by the fact that it can be very workload and machine specific.
      
      This series aims at addressing some of the worst of these problems
       without attempting to fundamentally alter how page reclaim works.
      
       Patches 1-2 limit the number of pages kswapd reclaims while still obeying
      	the anon/file proportion of the LRUs it should be scanning.
      
      Patches 3-4 control how and when kswapd raises its scanning priority and
      	deletes the scanning restart logic which is tricky to follow.
      
      Patch 5 notes that it is too easy for kswapd to reach priority 0 when
      	scanning and then reclaim the world. Down with that sort of thing.
      
      Patch 6 notes that kswapd starts writeback based on scanning priority which
      	is not necessarily related to dirty pages. It will have kswapd
      	writeback pages if a number of unqueued dirty pages have been
      	recently encountered at the tail of the LRU.
      
      Patch 7 notes that sometimes kswapd should stall waiting on IO to complete
      	to reduce LRU churn and the likelihood that it'll reclaim young
      	clean pages or push applications to swap. It will cause kswapd
      	to block on IO if it detects that pages being reclaimed under
      	writeback are recycling through the LRU before the IO completes.
      
       Patches 8-9 are cosmetic but balance_pgdat() is easier to follow after they
      	are applied.
      
       This was tested using memcached+memcachetest while some background IO
       was in progress, as implemented by the parallel IO tests in MM
       Tests.
      
      memcachetest benchmarks how many operations/second memcached can service
      and it is run multiple times.  It starts with no background IO and then
      re-runs the test with larger amounts of IO in the background to roughly
      simulate a large copy in progress.  The expectation is that the IO
      should have little or no impact on memcachetest which is running
      entirely in memory.
      
                                              3.10.0-rc1                  3.10.0-rc1
                                                 vanilla            lessdisrupt-v4
      Ops memcachetest-0M             22155.00 (  0.00%)          22180.00 (  0.11%)
      Ops memcachetest-715M           22720.00 (  0.00%)          22355.00 ( -1.61%)
      Ops memcachetest-2385M           3939.00 (  0.00%)          23450.00 (495.33%)
      Ops memcachetest-4055M           3628.00 (  0.00%)          24341.00 (570.92%)
      Ops io-duration-0M                  0.00 (  0.00%)              0.00 (  0.00%)
      Ops io-duration-715M               12.00 (  0.00%)              7.00 ( 41.67%)
      Ops io-duration-2385M             118.00 (  0.00%)             21.00 ( 82.20%)
      Ops io-duration-4055M             162.00 (  0.00%)             36.00 ( 77.78%)
      Ops swaptotal-0M                    0.00 (  0.00%)              0.00 (  0.00%)
      Ops swaptotal-715M             140134.00 (  0.00%)             18.00 ( 99.99%)
      Ops swaptotal-2385M            392438.00 (  0.00%)              0.00 (  0.00%)
      Ops swaptotal-4055M            449037.00 (  0.00%)          27864.00 ( 93.79%)
      Ops swapin-0M                       0.00 (  0.00%)              0.00 (  0.00%)
      Ops swapin-715M                     0.00 (  0.00%)              0.00 (  0.00%)
      Ops swapin-2385M               148031.00 (  0.00%)              0.00 (  0.00%)
      Ops swapin-4055M               135109.00 (  0.00%)              0.00 (  0.00%)
      Ops minorfaults-0M            1529984.00 (  0.00%)        1530235.00 ( -0.02%)
      Ops minorfaults-715M          1794168.00 (  0.00%)        1613750.00 ( 10.06%)
      Ops minorfaults-2385M         1739813.00 (  0.00%)        1609396.00 (  7.50%)
      Ops minorfaults-4055M         1754460.00 (  0.00%)        1614810.00 (  7.96%)
      Ops majorfaults-0M                  0.00 (  0.00%)              0.00 (  0.00%)
      Ops majorfaults-715M              185.00 (  0.00%)            180.00 (  2.70%)
      Ops majorfaults-2385M           24472.00 (  0.00%)            101.00 ( 99.59%)
      Ops majorfaults-4055M           22302.00 (  0.00%)            229.00 ( 98.97%)
      
       Note how the vanilla kernel's performance collapses when there is enough
       IO taking place in the background.  This drop in performance is part of
       what users complain of when they start backups.  Note how the swapin and
       major fault figures indicate that processes were being pushed to swap
       prematurely.  With the series applied, there is no noticeable performance
       drop and while there is still some swap activity, it's tiny.
      
      20 iterations of this test were run in total and averaged.  Every 5
      iterations, additional IO was generated in the background using dd to
       measure how the workload was impacted.  The 0M, 715M, 2385M and 4055M
       subblocks refer to the amount of IO going on in the background at each
       iteration.  So memcachetest-2385M is reporting how many
       transactions/second memcachetest recorded on average over 5 iterations
       while there was 2385M of IO going on in the background.  There are six
       blocks of information reported here:
      
      memcachetest is the transactions/second reported by memcachetest. In
      	the vanilla kernel note that performance drops from around
      	22K/sec to just under 4K/second when there is 2385M of IO going
      	on in the background. This is one type of performance collapse
      	users complain about if a large cp or backup starts in the
      	background
      
      io-duration refers to how long it takes for the background IO to
       	complete. It shows that with the patched kernel the IO
       	completes faster while not interfering with the memcache
       	workload.
      
      swaptotal is the total amount of swap traffic. With the patched kernel,
      	the total amount of swapping is much reduced although it is
      	still not zero.
      
       swapin in this case is an indication as to whether we are swap thrashing.
       	The closer the swapin/swapout ratio is to 1, the worse the
       	thrashing is.  Note with the patched kernel that there is no swapin
      	activity indicating that all the pages swapped were really inactive
      	unused pages.
      
      minorfaults are just minor faults. An increased number of minor faults
      	can indicate that page reclaim is unmapping the pages but not
      	swapping them out before they are faulted back in. With the
      	patched kernel, there is only a small change in minor faults
      
      majorfaults are just major faults in the target workload and a high
      	number can indicate that a workload is being prematurely
       	swapped. With the patched kernel, major faults are much reduced. As
       	there are no swapins recorded, the workload is not being swapped. The
       	likely explanation is that libraries or configuration files used by
       	the workload during startup get paged out by the background IO.
      
       Overall with the series applied, there is no noticeable performance drop
       due to background IO and while there is still some swap activity, it's
       tiny and the lack of swapins implies that the swapped pages were inactive
       and unused.
      
                                  3.10.0-rc1  3.10.0-rc1
                                     vanilla lessdisrupt-v4
      Page Ins                       1234608      101892
      Page Outs                     12446272    11810468
      Swap Ins                        283406           0
      Swap Outs                       698469       27882
      Direct pages scanned                 0      136480
      Kswapd pages scanned           6266537     5369364
      Kswapd pages reclaimed         1088989      930832
      Direct pages reclaimed               0      120901
      Kswapd efficiency                  17%         17%
      Kswapd velocity               5398.371    4635.115
      Direct efficiency                 100%         88%
      Direct velocity                  0.000     117.817
      Percentage direct scans             0%          2%
      Page writes by reclaim         1655843     4009929
      Page writes file                957374     3982047
      Page writes anon                698469       27882
      Page reclaim immediate            5245        1745
      Page rescued immediate               0           0
      Slabs scanned                    33664       25216
      Direct inode steals                  0           0
      Kswapd inode steals              19409         778
      Kswapd skipped wait                  0           0
      THP fault alloc                     35          30
      THP collapse alloc                 472         401
      THP splits                          27          22
      THP fault fallback                   0           0
      THP collapse fail                    0           1
      Compaction stalls                    0           4
      Compaction success                   0           0
      Compaction failures                  0           4
      Page migrate success                 0           0
      Page migrate failure                 0           0
      Compaction pages isolated            0           0
      Compaction migrate scanned           0           0
      Compaction free scanned              0           0
      Compaction cost                      0           0
      NUMA PTE updates                     0           0
      NUMA hint faults                     0           0
      NUMA hint local faults               0           0
      NUMA pages migrated                  0           0
      AutoNUMA cost                        0           0
      
       Unfortunately, note that there is a small amount of direct reclaim due to
       kswapd no longer reclaiming the world.  ftrace indicates that the direct
       reclaim stalls are mostly harmless, with the vast bulk of the stalls
       incurred by dd:
      
           23 tclsh-3367
           38 memcachetest-13733
           49 memcachetest-12443
           57 tee-3368
         1541 dd-13826
         1981 dd-12539
      
      A consequence of the direct reclaim for dd is that the processes for the
      IO workload may show a higher system CPU usage.  There is also a risk that
       kswapd not reclaiming the world may mean that it stays awake balancing
       zones, does not stall on the appropriate events and continually scans
       pages it cannot reclaim, consuming CPU.  This will be visible as continued
      high CPU usage but in my own tests I only saw a single spike lasting less
      than a second and I did not observe any problems related to reclaim while
      running the series on my desktop.
      
      This patch:
      
      The number of pages kswapd can reclaim is bound by the number of pages it
      scans which is related to the size of the zone and the scanning priority.
      In many cases the priority remains low because it's reset every
      SWAP_CLUSTER_MAX reclaimed pages but in the event kswapd scans a large
      number of pages it cannot reclaim, it will raise the priority and
      potentially discard a large percentage of the zone as sc->nr_to_reclaim is
      ULONG_MAX.  The user-visible effect is a reclaim "spike" where a large
      percentage of memory is suddenly freed.  It would be bad enough if this
      was just unused memory but because of how anon/file pages are balanced it
      is possible that applications get pushed to swap unnecessarily.
      
      This patch limits the number of pages kswapd will reclaim to the high
      watermark.  Reclaim will still overshoot due to it not being a hard limit
      as shrink_lruvec() will ignore the sc.nr_to_reclaim at DEF_PRIORITY but it
      prevents kswapd reclaiming the world at higher priorities.  The number of
      pages it reclaims is not adjusted for high-order allocations as kswapd
      will reclaim excessively if it is to balance zones for high-order
      allocations.
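
       The cap itself is tiny and can be sketched as follows (SWAP_CLUSTER_MAX
       is 32 pages; the helper name is invented for illustration):

       #define SWAP_CLUSTER_MAX 32UL

       /* kswapd's per-zone reclaim target: the zone's high watermark, with a
        * floor of SWAP_CLUSTER_MAX so small zones still make progress */
       static unsigned long kswapd_nr_to_reclaim(unsigned long high_wmark_pages)
       {
               return high_wmark_pages > SWAP_CLUSTER_MAX ?
                       high_wmark_pages : SWAP_CLUSTER_MAX;
       }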
      Signed-off-by: default avatarMel Gorman <mgorman@suse.de>
      Reviewed-by: default avatarRik van Riel <riel@redhat.com>
      Reviewed-by: default avatarMichal Hocko <mhocko@suse.cz>
      Acked-by: default avatarJohannes Weiner <hannes@cmpxchg.org>
      Acked-by: default avatarKAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Tested-by: default avatarZlatko Calusic <zcalusic@bitsync.net>
      Cc: dormando <dormando@rydia.net>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      75485363
    • Cody P Schafer's avatar
      mm/page_alloc: don't re-init pageset in zone_pcp_update() · 169f6c19
      Cody P Schafer authored
      When memory hotplug is triggered, we call pageset_init() on
      per-cpu-pagesets which both contain pages and are in use, causing both the
      leakage of those pages and (potentially) bad behaviour if a page is
      allocated from a pageset while it is being cleared.
      
      Avoid this by factoring out pageset_set_high_and_batch() (which contains
       all the logic needed to set a pageset's ->high and ->batch irrespective of
      system state) from zone_pageset_init() and using the new
      pageset_set_high_and_batch() instead of zone_pageset_init() in
      zone_pcp_update().
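
       The shape of the fix, as a self-contained sketch (struct and helper names
       are invented and the tuning formulas are omitted): the update path
       re-tunes ->high and ->batch without re-running the init step that would
       zero out a live pageset.

       struct pcp_sketch {
               int count;              /* pages currently held by this CPU */
               int high, batch;        /* tunables that may be recomputed  */
       };

       static void pageset_init_sketch(struct pcp_sketch *p)
       {
               p->count = 0;           /* only valid for a brand-new pageset */
               p->high = p->batch = 0;
       }

       static void set_high_and_batch_sketch(struct pcp_sketch *p, int high, int batch)
       {
               p->high = high;         /* safe on a live pageset: ->count untouched */
               p->batch = batch;
       }

       static void zone_pcp_update_sketch(struct pcp_sketch *p, int high, int batch)
       {
               /* re-tune only; calling pageset_init_sketch() here would leak
                * the p->count pages the list still holds */
               set_high_and_batch_sketch(p, high, batch);
       }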
      Signed-off-by: default avatarCody P Schafer <cody@linux.vnet.ibm.com>
      Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      169f6c19
    • Cody P Schafer's avatar
      mm/page_alloc: rename setup_pagelist_highmark() to match naming of pageset_set_batch() · 3664033c
      Cody P Schafer authored
      Signed-off-by: default avatarCody P Schafer <cody@linux.vnet.ibm.com>
      Cc: Gilad Ben-Yossef <gilad@benyossef.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Pekka Enberg <penberg@kernel.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      3664033c
    • Cody P Schafer's avatar
      mm/page_alloc: in zone_pcp_update(), uze zone_pageset_init() · 737af4c0
      Cody P Schafer authored
      Previously, zone_pcp_update() called pageset_set_batch() directly,
      essentially assuming that percpu_pagelist_fraction == 0.
      
      Correct this by calling zone_pageset_init(), which chooses the
      appropriate ->batch and ->high calculations.
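
       A minimal sketch of the corrected dispatch (all names and formulas below
       are illustrative stand-ins, not the kernel's own): the update path calls
       the same chooser the init path uses, so the sysctl-driven sizing is
       respected when percpu_pagelist_fraction is non-zero.

       struct pcp_cfg { int high, batch; };

       static void set_batch_defaults(struct pcp_cfg *p, unsigned long zone_pages)
       {
               p->batch = zone_pages / 1024 ? (int)(zone_pages / 1024) : 1;
               p->high = 6 * p->batch;                 /* default sizing */
       }

       static void set_high_from_fraction(struct pcp_cfg *p, unsigned long zone_pages,
                                          int fraction)
       {
               p->high = (int)(zone_pages / fraction); /* sysctl-driven sizing */
               p->batch = p->high / 4 ? p->high / 4 : 1;
       }

       static void zone_pageset_init_sketch(struct pcp_cfg *p, unsigned long zone_pages,
                                            int fraction)
       {
               if (fraction)
                       set_high_from_fraction(p, zone_pages, fraction);
               else
                       set_batch_defaults(p, zone_pages);
       }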
      Signed-off-by: default avatarCody P Schafer <cody@linux.vnet.ibm.com>
      Cc: Gilad Ben-Yossef <gilad@benyossef.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Pekka Enberg <penberg@kernel.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      737af4c0