1. 15 Jan, 2016 40 commits
    • mm: mmap: add new /proc tunable for mmap_base ASLR · d07e2259
      Daniel Cashman authored
      Address Space Layout Randomization (ASLR) provides a barrier to
      exploitation of user-space processes in the presence of security
      vulnerabilities by making it more difficult to find desired code/data
      which could help an attack.  This is done by adding a random offset to
      the location of regions in the process address space, with a greater
      range of potential offset values corresponding to better protection/a
      larger search-space for brute force, but also to greater potential for
      fragmentation.
      
      The offset added to the mmap_base address, which provides the basis for
      the majority of the mappings for a process, is set once on process exec
      in arch_pick_mmap_layout() and is done via hard-coded per-arch values,
      which reflect, hopefully, the best compromise for all systems.  The
      trade-off between increased entropy in the offset value generation and
      the corresponding increased variability in address space fragmentation
      is not absolute, however, and some platforms may tolerate higher amounts
      of entropy.  This patch introduces both new Kconfig values and a sysctl
      interface which may be used to change the amount of entropy used for
      offset generation on a system.
      
      The direct motivation for this change was in response to the
      libstagefright vulnerabilities that affected Android, specifically to
      information provided by Google's project zero at:
      
        http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html
      
      The attack presented therein specifically targeted the limited
      randomness used to generate the offset added to the mmap_base address
      in order to craft a brute-force-based attack.  Concretely, the attack
      was against the mediaserver process, which was limited to respawning
      every 5 seconds, on an arm device.  The hard-coded 8 bits used resulted
      in an average expected time to defeat the mmap ASLR of just over 10
      minutes (128 tries at 5 seconds apiece).  With this patch, and an
      accompanying increase in the entropy
      value to 16 bits, the same attack would take an average expected time of
      over 45 hours (32768 tries), which makes it both less feasible and more
      likely to be noticed.
      
      The introduced Kconfig and sysctl options are limited by per-arch
      minimum and maximum values, the minimum of which was chosen to match the
      current hard-coded value and the maximum of which was chosen so as to
      give the greatest flexibility without generating an invalid mmap_base
      address, generally 3-4 bits less than the number of bits in the
      user-space accessible virtual address space.
      
      When deciding whether or not to change the default value, a system
      developer should consider that the mmap_base address could be placed
      anywhere up to 2^(value) bits away from the non-randomized location,
      which would introduce variable-sized areas above and below the mmap_base
      address such that the maximum vm_area_struct size may be reduced,
      preventing very large allocations.
      
      This patch (of 4):
      
      ASLR only uses as few as 8 bits to generate the random offset for the
      mmap base address on 32 bit architectures.  This value was chosen to
      prevent a poorly chosen value from dividing the address space in such a
      way as to prevent large allocations.  This may not be an issue on all
      platforms.  Allow the specification of a minimum number of bits so that
      platforms desiring greater ASLR protection may determine where to place
      the trade-off.
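
      A minimal sketch of how an arch can consume the new tunable when
      picking the randomized offset; mmap_rnd_bits is the value controlled by
      the new Kconfig/sysctl knobs, while the helper below is illustrative
      rather than the exact per-arch code:

          unsigned long arch_mmap_rnd(void)
          {
                  unsigned long rnd;

                  /* keep only the configured number of random bits */
                  rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);

                  /* the offset is applied to mmap_base in whole pages */
                  return rnd << PAGE_SHIFT;
          }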
      Signed-off-by: Daniel Cashman <dcashman@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Mark Salyzyn <salyzyn@android.com>
      Cc: Jeff Vander Stoep <jeffv@google.com>
      Cc: Nick Kralevich <nnk@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Hector Marco-Gisbert <hecmargi@upv.es>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mmap.c: remove incorrect MAP_FIXED flag comparison from mmap_region · bc36f701
      Piotr Kwapulinski authored
      The following flag comparison in mmap_region makes no sense:
      
          if (!(vm_flags & MAP_FIXED))
              return -ENOMEM;
      
      The condition is always false and thus the above "return -ENOMEM" is
      never executed.  The vm_flags must not be compared with MAP_FIXED flag.
      The vm_flags may only be compared with VM_* flags.  MAP_FIXED has the
      same value as VM_MAYREAD.
      
      Hitting the rlimit is a slow path and find_vma_intersection should
      realize that there is no overlapping VMA for !MAP_FIXED case pretty
      quickly.
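
      As a sketch of why the test could never fire as intended: the two flags
      live in different namespaces but happen to share the numeric value 0x10
      with the generic headers, so the removed check effectively tested a VM_*
      bit that is practically always set:

          /* MAP_FIXED == 0x10 (uapi mman) and VM_MAYREAD == 0x00000010
           * (vm_flags), so inside mmap_region() the removed test behaved as:
           */
          if (!(vm_flags & VM_MAYREAD))   /* almost never true */
                  return -ENOMEM;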
      Signed-off-by: Piotr Kwapulinski <kwapulinski.piotr@gmail.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, vmscan: consider isolated pages in zone_reclaimable_pages · 9f6c399d
      Michal Hocko authored
      zone_reclaimable_pages counts how many pages are reclaimable in the
      given zone.  This currently includes all pages on file lrus and anon
      lrus if there is available swap storage.  We do not consider the
      NR_ISOLATED_{ANON,FILE} counters though, which is not correct, because
      these counters reflect temporarily isolated pages which are still
      reclaimable: they either get back to their LRU or get freed by page
      reclaim or page migration.
      
      The number of these pages might be sufficiently high to confuse users of
      zone_reclaimable_pages (e.g.  mbind can migrate large ranges of memory
      at once).
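
      A sketch of the adjusted helper, following the 4.5-era mm/vmscan.c
      (illustrative rather than the literal diff):

          static unsigned long zone_reclaimable_pages(struct zone *zone)
          {
                  unsigned long nr;

                  nr = zone_page_state(zone, NR_ACTIVE_FILE) +
                       zone_page_state(zone, NR_INACTIVE_FILE) +
                       zone_page_state(zone, NR_ISOLATED_FILE);  /* now included */

                  if (get_nr_swap_pages() > 0)
                          nr += zone_page_state(zone, NR_ACTIVE_ANON) +
                                zone_page_state(zone, NR_INACTIVE_ANON) +
                                zone_page_state(zone, NR_ISOLATED_ANON);  /* now included */

                  return nr;
          }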
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • fs/block_dev.c:bdev_write_page(): use blk_queue_enter(..., GFP_NOIO) · b832861c
      Andrew Morton authored
      bdev_write_page() is used by swapout and by writepage where we cannot
      use __GFP_FS or __GFP_IO.  So it is misleading to mention GFP_KERNEL
      here.
      
      blk_queue_enter() only actually looks at __GFP_DIRECT_RECLAIM, so no
      bugs were harmed in the making of this patch.
      
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Jens Axboe <axboe@fb.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: do not allow to disable tcp accounting after limit is set · 9ee11ba4
      Vladimir Davydov authored
      There are two bits defined for cg_proto->flags - MEMCG_SOCK_ACTIVATED
      and MEMCG_SOCK_ACTIVE - both are set in tcp_update_limit, but the former
      is never cleared while the latter can be cleared by unsetting the limit.
      This makes it possible to disable tcp socket accounting for new sockets
      after it was enabled, by writing -1 to memory.kmem.tcp.limit_in_bytes,
      while still guaranteeing that the memcg_socket_limit_enabled static key
      will be decremented on memcg destruction.
      
      This functionality looks dubious, because it is not clear what a use
      case would be.  By enabling tcp accounting a user accepts the price.  If
      they then find the performance degradation unacceptable, they can always
      restart their workload with tcp accounting disabled.  It does not seem
      there is any need to flip it while the workload is running.
      
      Besides, it contradicts how the kmem accounting API works: writing
      whatever to memory.kmem.limit_in_bytes enables kmem accounting for the
      cgroup in question, after which it cannot be disabled.  Therefore one
      might expect that writing -1 to memory.kmem.tcp.limit_in_bytes just
      enables socket accounting w/o limiting it, which might be useful by
      itself, but it isn't true.
      
      Since this API peculiarity is not documented anywhere, I propose to drop
      it.  This will allow us to simplify the code by dropping cg_proto->flags.
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmscan: do not force-scan file lru if its absolute size is small · 316bda0e
      Vladimir Davydov authored
      We assume there is enough inactive page cache if the size of the
      inactive file lru is greater than the size of the active file lru, in
      which case we force-scan the file lru, ignoring anonymous pages.  While
      this logic works fine when there are plenty of page cache pages, it
      fails if the size of the file lru is small (several MB): in this case
      (lru_size >> prio) will be 0 for normal scan priorities.  As a result,
      if the inactive file lru happens to be larger than the active file lru,
      anonymous pages of a cgroup will never get evicted unless the system
      experiences severe memory pressure, even if there are gigabytes of
      unused anonymous memory there, which is unfair with respect to other
      cgroups, whose workloads might be page cache oriented.
      
      This patch attempts to fix this by elaborating the "enough inactive page
      cache" check: it makes it not only check that inactive lru size > active
      lru size, but also that we will scan something from the cgroup at the
      current scan priority.  If these conditions do not hold, we proceed to
      SCAN_FRACT as usual.
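
      A sketch of the elaborated check in get_scan_count(); the lru size
      helper name below is a stand-in for whatever this kernel uses:

          /*
           * Enough inactive page cache *and* something from it will actually
           * be scanned at the current priority?  Only then force SCAN_FILE.
           */
          if (!inactive_file_is_low(lruvec) &&
              lru_size(lruvec, LRU_INACTIVE_FILE) >> sc->priority) {
                  scan_balance = SCAN_FILE;
                  goto out;
          }
          /* otherwise fall through to SCAN_FRACT as usual */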
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, vmalloc: remove VM_VPAGES · 244d63ee
      David Rientjes authored
      VM_VPAGES is unnecessary, it's easier to check is_vmalloc_addr() when
      reading /proc/vmallocinfo.
      
      [akpm@linux-foundation.org: remove VM_VPAGES reference via kvfree()]
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, thp: use list_first_entry_or_null() · 14669347
      Geliang Tang authored
      Simplify the code with list_first_entry_or_null().
      Signed-off-by: Geliang Tang <geliangtang@163.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, procfs: breakdown RSS for anon, shmem and file in /proc/pid/status · 8cee852e
      Jerome Marchand authored
      There are several shortcomings with the accounting of shared memory
      (SysV shm, shared anonymous mapping, mapping of a tmpfs file).  The
      values in /proc/<pid>/status and <...>/statm don't allow one to
      distinguish between shmem memory and a shared mapping to a regular file,
      even though their implications on memory usage are quite different:
      during reclaim, a file mapping can be dropped or written back to disk,
      while shmem needs a place in swap.
      
      Also, to distinguish the memory occupied by anonymous and file mappings,
      one has to read the /proc/pid/statm file, which has a field for the file
      mappings (again, including shmem) and the total memory occupied by these
      mappings (i.e. equivalent to VmRSS in the <...>/status file).  Getting
      the value for anonymous mappings only is thus not exactly user-friendly
      (the statm file is intended to be rather efficiently machine-readable).
      
      To address both of these shortcomings, this patch adds a breakdown of
      VmRSS in /proc/<pid>/status via new fields RssAnon, RssFile and
      RssShmem, making use of the previous preparatory patch.  These fields
      tell the user the memory occupied by private anonymous pages, mapped
      regular files and shmem, respectively.  Other existing fields in /status
      and /statm files are left without change.  The /statm file can be
      extended in the future, if there's a need for that.
      
      Example (part of) /proc/pid/status output including the new Rss* fields:
      
      VmPeak:  2001008 kB
      VmSize:  2001004 kB
      VmLck:         0 kB
      VmPin:         0 kB
      VmHWM:      5108 kB
      VmRSS:      5108 kB
      RssAnon:              92 kB
      RssFile:            1324 kB
      RssShmem:           3692 kB
      VmData:      192 kB
      VmStk:       136 kB
      VmExe:         4 kB
      VmLib:      1784 kB
      VmPTE:      3928 kB
      VmPMD:        20 kB
      VmSwap:        0 kB
      HugetlbPages:          0 kB
      
      [vbabka@suse.cz: forward-porting, tweak changelog]
      Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, shmem: add internal shmem resident memory accounting · eca56ff9
      Jerome Marchand authored
      Currently looking at /proc/<pid>/status or statm, there is no way to
      distinguish shmem pages from pages mapped to a regular file (shmem pages
      are mapped to /dev/zero), even though their implication in actual memory
      use is quite different.
      
      The internal accounting currently counts shmem pages together with
      regular files.  As a preparation to extend the userspace interfaces,
      this patch adds an MM_SHMEMPAGES counter to mm_rss_stat to account for
      shmem pages separately from MM_FILEPAGES.  The next patch will expose it
      to userspace - this patch doesn't change the exported values yet; it
      adds MM_SHMEMPAGES to MM_FILEPAGES at the places where MM_FILEPAGES was
      used before.  The only user-visible change after this patch is the OOM
      killer message that separates the reported "shmem-rss" from "file-rss".
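
      The core of the change can be sketched as a small helper that routes
      file-backed rss to the right counter (shmem pages are SwapBacked),
      close to what the patch adds to include/linux/mm.h:

          static inline int mm_counter_file(struct page *page)
          {
                  if (PageSwapBacked(page))
                          return MM_SHMEMPAGES;  /* tmpfs/shmem: needs swap, not writeback */
                  return MM_FILEPAGES;           /* regular file pages */
          }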
      
      [vbabka@suse.cz: forward-porting, tweak changelog]
      Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, proc: reduce cost of /proc/pid/smaps for unpopulated shmem mappings · 48131e03
      Vlastimil Babka authored
      Following the previous patch, further reduction of /proc/pid/smaps cost
      is possible for private writable shmem mappings with unpopulated areas
      where the page walk invokes the .pte_hole function.  We can use radix
      tree iterator for each such area instead of calling find_get_entry() in
      a loop.  This is possible at the extra maintenance cost of introducing
      another shmem function shmem_partial_swap_usage().
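
      A condensed sketch of such an iteration (RCU-retry and resched handling
      omitted; radix-tree API as of this kernel generation):

          unsigned long shmem_partial_swap_usage(struct address_space *mapping,
                                                 pgoff_t start, pgoff_t end)
          {
                  struct radix_tree_iter iter;
                  void **slot;
                  unsigned long swapped = 0;

                  rcu_read_lock();
                  radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
                          if (iter.index >= end)
                                  break;
                          /* swapped out shmem pages are kept as exceptional entries */
                          if (radix_tree_exceptional_entry(radix_tree_deref_slot(slot)))
                                  swapped++;
                  }
                  rcu_read_unlock();

                  return swapped << PAGE_SHIFT;
          }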
      
      To demonstrate the difference, I have measured this on a process that
      creates a private writable 2GB mapping of a partially swapped out
      /dev/shm/file (which cannot employ the optimizations from the previous
      patch) and doesn't populate it at all.  I timed how long it takes to
      cat /proc/pid/smaps of this process 100 times.
      
      Before this patch:
      
      real    0m3.831s
      user    0m0.180s
      sys     0m3.212s
      
      After this patch:
      
      real    0m1.176s
      user    0m0.180s
      sys     0m0.684s
      
      The time is similar to the case where a radix tree iterator is employed
      on the whole mapping.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, proc: reduce cost of /proc/pid/smaps for shmem mappings · 6a15a370
      Vlastimil Babka authored
      The previous patch has improved swap accounting for shmem mappings,
      which however made /proc/pid/smaps more expensive for shmem mappings, as
      we consult the radix tree for each pte_none entry, so the overall
      complexity is O(n*log(n)).
      
      We can reduce this significantly for mappings that cannot contain COWed
      pages, because then we can either use the statistics the shmem object
      itself tracks (if the mapping contains the whole object, or the swap
      usage of the whole object is zero), or use the radix tree iterator,
      which is much more effective than repeated find_get_entry() calls.
      
      This patch therefore introduces a function shmem_swap_usage(vma) and
      makes /proc/pid/smaps use it when possible.  Only for writable private
      mappings of shmem objects (i.e.  tmpfs files) with the shmem object
      itself (partially) swapped out do we have to resort to the find_get_entry()
      approach.
      
      Hopefully such mappings are relatively uncommon.
      
      To demonstrate the difference, I have measured this on a process that
      creates a 2GB mapping and dirties single pages with a stride of 2MB, and
      timed how long it takes to cat /proc/pid/smaps of this process 100
      times.
      
      Private writable mapping of a /dev/shm/file (the most complex case):
      
      real    0m3.831s
      user    0m0.180s
      sys     0m3.212s
      
      Shared mapping of an almost fully mapped, partially swapped /dev/shm/file
      (which needs to employ the radix tree iterator):
      
      real    0m1.351s
      user    0m0.096s
      sys     0m0.768s
      
      Same, but with /dev/shm/file not swapped (so no radix tree walk needed)
      
      real    0m0.935s
      user    0m0.128s
      sys     0m0.344s
      
      Private anonymous mapping:
      
      real    0m0.949s
      user    0m0.116s
      sys     0m0.348s
      
      The cost is now much closer to the private anonymous mapping case, unless
      the shmem mapping is private and writable.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, proc: account for shmem swap in /proc/pid/smaps · c261e7d9
      Vlastimil Babka authored
      Currently, /proc/pid/smaps will always show "Swap: 0 kB" for
      shmem-backed mappings, even if the mapped portion does contain pages
      that were swapped out.  This is because unlike private anonymous
      mappings, shmem does not change the pte to a swap entry, but to
      pte_none, when swapping the page out.  In the smaps page walk, such a
      page thus looks like it was never faulted in.
      
      This patch changes smaps_pte_entry() to determine the swap status for
      such pte_none entries for shmem mappings, similarly to how
      mincore_page() does it.  Swapped out shmem pages are thus accounted for.
      For private mappings of tmpfs files that COWed some of the pages, the
      swapped out status of the original shmem pages is naturally ignored.  If
      some of the private copies were also swapped out, they are accounted via their
      page table swap entries, so the resulting reported swap usage is then a
      sum of both swapped out private copies, and swapped out shmem pages that
      were not COWed.  No double accounting can thus happen.
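
      A sketch of the extra branch in smaps_pte_entry() for such pte_none
      entries (simplified; names follow the 4.5-era code):

          /* after the pte_present() and is_swap_pte() cases: */
          else if (IS_ENABLED(CONFIG_SHMEM) && pte_none(*pte) && vma->vm_file &&
                   shmem_mapping(vma->vm_file->f_mapping)) {
                  struct page *page = find_get_entry(vma->vm_file->f_mapping,
                                                     linear_page_index(vma, addr));

                  /* an exceptional radix-tree entry means "swapped out shmem page" */
                  if (radix_tree_exceptional_entry(page))
                          mss->swap += PAGE_SIZE;
                  else if (page)
                          page_cache_release(page);
          }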
      
      The accounting is arguably still not as precise as for private anonymous
      mappings, since now we will count also pages that the process in
      question never accessed, but another process populated them and then let
      them become swapped out.  I believe it is still less confusing and
      subtle than not showing any swap usage by shmem mappings at all.
      The swapped out counter might be of interest to users who would like to
      prevent future swapins during a performance critical operation and
      pre-fault the pages at their convenience.  Especially for larger swapped
      out regions the cost of swapin is much higher than a fresh page
      allocation.  So a differentiation between pte_none vs.  swapped out is
      important for those usecases.
      
      One downside of this patch is that it makes /proc/pid/smaps more
      expensive for shmem mappings, as we consult the radix tree for each
      pte_none entry, so the overall complexity is O(n*log(n)).  I have
      measured this on a process that creates a 2GB mapping and dirties single
      pages with a stride of 2MB, and timed how long it takes to cat
      /proc/pid/smaps of this process 100 times.
      
      Private anonymous mapping:
      
      real    0m0.949s
      user    0m0.116s
      sys     0m0.348s
      
      Mapping of a /dev/shm/file:
      
      real    0m3.831s
      user    0m0.180s
      sys     0m3.212s
      
      The difference is rather substantial, so the next patch will reduce the
      cost for shared or read-only mappings.
      
      In a less controlled experiment, I've gathered pids of processes on my
      desktop that have either '/dev/shm/*' or 'SYSV*' in smaps.  This
      included the Chrome browser and some KDE processes.  Again, I've run cat
      /proc/pid/smaps on each 100 times.
      
      Before this patch:
      
      real    0m9.050s
      user    0m0.518s
      sys     0m8.066s
      
      After this patch:
      
      real    0m9.221s
      user    0m0.541s
      sys     0m8.187s
      
      This suggests low impact on average systems.
      
      Note that this patch doesn't attempt to adjust the SwapPss field for
      shmem mappings, which would need extra work to determine who else could
      have the pages mapped.  Thus the value stays zero except for COWed
      swapped out pages in a shmem mapping, which are accounted as usual.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Acked-by: Jerome Marchand <jmarchan@redhat.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, documentation: clarify /proc/pid/status VmSwap limitations for shmem · bf9683d6
      Vlastimil Babka authored
      This series is based on Jerome Marchand's [1] so let me quote the first
      paragraph from there:
      
      There are several shortcomings with the accounting of shared memory
      (sysV shm, shared anonymous mapping, mapping to a tmpfs file).  The
      values in /proc/<pid>/status and statm don't allow one to distinguish
      between shmem memory and a shared mapping to a regular file, even though
      their implications on memory usage are quite different: at reclaim, file
      mapping can be dropped or written back on disk while shmem needs a place
      in swap.  As for shmem pages that are swapped-out or in swap cache, they
      aren't accounted at all.
      
      The original motivation for myself is that a customer found it (IMHO
      rightfully) confusing that e.g.  top output for process swap usage is
      unreliable with respect to swapped out shmem pages, which are not
      accounted for.
      
      The fundamental difference between private anonymous and shmem pages is
      that the latter has PTE's converted to pte_none, and not swapents.  As
      such, they are not accounted to the number of swapents visible e.g.  in
      /proc/pid/status VmSwap row.  It might be theoretically possible to use
      swapents when swapping out shmem (without extra cost, as one has to
      change all mappers anyway), and on swap in only convert the swapent for
      the faulting process, leaving swapents in other processes until they
      also fault (so again no extra cost).  But I don't know how many
      assumptions this would break, and it would be too disruptive a change for
      a relatively small benefit.
      
      Instead, my approach is to document the limitation of VmSwap, and
      provide means to determine the swap usage for shmem areas for those who
      are interested and willing to pay the price, using /proc/pid/smaps.
      Because outside of ipcs, I don't think it's currently possible to
      determine the usage at all.  The previous patchset [1] did introduce new
      shmem-specific fields into smaps output, and functions to determine the
      values.  I take a simpler approach, noting that smaps output already has
      a "Swap: X kB" line, where currently X == 0 always for shmem areas.  I
      think we can just consider this a bug and provide the proper value by
      consulting the radix tree, as e.g.  mincore_page() does.  In the patch
      changelog I explain why this is also not perfect (and cannot be without
      swapents), but still arguably much better than showing a 0.
      
      The last two patches are adapted from Jerome's patchset and provide a
      VmRSS breakdown to RssAnon, RssFile and RssShm in /proc/pid/status.
      Hugh noted that this is a welcome addition, and I agree that it might
      help e.g.  debugging process memory usage, at an albeit non-zero, but
      still rather low, cost of an extra per-mm counter and some page flag checks.
      
      [1] http://lwn.net/Articles/611966/
      
      This patch (of 6):
      
      The documentation for /proc/pid/status does not mention that the value
      of VmSwap counts only swapped out anonymous private pages, and not
      swapped out pages of the underlying shmem objects (for shmem mappings).
      This is not obvious, so document this limitation.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Jerome Marchand <jmarchan@redhat.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mmzone.c: memmap_valid_within() can be boolean · 5b80287a
      Yaowei Bai authored
      Make memmap_valid_within return bool due to this particular function
      only using either one or zero as its return value.
      
      No functional change.
      Signed-off-by: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/vmalloc.c: use list_{next,first}_entry · 6219c2a2
      Geliang Tang authored
      To make the intention clearer, use list_{next,first}_entry instead of
      list_entry.
      Signed-off-by: Geliang Tang <geliangtang@163.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/page_alloc.c: do not loop over ALLOC_NO_WATERMARKS without triggering reclaim · 33d53103
      Michal Hocko authored
      __alloc_pages_slowpath is looping over ALLOC_NO_WATERMARKS requests if
      __GFP_NOFAIL is requested.  This is fragile because we are basically
      relying on somebody else to make the reclaim (be it the direct reclaim
      or OOM killer) for us.  The caller might be holding resources (e.g.
      locks) which block other reclaimers from making any progress, for
      example.  Remove the retry loop and rely on __alloc_pages_slowpath to
      invoke all allowed reclaim steps and retry logic.
      
      We have to be careful about __GFP_NOFAIL allocations from the
      PF_MEMALLOC context, even though this is a very bad idea to begin with,
      because no progress can be guaranteed at all.  We shouldn't break the
      __GFP_NOFAIL semantic here though.  It could be argued that this is
      essentially GFP_NOWAIT context, which we do not support, but PF_MEMALLOC
      is much harder to check for in existing users because the allocation
      might happen deep down the code path, performed much later after setting
      the flag, so we cannot really rule out that there is a kernel path
      triggering this combination.
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/page_alloc.c: get rid of __alloc_pages_high_priority() · fde82aaa
      Michal Hocko authored
      __alloc_pages_high_priority doesn't do anything special other than it
      calls get_page_from_freelist and loops around GFP_NOFAIL allocation
      until it succeeds.  It would be better if the first part was done in
      __alloc_pages_slowpath where we modify the zonelist because this would
      be easier to read and understand.  Opencoding the function into its only
      caller allows us to simplify it a bit as well.
      
      This patch doesn't introduce any functional changes.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Acked-by: David Rientjes <rientjes@google.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/zonelist: enumerate zonelists array index · c00eb15a
      Yaowei Bai authored
      Hardcoding the index into the zonelists array in gfp_zonelist() is not
      a good idea; let's enumerate it to improve readability.
      
      No functional change.
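
      A sketch of the named indices and the updated helper, close to what the
      patch does in include/linux/mmzone.h and gfp.h:

          enum {
                  ZONELIST_FALLBACK,      /* zonelist with fallback */
          #ifdef CONFIG_NUMA
                  ZONELIST_NOFALLBACK,    /* zonelist without fallback (__GFP_THISNODE) */
          #endif
                  MAX_ZONELISTS
          };

          static inline int gfp_zonelist(gfp_t flags)
          {
          #ifdef CONFIG_NUMA
                  if (unlikely(flags & __GFP_THISNODE))
                          return ZONELIST_NOFALLBACK;
          #endif
                  return ZONELIST_FALLBACK;
          }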
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: fix CONFIG_NUMA=n build]
      [n-horiguchi@ah.jp.nec.com: fix warning in comparing enumerator]
      Signed-off-by: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • include/linux/mmzone.h: remove unused is_unevictable_lru() · 06640290
      Yaowei Bai authored
      Since commit a0b8cab3 ("mm: remove lru parameter from
      __pagevec_lru_add and remove parts of pagevec API") there's no
      user of this function anymore, so remove it.
      Signed-off-by: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memblock.c: memblock_is_memory()/reserved() can be boolean · b4ad0c7e
      Yaowei Bai authored
      Make memblock_is_memory() and memblock_is_reserved return bool to
      improve readability due to these particular functions only using either
      one or zero as their return value.
      
      No functional change.
      Signed-off-by: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • include/linux/hugetlb.h: is_file_hugepages() can be boolean · 719ff321
      Yaowei Bai authored
      Make is_file_hugepages() return bool to improve readability due to this
      particular function only using either one or zero as its return value.
      
      This patch also removed the if condition to make is_file_hugepages
      return directly.
      
      No functional change.
      Signed-off-by: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: change mm_vmscan_lru_shrink_inactive() proto types · ba5e9579
      yalin wang authored
      Move node_id, zone_idx and shrink flags into the trace function, so
      that we don't need to calculate these args if the trace is disabled, and
      it will make this function have fewer arguments.
      Signed-off-by: yalin wang <yalin.wang2010@gmail.com>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/cma: always check which page caused allocation failure · 8ef5849f
      Joonsoo Kim authored
      Now, we have a tracepoint in test_pages_isolated() to report a pfn
      which cannot be isolated.  But, in alloc_contig_range(), some error
      paths don't call test_pages_isolated(), so it's still hard to know the
      exact pfn that causes allocation failure.
      
      This patch changes this situation by calling test_pages_isolated() in
      almost every error path.  In the allocation failure case, some overhead
      is added by this change, but allocation failure is a really rare event
      so it would not matter.
      
      In the fatal signal pending case, we don't call test_pages_isolated()
      because this failure is an intentional one.
      
      There was a bogus outer_start problem due to unchecked buddy order and
      this patch also fixes it.  Before this patch, it didn't matter, because
      the end result is the same.  But, after this patch, the tracepoint will
      report the failed pfn so it should be accurate.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/page_isolation.c: add new tracepoint, test_pages_isolated · 0f0848e5
      Joonsoo Kim authored
      cma allocation should be guaranteed to succeed.  But sometimes it can
      fail in the current implementation.  To track down the problem, we need
      to know which page is problematic and this new tracepoint will report
      it.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/page_isolation.c: return last tested pfn rather than failure indicator · fea85cff
      Joonsoo Kim authored
      This is a preparation step to report the failed pfn in a new tracepoint
      in order to analyze the cma allocation failure problem.  There is no
      functional change in this patch.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mempolicy.c: convert the shared_policy lock to a rwlock · 4a8c7bb5
      Nathan Zimmer authored
      When running the SPECint_rate gcc on some very large boxes it was
      noticed that the system was spending lots of time in
      mpol_shared_policy_lookup().  The gamess benchmark can also show it and
      is what I mostly used to chase down the issue, since I found its setup
      to be easier.
      
      To be clear the binaries were on tmpfs because of disk I/O requirements.
      We then used text replication to avoid icache misses and having all the
      copies banging on the memory where the instruction code resides.  This
      results in us hitting a bottleneck in mpol_shared_policy_lookup() since
      lookup is serialised by the shared_policy lock.
      
      I have only reproduced this on very large (3k+ cores) boxes.  The
      problem starts showing up at just a few hundred ranks getting worse
      until it threatens to livelock once it gets large enough.  For example
      on the gamess benchmark at 128 ranks this area consumes only ~1% of
      time, at 512 ranks it consumes nearly 13%, and at 2k ranks it is over
      90%.
      
      To alleviate the contention in this area I converted the spinlock to an
      rwlock.  This allows a large number of lookups to happen simultaneously.
      The results were quite good, reducing this consumption at max ranks to
      around 2%.
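
      A sketch of the read side after the conversion (close to the resulting
      mpol_shared_policy_lookup(); the modifying paths take
      write_lock(&sp->lock) instead):

          struct mempolicy *
          mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
          {
                  struct mempolicy *pol = NULL;
                  struct sp_node *sn;

                  if (!sp->root.rb_node)
                          return NULL;
                  read_lock(&sp->lock);   /* many lookups may now proceed in parallel */
                  sn = sp_lookup(sp, idx, idx + 1);
                  if (sn) {
                          mpol_get(sn->policy);
                          pol = sn->policy;
                  }
                  read_unlock(&sp->lock);
                  return pol;
          }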
      
      [akpm@linux-foundation.org: tidy up code comments]
      Signed-off-by: Nathan Zimmer <nzimmer@sgi.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Nadia Yvette Chambers <nyc@holomorphy.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: add PHYS_PFN, use it in __phys_to_pfn() · 8f235d1a
      Chen Gang authored
      __phys_to_pfn and __pfn_to_phys are symmetric, PHYS_PFN and PFN_PHYS are
      symmetric:
      
       - y = (phys_addr_t)x << PAGE_SHIFT
      
       - y >> PAGE_SHIFT = (phys_addr_t)x
      
       - (unsigned long)(y >> PAGE_SHIFT) = x
      
      [akpm@linux-foundation.org: use macro arg name `x']
      [arnd@arndb.de: include linux/pfn.h for PHYS_PFN definition]
      Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/vmscan.c: change trace_mm_vmscan_writepage() proto type · 3aa23851
      yalin wang authored
      Move trace_reclaim_flags() into the trace function, so that we don't
      need to calculate these flags if the trace is disabled.
      Signed-off-by: yalin wang <yalin.wang2010@gmail.com>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mmap.c: remove redundant local variables for may_expand_vm() · 0b57d6ba
      Chen Gang authored
      Simplify may_expand_vm().
      
      [akpm@linux-foundation.org: further simplification, per Naoya Horiguchi]
      Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mlock.c: drop unneeded initialization in munlock_vma_pages_range() · ab7a5af7
      Alexey Klimov authored
      Before use, the page pointer initialized to NULL is reinitialized by
      follow_page_mask().  Drop the useless init of the page pointer at the
      beginning of the loop.
      Signed-off-by: Alexey Klimov <klimov.linux@gmail.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kmemcg: account certain kmem allocations to memcg · 5d097056
      Vladimir Davydov authored
      Mark those kmem allocations that are known to be easily triggered from
      userspace as __GFP_ACCOUNT/SLAB_ACCOUNT, which makes them accounted to
      memcg.  For the list, see below:
      
       - threadinfo
       - task_struct
       - task_delay_info
       - pid
       - cred
       - mm_struct
       - vm_area_struct and vm_region (nommu)
       - anon_vma and anon_vma_chain
       - signal_struct
       - sighand_struct
       - fs_struct
       - files_struct
       - fdtable and fdtable->full_fds_bits
       - dentry and external_name
       - inode for all filesystems. This is the most tedious part, because
         most filesystems overwrite the alloc_inode method.
      
      The list is far from complete, so feel free to add more objects.
      Nevertheless, it should be close to "account everything" approach and
      keep most workloads within bounds.  Malevolent users will be able to
      breach the limit, but this was possible even with the former "account
      everything" approach (simply because it did not account everything in
      fact).
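
      For example, marking one of the listed caches is a one-flag change at
      cache creation time (a sketch based on kernel/fork.c; objects coming
      from kmalloc or the page allocator use __GFP_ACCOUNT instead):

          /* vm_area_struct allocations are trivially user-triggerable via mmap() */
          vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);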
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmalloc: allow to account vmalloc to memcg · 37f08dda
      Vladimir Davydov authored
      Make vmalloc family functions allocate vmalloc area pages with
      alloc_kmem_pages so that if __GFP_ACCOUNT is set they will be accounted
      to memcg.  This is needed, at least, to account alloc_fdmem allocations.
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • slab: add SLAB_ACCOUNT flag · 230e9fc2
      Vladimir Davydov authored
      Currently, if we want to account all objects of a particular kmem cache,
      we have to pass __GFP_ACCOUNT to each kmem_cache_alloc call, which is
      inconvenient.  This patch introduces SLAB_ACCOUNT flag which if passed
      to kmem_cache_create will force accounting for every allocation from
      this cache even if __GFP_ACCOUNT is not passed.
      
      This patch does not make any of the existing caches use this flag - it
      will be done later in the series.
      
      Note, a cache with SLAB_ACCOUNT cannot be merged with a cache w/o
      SLAB_ACCOUNT, because merged caches share the same kmem_cache struct and
      hence cannot have different sets of SLAB_* flags.  Thus using this flag
      will probably reduce the number of merged slabs even if kmem accounting
      is not used (only compiled in).
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Suggested-by: Tejun Heo <tj@kernel.org>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: only account kmem allocations marked as __GFP_ACCOUNT · a9bb7e62
      Vladimir Davydov authored
      Black-list kmem accounting policy (aka __GFP_NOACCOUNT) turned out to be
      fragile and difficult to maintain, because there seem to be many more
      allocations that should not be accounted than those that should be.
      Besides, falsely accounting an allocation might result in much worse
      consequences than not accounting at all, namely increased memory
      consumption due to pinned dead kmem caches.
      
      So this patch switches kmem accounting to the white-list policy: now only
      those kmem allocations that are marked as __GFP_ACCOUNT are accounted to
      memcg.  Currently, no kmem allocations are marked like this.  The
      following patches will mark several kmem allocations that are known to
      be easily triggered from userspace and therefore should be accounted to
      memcg.
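
      Under the white-list policy a caller opts in per allocation site, e.g.
      (a sketch; struct foo is a made-up example object):

          /* easily user-triggerable, so charge it to the caller's memcg */
          struct foo *f = kmalloc(sizeof(*f), GFP_KERNEL | __GFP_ACCOUNT);
          if (!f)
                  return -ENOMEM;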
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Revert "gfp: add __GFP_NOACCOUNT" · 20b5c303
      Vladimir Davydov authored
      This reverts commit 8f4fc071 ("gfp: add __GFP_NOACCOUNT").
      
      Black-list kmem accounting policy (aka __GFP_NOACCOUNT) turned out to be
      fragile and difficult to maintain, because there seem to be many more
      allocations that should not be accounted than those that should be.
      Besides, falsely accounting an allocation might result in much worse
      consequences than not accounting at all, namely increased memory
      consumption due to pinned dead kmem caches.
      
      So it was decided to switch to the white-list policy.  This patch
      reverts bits introducing the black-list policy.  The white-list policy
      will be introduced later in the series.
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Revert "kernfs: do not account ino_ida allocations to memcg" · b2a209ff
      Vladimir Davydov authored
      Currently, all kmem allocations (namely every kmem_cache_alloc, kmalloc,
      alloc_kmem_pages call) are accounted to memory cgroup automatically.
      Callers have to explicitly opt out if they don't want/need accounting
      for some reason.  Such a design decision leads to several problems:
      
       - kmalloc users are highly sensitive to failures, many of them
         implicitly rely on the fact that kmalloc never fails, while memcg
         makes failures quite plausible.
      
       - A lot of objects are shared among different containers by design.
         Accounting such objects to one of containers is just unfair.
         Moreover, it might lead to pinning a dead memcg along with its kmem
         caches, which aren't tiny, which might result in noticeable increase
         in memory consumption for no apparent reason in the long run.
      
       - There are tons of short-lived objects. Accounting them to memcg will
         only result in slight noise and won't change the overall picture, but
         we still have to pay accounting overhead.
      
      For more info, see
      
       - http://lkml.kernel.org/r/20151105144002.GB15111%40dhcp22.suse.cz
       - http://lkml.kernel.org/r/20151106090555.GK29259@esperanza
      
      Therefore this patchset switches to the white list policy.  Now kmalloc
      users have to explicitly opt in by passing __GFP_ACCOUNT flag.
      
      Currently, the list of accounted objects is quite limited and only
      includes those allocations that (1) are known to be easily triggered
      from userspace and (2) can fail gracefully (for the full list see patch
      no.  6) and it still misses many object types.  However, accounting only
      those objects should be a satisfactory approximation of the behavior we
      used to have for most sane workloads.
      
      This patch (of 6):
      
      Revert 499611ed ("kernfs: do not account ino_ida allocations
      to memcg").
      
      Black-list kmem accounting policy (aka __GFP_NOACCOUNT) turned out to be
      fragile and difficult to maintain, because there seem to be many more
      allocations that should not be accounted than those that should be.
      Besides, falsely accounting an allocation might result in much worse
      consequences than not accounting at all, namely increased memory
      consumption due to pinned dead kmem caches.
      
      So it was decided to switch to the white-list policy.  This patch reverts
      bits introducing the black-list policy.  The white-list policy will be
      introduced later in the series.
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/slab.c: add a helper function get_first_slab · 7aa0d227
      Geliang Tang authored
      Add a new helper function get_first_slab() that gets the first slab
      from a kmem_cache_node.
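
      A sketch of the helper, close to the 4.5-era mm/slab.c where slabs are
      tracked by struct page on the per-node partial/free lists:

          static struct page *get_first_slab(struct kmem_cache_node *n)
          {
                  struct page *page;

                  page = list_first_entry_or_null(&n->slabs_partial,
                                                  struct page, lru);
                  if (!page) {
                          n->free_touched = 1;    /* about to dip into the free list */
                          page = list_first_entry_or_null(&n->slabs_free,
                                                          struct page, lru);
                  }
                  return page;
          }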
      Signed-off-by: Geliang Tang <geliangtang@163.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/slab.c: use list_for_each_entry in cache_flusharray · 73c0219d
      Geliang Tang authored
      Simplify the code with list_for_each_entry().
      Signed-off-by: Geliang Tang <geliangtang@163.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/slab.c use list_first_entry_or_null() · d8ad47d8
      Geliang Tang authored
      Simplify the code with list_first_entry_or_null().
      Signed-off-by: Geliang Tang <geliangtang@163.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>