1. 10 Aug, 2010 40 commits
    • vmscan: simplify shrink_inactive_list() · e247dbce
      KOSAKI Motohiro authored
      Now max_scan in shrink_inactive_list() is always no more than
      SWAP_CLUSTER_MAX, so the page-scanning loop inside it can be removed.
      This patch also helps reduce stack usage.
      
      detail
       - remove "while (nr_scanned < max_scan)" loop
       - remove nr_freed (now, we use nr_reclaimed directly)
       - remove nr_scan (now, we use nr_scanned directly)
       - rename max_scan to nr_to_scan
       - pass nr_to_scan into isolate_pages() directly instead of
         using SWAP_CLUSTER_MAX (see the sketch below)
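      As an illustration only (not part of the original changelog and not the
      actual kernel code), a toy user-space C sketch of the before/after shape,
      with made-up stand-ins for the isolation and reclaim helpers:

        #include <stdio.h>

        #define SWAP_CLUSTER_MAX 32UL

        /* made-up stand-ins for the real isolation/reclaim helpers */
        static unsigned long isolate_pages(unsigned long nr, unsigned long *scanned)
        {
            *scanned = nr;          /* pretend we scanned exactly what was asked */
            return nr;              /* ...and isolated all of it */
        }

        static unsigned long shrink_page_list(unsigned long nr_taken)
        {
            return nr_taken / 2;    /* pretend half of the batch was reclaimed */
        }

        /* old shape: batch loop driven by max_scan */
        static unsigned long shrink_inactive_list_old(unsigned long max_scan)
        {
            unsigned long nr_scanned = 0, nr_reclaimed = 0;

            while (nr_scanned < max_scan) {
                unsigned long nr_scan;
                unsigned long nr_taken = isolate_pages(SWAP_CLUSTER_MAX, &nr_scan);

                nr_scanned += nr_scan;
                nr_reclaimed += shrink_page_list(nr_taken);
            }
            return nr_reclaimed;
        }

        /* new shape: callers never ask for more than SWAP_CLUSTER_MAX,
         * so a single pass over nr_to_scan is enough */
        static unsigned long shrink_inactive_list_new(unsigned long nr_to_scan)
        {
            unsigned long nr_scanned;
            unsigned long nr_taken = isolate_pages(nr_to_scan, &nr_scanned);

            return shrink_page_list(nr_taken);
        }

        int main(void)
        {
            printf("old: %lu, new: %lu\n",
                   shrink_inactive_list_old(SWAP_CLUSTER_MAX),
                   shrink_inactive_list_new(SWAP_CLUSTER_MAX));
            return 0;
        }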
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michael Rubin <mrubin@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmscan: kill prev_priority completely · 25edde03
      KOSAKI Motohiro authored
      zone->prev_priority has been unused since 2.6.28, so it can be removed
      safely.  This reduces stack usage slightly.
      
      Now I have to say that I'm sorry: two years ago I thought prev_priority
      could be integrated again because it seemed useful, but four (or more)
      attempts have not produced good performance numbers, so I am giving up
      on that approach.
      
      The rest of this changelog consists of notes on prev_priority: why it
      existed in the first place and why it may not be necessary any more.
      This information is based heavily on discussions between Andrew Morton,
      Rik van Riel and KOSAKI Motohiro, who is quoted extensively below.
      
      Historically prev_priority was important because it determined when the
      VM would start unmapping PTE pages, i.e. it influenced the notable
      balances within the VM: Anon vs File and Mapped vs Unmapped.  Without
      prev_priority, there is a potential risk of unnecessarily increasing
      minor faults, as a large amount of read activity on use-once pages could
      push mapped pages to the end of the LRU, where they get unmapped.
      
      There is no proof this is still a problem, but currently it is not
      considered to be.  Active file pages are not deactivated if the active
      file list is smaller than the inactive list, reducing the likelihood
      that file-mapped pages are pushed off the LRU, and referenced executable
      pages are kept on the active list to avoid them getting pushed out by
      read activity.
      
      Even if it were still a problem, prev_priority would not work nowadays.
      First of all, current vmscan still contains a lot of UP-centric code,
      which exposes weaknesses on machines with dozens of CPUs.  I think we
      need more and more improvement there.
      
      The problem is that current vmscan mixes up per-system pressure,
      per-zone pressure and per-task pressure.  For example, prev_priority
      tries to boost a task's priority towards that of other concurrent
      reclaimers, but if another task has a mempolicy restriction this is
      unnecessary and also causes large latencies and excessive reclaim.
      Per-task priority plus the prev_priority adjustment emulates per-system
      pressure, but that has two issues: 1) the emulation is too rough and
      brutal, and 2) we need per-zone pressure, not per-system pressure.
      
      Another example: DEF_PRIORITY is currently 12, which means the LRU is
      rotated about two full cycles (1/4096 + 1/2048 + 1/1024 + .. + 1) before
      the OOM killer is invoked.  But if 10,000 threads enter DEF_PRIORITY
      reclaim at the same time, the system is under higher memory pressure
      than a single priority==0 pass (1/4096 * 10,000 > 2).  prev_priority
      cannot solve such a multithreaded workload issue; in other words, the
      prev_priority concept assumes the system does not have many threads.
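      A worked version of the arithmetic above (the notation is added here and
      is not part of the original changelog): a single task scanning at
      priorities 12 down to 0 covers roughly

          \sum_{p=0}^{12} 2^{-p} = \tfrac{1}{4096} + \tfrac{1}{2048} + \dots + 1
                                 = 2 - \tfrac{1}{4096} \approx 2

      passes over the LRU, whereas 10,000 tasks all scanning at DEF_PRIORITY
      alone already contribute 10000 \times \tfrac{1}{4096} \approx 2.4 passes,
      i.e. more combined pressure than a single priority==0 pass.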
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michael Rubin <mrubin@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmscan: tracing: add a postprocessing script for reclaim-related ftrace events · b898cc70
      Mel Gorman authored
      Add a simple post-processing script for the reclaim-related trace events.
      It can be used to give an indication of how much traffic there is on the
      LRU lists and how severe latencies due to reclaim are.  Example output
      looks like the following
      
      Reclaim latencies expressed as order-latency_in_ms
      uname-3942             9-200.179000000004 9-98.7900000000373 9-99.8330000001006
      kswapd0-311            0-662.097999999998 0-2.79700000002049 \
      	0-149.100000000035 0-3295.73600000003 0-9806.31799999997 0-35528.833 \
      	0-10043.197 0-129740.979 0-3.50500000000466 0-3.54899999999907 \
      	0-9297.78999999992 0-3.48499999998603 0-3596.97999999998 0-3.92799999995623 \
      	0-3.35000000009313 0-16729.017 0-3.57799999997951 0-47435.0630000001 \
      	0-3.7819999998901 0-5864.06999999995 0-18635.334 0-10541.289 9-186011.565 \
      	9-3680.86300000001 9-1379.06499999994 9-958571.115 9-66215.474 \
      	9-6721.14699999988 9-1962.15299999993 9-10948061.125 9-2267.83199999994 \
      	9-47120.9029999999 9-427653.886 9-2.6359999999404 9-632.148999999976 \
      	9-476.753000000026 9-495.577000000048 9-8.45900000003166 9-6.6820000000298 \
      	9-1.30500000016764 9-251.746000000043 9-383.905000000028 9-80.1419999999925 \
      	9-281.160000000149 9-14.8780000000261 9-381.45299999998 9-512.07799999998 \
      	9-49.5519999999087 9-167.439000000013 9-183.820999999996 9-239.527999999933 \
      	9-19.9479999998584 9-148.747999999905 9-164.583000000101 9-16.9480000000913 \
      	9-192.376000000164 9-64.1010000000242 9-1.40800000005402 9-3.60800000000745 \
      	9-17.1359999999404 9-4.69500000006519 9-2.06400000001304 9-1582488.554 \
      	9-6244.19499999983 9-348153.812 9-2.0999999998603 9-0.987999999895692 \
      	0-32218.473 0-1.6140000000596 0-1.28100000019185 0-1.41300000017509 \
      	0-1.32299999985844 0-602.584000000032 0-1.34400000004098 0-1.6929999999702 \
      	1-22101.8190000001 9-174876.724 9-16.2420000000857 9-175.165999999736 \
      	9-15.8589999997057 9-0.604999999981374 9-3061.09000000032 9-479.277000000235 \
      	9-1.54499999992549 9-771.985000000335 9-4.88700000010431 9-15.0649999999441 \
      	9-0.879999999888241 9-252.01500000013 9-1381.03600000031 9-545.689999999944 \
      	9-3438.0129999998 9-3343.70099999988
      bench-stresshig-3942   9-7063.33900000004 9-129960.482 9-2062.27500000002 \
      	9-3845.59399999992 9-171.82799999998 9-16493.821 9-7615.23900000006 \
      	9-10217.848 9-983.138000000035 9-2698.39999999991 9-4016.1540000001 \
      	9-5522.37700000009 9-21630.429 \
      	9-15061.048 9-10327.953 9-542.69700000016 9-317.652000000002 \
      	9-8554.71699999995 9-1786.61599999992 9-1899.31499999994 9-2093.41899999999 \
      	9-4992.62400000007 9-942.648999999976 9-1923.98300000001 9-3.7980000001844 \
      	9-5.99899999983609 9-0.912000000011176 9-1603.67700000014 9-1.98300000000745 \
      	9-3.96500000008382 9-0.902999999932945 9-2802.72199999983 9-1078.24799999991 \
      	9-2155.82900000014 9-10.058999999892 9-1984.723 9-1687.97999999998 \
      	9-1136.05300000007 9-3183.61699999985 9-458.731000000145 9-6.48600000003353 \
      	9-1013.25200000009 9-8415.22799999989 9-10065.584 9-2076.79600000009 \
      	9-3792.65699999989 9-71.2010000001173 9-2560.96999999997 9-2260.68400000012 \
      	9-2862.65799999982 9-1255.81500000018 9-15.7440000001807 9-4.33499999996275 \
      	9-1446.63800000004 9-238.635000000009 9-60.1790000000037 9-4.38800000003539 \
      	9-639.567000000039 9-306.698000000091 9-31.4070000001229 9-74.997999999905 \
      	9-632.725999999791 9-1625.93200000003 9-931.266000000061 9-98.7749999999069 \
      	9-984.606999999844 9-225.638999999966 9-421.316000000108 9-653.744999999879 \
      	9-572.804000000004 9-769.158999999985 9-603.918000000063 9-4.28499999991618 \
      	9-626.21399999992 9-1721.25 9-0.854999999981374 9-572.39599999995 \
      	9-681.881999999983 9-1345.12599999993 9-363.666999999899 9-3823.31099999999 \
      	9-2991.28200000012 9-4.27099999994971 9-309.76500000013 9-3068.35700000008 \
      	9-788.25 9-3515.73999999999 9-2065.96100000013 9-286.719999999972 \
      	9-316.076000000117 9-344.151000000071 9-2.51000000000931 9-306.688000000082 \
      	9-1515.00099999993 9-336.528999999864 9-793.491999999853 9-457.348999999929 \
      	9-13620.155 9-119.933999999892 9-35.0670000000391 9-918.266999999993 \
      	9-828.569000000134 9-4863.81099999999 9-105.222000000067 9-894.23900000006 \
      	9-110.964999999851 9-0.662999999942258 9-12753.3150000002 9-12.6129999998957 \
      	9-13368.0899999999 9-12.4199999999255 9-1.00300000002608 9-1.41100000008009 \
      	9-10300.5290000001 9-16.502000000095 9-30.7949999999255 9-6283.0140000002 \
      	9-4320.53799999994 9-6826.27300000004 9-3.07299999985844 9-1497.26799999992 \
      	9-13.4040000000969 9-3.12999999988824 9-3.86100000003353 9-11.3539999998175 \
      	9-0.10799999977462 9-21.780999999959 9-209.695999999996 9-299.647000000114 \
      	9-6.01699999999255 9-20.8349999999627 9-22.5470000000205 9-5470.16800000006 \
      	9-7.60499999998137 9-0.821000000229105 9-1.56600000010803 9-14.1669999998994 \
      	9-0.209000000031665 9-1.82300000009127 9-1.70000000018626 9-19.9429999999702 \
      	9-124.266999999993 9-0.0389999998733401 9-6.71400000015274 9-16.7710000001825 \
      	9-31.0409999999683 9-0.516999999992549 9-115.888000000035 9-5.19900000002235 \
      	9-222.389999999898 9-11.2739999999758 9-80.9050000000279 9-8.14500000001863 \
      	9-4.44599999999627 9-0.218999999808148 9-0.715000000083819 9-0.233000000007451 \
      	9-48.2630000000354 9-248.560999999987 9-374.96800000011 9-644.179000000004 \
      	9-0.835999999893829 9-79.0060000000522 9-128.447999999858 9-0.692000000039116 \
      	9-5.26500000013039 9-128.449000000022 9-2.04799999995157 9-12.0990000001621 \
      	9-8.39899999997579 9-10.3860000001732 9-11.9310000000987 9-53.4450000000652 \
      	9-0.46999999997206 9-2.96299999998882 9-17.9699999999721 9-0.776000000070781 \
      	9-25.2919999998994 9-33.1110000000335 9-0.434000000124797 9-0.641000000061467 \
      	9-0.505000000121072 9-1.12800000002608 9-149.222000000067 9-1.17599999997765 \
      	9-3247.33100000001 9-10.7439999999478 9-153.523000000045 9-1.38300000014715 \
      	9-794.762000000104 9-3.36199999996461 9-128.765999999829 9-181.543999999994 \
      	9-78149.8229999999 9-176.496999999974 9-89.9940000001807 9-9.12700000009499 \
      	9-250.827000000048 9-0.224999999860302 9-0.388999999966472 9-1.16700000036508 \
      	9-32.1740000001155 9-12.6800000001676 9-0.0720000001601875 9-0.274999999906868 \
      	9-0.724000000394881 9-266.866000000387 9-45.5709999999963 9-4.54399999976158 \
      	9-8.27199999988079 9-4.38099999958649 9-0.512000000104308 9-0.0640000002458692 \
      	9-5.20000000018626 9-0.0839999997988343 9-12.816000000108 9-0.503000000026077 \
      	9-0.507999999914318 9-6.23999999975786 9-3.35100000025705 9-18.8530000001192 \
      	9-25.2220000000671 9-68.2309999996796 9-98.9939999999478 9-0.441000000108033 \
      	9-4.24599999981001 9-261.702000000048 9-3.01599999982864 9-0.0749999997206032 \
      	9-0.0370000000111759 9-4.375 9-3.21800000034273 9-11.3960000001825 \
      	9-0.0540000000037253 9-0.286000000312924 9-0.865999999921769 \
      	9-0.294999999925494 9-6.45999999996275 9-4.31099999975413 9-128.248999999836 \
      	9-0.282999999821186 9-102.155000000261 9-0.0860000001266599 \
      	9-0.0540000000037253 9-0.935000000055879 9-0.0670000002719462 \
      	9-5.8640000000596 9-19.9860000000335 9-4.18699999991804 9-0.566000000108033 \
      	9-2.55099999997765 9-0.702000000048429 9-131.653999999631 9-0.638999999966472 \
      	9-14.3229999998584 9-183.398000000045 9-178.095999999903 9-3.22899999981746 \
      	9-7.31399999978021 9-22.2400000002235 9-11.7979999999516 9-108.10599999968 \
      	9-99.0159999998286 9-102.640999999829 9-38.414000000339
      Process                  Direct     Wokeup      Pages      Pages    Pages
      details                   Rclms     Kswapd    Scanned    Sync-IO ASync-IO
      cc1-30800                     0          1          0          0        0      wakeup-0=1
      cc1-24260                     0          1          0          0        0      wakeup-0=1
      cc1-24152                     0         12          0          0        0      wakeup-0=12
      cc1-8139                      0          1          0          0        0      wakeup-0=1
      cc1-4390                      0          1          0          0        0      wakeup-0=1
      cc1-4648                      0          7          0          0        0      wakeup-0=7
      cc1-4552                      0          3          0          0        0      wakeup-0=3
      dd-4550                       0         31          0          0        0      wakeup-0=31
      date-4898                     0          1          0          0        0      wakeup-0=1
      cc1-6549                      0          7          0          0        0      wakeup-0=7
      as-22202                      0         17          0          0        0      wakeup-0=17
      cc1-6495                      0          9          0          0        0      wakeup-0=9
      cc1-8299                      0          1          0          0        0      wakeup-0=1
      cc1-6009                      0          1          0          0        0      wakeup-0=1
      cc1-2574                      0          2          0          0        0      wakeup-0=2
      cc1-30568                     0          1          0          0        0      wakeup-0=1
      cc1-2679                      0          6          0          0        0      wakeup-0=6
      sh-13747                      0         12          0          0        0      wakeup-0=12
      cc1-22193                     0         18          0          0        0      wakeup-0=18
      cc1-30725                     0          2          0          0        0      wakeup-0=2
      as-4392                       0          2          0          0        0      wakeup-0=2
      cc1-28180                     0         14          0          0        0      wakeup-0=14
      cc1-13697                     0          2          0          0        0      wakeup-0=2
      cc1-22207                     0          8          0          0        0      wakeup-0=8
      cc1-15270                     0        179          0          0        0      wakeup-0=179
      cc1-22011                     0         82          0          0        0      wakeup-0=82
      cp-14682                      0          1          0          0        0      wakeup-0=1
      as-11926                      0          2          0          0        0      wakeup-0=2
      cc1-6016                      0          5          0          0        0      wakeup-0=5
      make-18554                    0         13          0          0        0      wakeup-0=13
      cc1-8292                      0         12          0          0        0      wakeup-0=12
      make-24381                    0          1          0          0        0      wakeup-1=1
      date-18681                    0         33          0          0        0      wakeup-0=33
      cc1-32276                     0          1          0          0        0      wakeup-0=1
      timestamp-outpu-2809          0        253          0          0        0      wakeup-0=240 wakeup-1=13
      date-18624                    0          7          0          0        0      wakeup-0=7
      cc1-30960                     0          9          0          0        0      wakeup-0=9
      cc1-4014                      0          1          0          0        0      wakeup-0=1
      cc1-30706                     0         22          0          0        0      wakeup-0=22
      uname-3942                    4          1        306          0       17      direct-9=4       wakeup-9=1
      cc1-28207                     0          1          0          0        0      wakeup-0=1
      cc1-30563                     0          9          0          0        0      wakeup-0=9
      cc1-22214                     0         10          0          0        0      wakeup-0=10
      cc1-28221                     0         11          0          0        0      wakeup-0=11
      cc1-28123                     0          6          0          0        0      wakeup-0=6
      kswapd0-311                   0          7     357302          0    34233      wakeup-0=7
      cc1-5988                      0          7          0          0        0      wakeup-0=7
      as-30734                      0        161          0          0        0      wakeup-0=161
      cc1-22004                     0         45          0          0        0      wakeup-0=45
      date-4590                     0          4          0          0        0      wakeup-0=4
      cc1-15279                     0        213          0          0        0      wakeup-0=213
      date-30735                    0          1          0          0        0      wakeup-0=1
      cc1-30583                     0          4          0          0        0      wakeup-0=4
      cc1-32324                     0          2          0          0        0      wakeup-0=2
      cc1-23933                     0          3          0          0        0      wakeup-0=3
      cc1-22001                     0         36          0          0        0      wakeup-0=36
      bench-stresshig-3942        287        287      80186       6295    12196      direct-9=287       wakeup-9=287
      cc1-28170                     0          7          0          0        0      wakeup-0=7
      date-7932                     0         92          0          0        0      wakeup-0=92
      cc1-22222                     0          6          0          0        0      wakeup-0=6
      cc1-32334                     0         16          0          0        0      wakeup-0=16
      cc1-2690                      0          6          0          0        0      wakeup-0=6
      cc1-30733                     0          9          0          0        0      wakeup-0=9
      cc1-32298                     0          2          0          0        0      wakeup-0=2
      cc1-13743                     0         18          0          0        0      wakeup-0=18
      cc1-22186                     0          4          0          0        0      wakeup-0=4
      cc1-28214                     0         11          0          0        0      wakeup-0=11
      cc1-13735                     0          1          0          0        0      wakeup-0=1
      updatedb-8173                 0         18          0          0        0      wakeup-0=18
      cc1-13750                     0          3          0          0        0      wakeup-0=3
      cat-2808                      0          2          0          0        0      wakeup-0=2
      cc1-15277                     0        169          0          0        0      wakeup-0=169
      date-18317                    0          1          0          0        0      wakeup-0=1
      cc1-15274                     0        197          0          0        0      wakeup-0=197
      cc1-30732                     0          1          0          0        0      wakeup-0=1
      
      Kswapd                   Kswapd      Order      Pages      Pages    Pages
      Instance                Wakeups  Re-wakeup    Scanned    Sync-IO ASync-IO
      kswapd0-311                  91         24     357302          0    34233      wake-0=31 wake-1=1 wake-9=59       rewake-0=10 rewake-1=1 rewake-9=13
      
      Summary
      Direct reclaims:     		291
      Direct reclaim pages scanned:	437794
      Direct reclaim write sync I/O:	6295
      Direct reclaim write async I/O:	46446
      Wake kswapd requests:		2152
      Time stalled direct reclaim: 	519.163009000002 ms
      
      Kswapd wakeups:			91
      Kswapd pages scanned:		357302
      Kswapd reclaim write sync I/O:	0
      Kswapd reclaim write async I/O:	34233
      Time kswapd awake:		5282.749757 ms
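      As an editorial aside, input for a script like this is gathered by
      enabling the reclaim tracepoints first.  A minimal C sketch, assuming the
      usual debugfs mount point (the exact location of the post-processing
      script is not given in this log and would need to be checked in the
      patch):

        #include <stdio.h>

        /* assumed location of the tracing files; adjust if tracefs is mounted elsewhere */
        #define TRACING "/sys/kernel/debug/tracing"

        static int write_str(const char *path, const char *val)
        {
            FILE *f = fopen(path, "w");

            if (!f) {
                perror(path);
                return -1;
            }
            fputs(val, f);
            fclose(f);
            return 0;
        }

        int main(void)
        {
            /* enable every vmscan tracepoint, then turn tracing on */
            if (write_str(TRACING "/events/vmscan/enable", "1"))
                return 1;
            write_str(TRACING "/tracing_on", "1");

            puts("vmscan events enabled; capture " TRACING "/trace_pipe");
            puts("and feed the captured trace to the post-processing script.");
            return 0;
        }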
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Larry Woodman <lwoodman@redhat.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michael Rubin <mrubin@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmscan: tracing: add trace event when a page is written · 755f0225
      Mel Gorman authored
      Add a trace event for when page reclaim queues a page for IO and records
      whether it is synchronous or asynchronous.  Excessive synchronous IO for a
      process can result in noticeable stalls during direct reclaim.  Excessive
      IO from page reclaim may indicate that the system is seriously
      under-provisioned for the number of dirty pages that exist.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Larry Woodman <lwoodman@redhat.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michael Rubin <mrubin@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmscan: tracing: add trace events for LRU page isolation · a8a94d15
      Mel Gorman authored
      Add an event for when pages are isolated en-masse from the LRU lists.
      This event augments the information available on LRU traffic and can be
      used to evaluate lumpy reclaim.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Larry Woodman <lwoodman@redhat.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michael Rubin <mrubin@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmscan: tracing: add trace events for kswapd wakeup, sleeping and direct reclaim · 33906bc5
      Mel Gorman authored
      Add two trace events for kswapd waking up and going to sleep, for the
      purposes of tracking kswapd activity, and two trace events for direct
      reclaim beginning and ending.  The information can be used to work out
      how much time a process or the system is spending on the reclamation of
      pages and, in the case of direct reclaim, how many pages were reclaimed
      for that process.  High-frequency triggering of these events could point
      to memory pressure problems.
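      One way to read these events (the notation is added here, not taken from
      the patch): pairing each direct-reclaim begin event at time t_i^{begin}
      with its matching end event at time t_i^{end} for a process gives that
      process's total stall time

          T_{stall} = \sum_i \left( t_i^{end} - t_i^{begin} \right),

      which corresponds to the "Time stalled direct reclaim" line in the
      post-processing script's summary shown earlier.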
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Larry Woodman <lwoodman@redhat.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michael Rubin <mrubin@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmscan: recalculate lru_pages on each priority · c6a8a8c5
      KOSAKI Motohiro authored
      shrink_zones() can take a relatively long time and lru_pages can change
      dramatically while it runs, so lru_pages should be recalculated for each
      priority.
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmscan: zone_reclaim don't call disable_swap_token() · b00d3ea7
      KOSAKI Motohiro authored
      The swap token has never worked when zone reclaim is enabled, because
      __zone_reclaim() always calls disable_swap_token() unconditionally.
      
      This kills the swap token feature completely, and as far as I know
      nobody wants that.  Remove the call.
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: implement writeback livelock avoidance using page tagging · f446daae
      Jan Kara authored
      We try to avoid livelocks of writeback when someone steadily creates
      dirty pages in a mapping we are writing out.  For memory-cleaning
      writeback, using nr_to_write works reasonably well, but we cannot really
      use it for data integrity writeback.  This patch tries to solve the
      problem.
      
      The idea is simple: Tag all pages that should be written back with a
      special tag (TOWRITE) in the radix tree.  This can be done rather quickly
      and thus livelocks should not happen in practice.  Then we start doing the
      hard work of locking pages and sending them to disk only for those
      pages that have the TOWRITE tag set.
      
      Note: Adding a new radix tree tag grows the radix tree node from 288 to
      296 bytes for 32-bit archs and from 552 to 560 bytes for 64-bit archs.
      However, the number of slab/slub items per page remains the same (13 and
      7 respectively).
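      A user-space analogue of the idea, added here for illustration only
      (arrays stand in for the radix tree and its tags; this is not the kernel
      code): tag everything that is dirty now, then do the expensive work only
      for tagged entries, so pages dirtied during the walk cannot extend it
      indefinitely.

        #include <stdbool.h>
        #include <stdio.h>

        #define NPAGES 8

        static bool dirty[NPAGES];
        static bool towrite[NPAGES];    /* stand-in for the TOWRITE tag */

        /* phase 1: quick pass that only flips tags */
        static void tag_pages_for_writeback(void)
        {
            for (int i = 0; i < NPAGES; i++)
                if (dirty[i])
                    towrite[i] = true;
        }

        /* phase 2: do the expensive work only for tagged entries; a page
         * dirtied during this loop is simply left for the next writeback
         * cycle instead of livelocking this one */
        static void write_tagged_pages(void)
        {
            for (int i = 0; i < NPAGES; i++) {
                if (!towrite[i])
                    continue;
                printf("writing page %d\n", i);
                dirty[i] = towrite[i] = false;

                if (i == 1)
                    dirty[5] = true;    /* simulate a concurrent re-dirty */
            }
        }

        int main(void)
        {
            dirty[1] = dirty[3] = true;
            tag_pages_for_writeback();
            write_tagged_pages();       /* writes pages 1 and 3 only */
            printf("page 5 left dirty for the next cycle: %d\n", dirty[5]);
            return 0;
        }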
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • radix-tree: implement function radix_tree_range_tag_if_tagged · ebf8aa44
      Jan Kara authored
      Implement a function that, for each item in a given range, sets one tag
      if another tag is set on that item.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • rmap: add anon_vma bug checks · 44ab57a0
      Andrea Arcangeli authored
      Verify the refcounting doesn't go wrong, and resurrect the check in
      __page_check_anon_rmap as in old anon-vma code.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • rmap: resurrect page_address_in_vma anon_vma check · 21d0d443
      Andrea Arcangeli authored
      With root anon-vma it's trivial to keep doing the usual check as in
      old-anon-vma code.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • rmap: always use anon_vma root pointer · 288468c3
      Andrea Arcangeli authored
      Always use anon_vma->root pointer instead of anon_vma_chain.prev.
      
      Also optimize the map paths: if a mapping is already established there
      is no need to overwrite it with the root anon-vma list; we can keep the
      more fine-grained anon-vma and skip the overwrite (see the PageAnon
      check in the !exclusive case).  This is also the optimization that hid
      the KSM bug, as it tends to make ksm_might_need_to_copy skip the copy,
      but only the proper fix to ksm_might_need_to_copy guarantees not
      triggering the KSM bug unless KSM is in use.  This is an optimization
      only...
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      [kamezawa.hiroyu@jp.fujitsu.com: fix false positive BUG_ON in __page_set_anon_rmap]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ksm: fix ksm swapin time optimization · ba6f0ff3
      Andrea Arcangeli authored
      The new anon-vma code was suboptimal and led to erratic invocation of
      ksm_does_need_to_copy.  That leads to host hangs, guest VNC lockups, or
      other weird behaviour.  It is unclear why ksm_does_need_to_copy is
      unstable, but the point is that when KSM is not in use,
      ksm_does_need_to_copy must never run or we bounce pages for no good
      reason.  I suspect the same hangs will happen with KVM swaps.  But this
      at least fixes the regression in the new anon-vma code and only lets
      KSM bugs trigger when KSM is in use.
      
      The code in do_swap_page likely doesn't cope well with a page that is
      not in the swapcache, especially the memcg code.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: Izik Eidus <ieidus@yahoo.com>
      Cc: Avi Kivity <avi@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • rmap: always add new vmas at the end · 26ba0cb6
      Andrea Arcangeli authored
      Make sure to always add new VMAs at the end of the list.  This is
      important so rmap_walk does not miss a VMA that was created during the
      rmap_walk.
      
      The old code got this right most of the time due to luck, but was buggy
      when anon_vma_prepare reused a mergeable anon_vma.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mmap: remove unnecessary lock from __vma_link · 5e549e98
      Andrea Arcangeli authored
      There's no anon-vma related mangling happening inside __vma_link
      anymore, so there is no need for anon_vma locking there.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • shmem: reduce pagefault lock contention · ff36b801
      Shaohua Li authored
      I'm running a shmem pagefault test case (see attached file) on a 64-CPU
      system.  Profiling shows shmem_inode_info->lock is heavily contended and
      100% of CPU time is spent trying to take the lock.  In the pagefault (no
      swap) case, shmem_getpage takes the lock twice; the second acquisition
      is avoidable if we preallocate a page, so we can save one round of
      locking.  This is what the patch below does.
      
      The result of the test case:
      2.6.35-rc3: ~20s
      2.6.35-rc3 + patch: ~12s
      so this is a 40% improvement.
      
      One might argue that we could have better locking for shmem.  But even
      if shmem were lockless, the pagefault path would soon have the pagecache
      lock heavily contended because shmem must add the new page to the
      pagecache.  So until we have better locking for the pagecache, improving
      shmem locking does not gain much more.  I ran a similar pagefault test
      against a ramfs file; the result is ~10.5s.
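      A minimal user-space pthread analogue of the locking change, added here
      for illustration (the names are invented and this is not the shmem
      code): preallocate before taking the lock so the no-swap fault path
      acquires it only once.

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        static pthread_mutex_t info_lock = PTHREAD_MUTEX_INITIALIZER; /* ~ shmem_inode_info->lock */
        static void *slot;                                            /* ~ the page slot */

        /* old pattern: look up under the lock, drop it to allocate, retake it */
        static void *get_page_two_locks(void)
        {
            void *page;

            pthread_mutex_lock(&info_lock);
            page = slot;
            pthread_mutex_unlock(&info_lock);
            if (page)
                return page;

            page = malloc(4096);
            pthread_mutex_lock(&info_lock);   /* second acquisition */
            if (!slot) {
                slot = page;
            } else {
                free(page);
                page = slot;
            }
            pthread_mutex_unlock(&info_lock);
            return page;
        }

        /* new pattern: preallocate first, then take the lock exactly once */
        static void *get_page_prealloc(void)
        {
            void *prealloc = malloc(4096);
            void *page;

            pthread_mutex_lock(&info_lock);
            if (!slot) {
                slot = prealloc;
                prealloc = NULL;
            }
            page = slot;
            pthread_mutex_unlock(&info_lock);

            free(prealloc);                   /* lost the race: discard */
            return page;
        }

        int main(void)
        {
            printf("%p %p\n", get_page_two_locks(), get_page_prealloc());
            return 0;
        }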
      
      [akpm@linux-foundation.org: fix comment, clean up code layout, elimintate code duplication]
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: "Zhang, Yanmin" <yanmin.zhang@intel.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • tmpfs: make tmpfs scalable with percpu_counter for used blocks · 7e496299
      Tim Chen authored
      The current implementation of tmpfs is not scalable.  We found that
      stat_lock is contended by multiple threads when we need to get a new page,
      leading to useless spinning inside this spin lock.
      
      This patch makes use of the percpu_counter library to maintain a local
      count of used blocks to speed up getting and returning of pages.  The
      acquisition of stat_lock is then unnecessary for getting and returning
      blocks, improving the performance of tmpfs on systems with a large
      number of CPUs.  On a 4-socket, 32-core NHM-EX system, we saw an
      improvement of 270%.
      
      The implementation below has a slight chance of race between threads
      causing a slight overshoot of the maximum configured blocks.  However, any
      overshoot is small, and is bounded by the number of cpus.  This happens
      when the number of used blocks is slightly below the maximum configured
      blocks when a thread checks the used block count, and another thread
      allocates the last block before the current thread does.  This should not
      be a problem for tmpfs, as the overshoot is most likely to be a few blocks
      and bounded.  If a strict limit is really desired, then configure the
      max blocks to be the limit less the number of CPUs in the system.
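      Put more formally (notation added here, not part of the original
      changelog): with N CPUs and a configured block limit B, the worst case
      is that every CPU passes the below-the-limit check just before any of
      them allocates, so

          used \le B + N,

      and configuring B = B_{strict} - N keeps usage within a strict limit
      B_{strict}.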
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • tmpfs: add accurate compare function to percpu_counter library · 27f5e0f6
      Tim Chen authored
      Add percpu_counter_compare(), which allows for a quick but accurate
      comparison of a percpu_counter with a given value.
      
      A rough count is provided by the count field in the percpu_counter
      structure, without accounting for the values stored in the individual
      per-CPU counters.
      
      The actual count is the sum of count and the per-CPU counters.  However,
      the count field never differs from the actual value by more than
      batch*num_online_cpus().  So if count differs from the given value by
      more than this margin, we do not need the actual count for the
      comparison; this allows a quick comparison without summing up all the
      per-CPU counters.
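      A simplified user-space sketch of this comparison logic, added for
      illustration (an analogue, not the kernel implementation; the batch size
      and per-CPU deltas are modeled with plain arrays):

        #include <stdint.h>
        #include <stdio.h>

        #define NR_CPUS 4
        #define BATCH   32              /* analogue of percpu_counter_batch */

        struct pcpu_counter {
            int64_t count;              /* rough, globally visible total */
            int64_t cpu[NR_CPUS];       /* per-CPU deltas, each within +/- BATCH */
        };

        static int64_t exact_sum(const struct pcpu_counter *c)
        {
            int64_t sum = c->count;

            for (int i = 0; i < NR_CPUS; i++)
                sum += c->cpu[i];
            return sum;
        }

        /* Returns <0, 0 or >0: the rough count can differ from the exact one
         * by at most BATCH * NR_CPUS, so only comparisons inside that window
         * need the expensive exact sum. */
        static int pcpu_counter_compare(const struct pcpu_counter *c, int64_t rhs)
        {
            int64_t rough = c->count;
            int64_t slack = (int64_t)BATCH * NR_CPUS;

            if (rough - rhs > slack)
                return 1;
            if (rhs - rough > slack)
                return -1;

            int64_t exact = exact_sum(c);
            return (exact > rhs) - (exact < rhs);
        }

        int main(void)
        {
            struct pcpu_counter c = { .count = 1000, .cpu = { 5, -3, 7, 1 } };

            printf("%d %d\n", pcpu_counter_compare(&c, 10),    /* far apart: fast path */
                              pcpu_counter_compare(&c, 1009)); /* within slack: exact sum used */
            return 0;
        }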
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • gcc-4.6: mm: fix unused but set warnings · 4e60c86b
      Andi Kleen authored
      No real bugs, just some dead code and some fixups.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • gcc-4.6: pagemap: avoid unused-but-set variable · 627295e4
      Andi Kleen authored
      Avoid quite a lot of warnings in header files in gcc 4.6 -Wall builds.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • avr32: invoke oom-killer from page fault · 67a8a20f
      Nick Piggin authored
      As explained in commit 1c0fe6e3 ("mm: invoke oom-killer from page
      fault"), we want to call the architecture-independent oom killer when
      getting an unexplained OOM from handle_mm_fault, rather than simply
      killing current.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mempolicy: reduce stack size of migrate_pages() · 596d7cfa
      KOSAKI Motohiro authored
      migrate_pages() uses more than 500 bytes of stack.  Reduce it.
      
         mm/mempolicy.c: In function 'sys_migrate_pages':
         mm/mempolicy.c:1344: warning: the frame size of 528 bytes is larger than 512 bytes
      
      [akpm@linux-foundation.org: don't play with a might-be-NULL pointer]
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • topology: alternate fix for ia64 tiger_defconfig build breakage · 25106000
      Lee Schermerhorn authored
      Define stubs for the numa_*_id() generic percpu-related functions for
      non-NUMA configurations in <asm-generic/topology.h>, where the other
      non-NUMA stubs live.
      
      Fixes ia64 !NUMA build breakage -- e.g., tiger_defconfig.
      
      Back out the now-unneeded '#ifndef CONFIG_NUMA' guards from ia64
      smpboot.c.
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Tested-by: Tony Luck <tony.luck@intel.com>
      Acked-by: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mmzone.h: remove dead prototype · b645bd12
      Alexander Nevenchannyy authored
      get_zone_counts() was dropped from the kernel tree (see
      http://www.mail-archive.com/mm-commits@vger.kernel.org/msg07313.html)
      but its prototype remains.
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use for_each_online_cpu() in vmstat · 31f961a8
      Minchan Kim authored
      sum_vm_events() passes a cpumask to for_each_cpu(), but this is
      pointless since we have for_each_online_cpu().  Although the overhead is
      trivial, it is not good for coding consistency.
      
      Let's use for_each_online_cpu() instead of for_each_cpu() with a cpumask
      argument.
      Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Christoph Lameter <cl@linux-foundation.org>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: fold __out_of_memory into out_of_memory · 0aad4b31
      David Rientjes authored
      __out_of_memory() only has a single caller, so fold it into
      out_of_memory() and add a comment about locking for its call to
      oom_kill_process().
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: remove constraint argument from select_bad_process and __out_of_memory · f4420032
      David Rientjes authored
      select_bad_process() and __out_of_memory() do not need their enum
      oom_constraint arguments: it's possible to pass a NULL nodemask if
      constraint == CONSTRAINT_MEMORY_POLICY in the caller, out_of_memory().
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: rename try_set_zone_oom() to try_set_zonelist_oom() · ff321fea
      Minchan Kim authored
      We have been using the names try_set_zone_oom and clear_zonelist_oom.
      The role of these functions is to lock the zonelist to prevent parallel
      OOM handling.  clear_zonelist_oom makes sense, but try_set_zone_oom is
      rather awkward and does not match clear_zonelist_oom.
      
      Let's rename it to try_set_zonelist_oom.
      Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: remove unnecessary code and cleanup · b940fd70
      David Rientjes authored
      Remove the redundancy in __oom_kill_task() since:
      
       - init can never be passed to this function: it will never be PF_EXITING
         or selectable from select_bad_process(), and
      
       - it will never be passed a task from oom_kill_task() without an ->mm,
         and since we are unconcerned about detachment from exiting tasks,
         there is no reason to protect them against SIGKILL or give them
         access to memory reserves.
      
      Also moves the kernel log message to a higher level since the verbosity is
      not always emitted here; we need not print an error message if an exiting
      task is given a longer timeslice.
      
      __oom_kill_task() only has a single caller, so it can be merged into that
      function at the same time.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: move sysctl declarations to oom.h · 8e4228e1
      David Rientjes authored
      The three oom killer sysctl variables (sysctl_oom_dump_tasks,
      sysctl_oom_kill_allocating_task, and sysctl_panic_on_oom) are better
      declared in include/linux/oom.h rather than kernel/sysctl.c.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: remove special handling for pagefault ooms · e3658932
      David Rientjes authored
      It is possible to remove the special pagefault oom handler by simply oom
      locking all system zones and then calling directly into out_of_memory().
      
      All populated zones must have ZONE_OOM_LOCKED set, otherwise there is a
      parallel oom killing in progress that will lead to eventual memory freeing
      so it's not necessary to needlessly kill another task.  The context in
      which the pagefault is allocating memory is unknown to the oom killer, so
      this is done on a system-wide level.
      
      If a task has already been oom killed and hasn't fully exited yet, this
      will be a no-op since select_bad_process() recognizes tasks across the
      system with TIF_MEMDIE set.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Nick Piggin <npiggin@suse.de>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: extract panic helper function · 309ed882
      David Rientjes authored
      There are various points in the oom killer where the kernel must determine
      whether to panic or not.  It's better to extract this to a helper function
      to remove all the confusion as to its semantics.
      
      Also fix a call to dump_header() where tasklist_lock is not
      read-locked, as required.
      
      There's no functional change with this patch.
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: avoid oom killer for lowmem allocations · 03668b3c
      David Rientjes authored
      If memory has been depleted in lowmem zones even with the protection
      afforded to it by /proc/sys/vm/lowmem_reserve_ratio, it is unlikely that
      killing current users will help.  The memory is either reclaimable (or
      migratable) already, in which case we should not invoke the oom killer at
      all, or it is pinned by an application for I/O.  Killing such an
      application may leave the hardware in an unspecified state and there is no
      guarantee that it will be able to make a timely exit.
      
      Lowmem allocations are now failed in oom conditions when __GFP_NOFAIL is
      not used so that the task can perhaps recover or try again later.
      
      Previously, the heuristic provided some protection for those tasks with
      CAP_SYS_RAWIO, but this is no longer necessary since we will not be
      killing tasks for the purposes of ISA allocations.
      
      high_zoneidx is gfp_zone(gfp_flags), meaning that ZONE_NORMAL will be the
      default for all allocations that are not __GFP_DMA, __GFP_DMA32,
      __GFP_HIGHMEM, and __GFP_MOVABLE on kernels configured to support those
      flags.  Testing for high_zoneidx being less than ZONE_NORMAL will only
      return true for allocations that have either __GFP_DMA or __GFP_DMA32.
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: enable oom tasklist dump by default · ad915c43
      David Rientjes authored
      The oom killer tasklist dump, enabled with the oom_dump_tasks sysctl, is
      very helpful information in diagnosing why a user's task has been killed.
      It emits useful information such as each eligible thread's memory usage
      that can determine why the system is oom, so it should be enabled by
      default.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: select task from tasklist for mempolicy ooms · 6f48d0eb
      David Rientjes authored
      The oom killer presently kills current whenever there is no more memory
      free or reclaimable on its mempolicy's nodes.  There is no guarantee that
      current is a memory-hogging task or that killing it will free any
      substantial amount of memory, however.
      
      In such situations, it is better to scan the tasklist for tasks that are
      allowed to allocate on current's set of nodes and kill the task with the
      highest badness() score.  This ensures that the most memory-hogging task,
      or the one configured by the user with /proc/pid/oom_adj, is always
      selected in such scenarios.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: sacrifice child with highest badness score for parent · 5e9d834a
      David Rientjes authored
      When a task is chosen for oom kill, the oom killer first attempts to
      sacrifice a child not sharing its parent's memory instead.  Unfortunately,
      this often kills in a seemingly random fashion based on the ordering of
      the selected task's child list.  Additionally, it is not guaranteed at all
      to free a large amount of memory that we need to prevent additional oom
      killing in the very near future.
      
      Instead, we now only attempt to sacrifice the worst child not sharing its
      parent's memory, if one exists.  The worst child is indicated with the
      highest badness() score.  This serves two advantages: we kill a
      memory-hogging task more often, and we allow the configurable
      /proc/pid/oom_adj value to be considered as a factor in which child to
      kill.
      
      Reviewers may observe that the previous implementation would iterate
      through the children and attempt to kill each until one was successful and
      then the parent if none were found while the new code simply kills the
      most memory-hogging task or the parent.  Note that the only time
      oom_kill_task() fails, however, is when a child does not have an mm or has
      a /proc/pid/oom_adj of OOM_DISABLE.  badness() returns 0 for both cases,
      so the final oom_kill_task() will always succeed.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: filter tasks not sharing the same cpuset · 6cf86ac6
      David Rientjes authored
      Tasks that do not share the same set of allowed nodes with the task that
      triggered the oom should not be considered as candidates for oom kill.
      
      Tasks in other cpusets with a disjoint set of mems would be unfairly
      penalized otherwise because of oom conditions elsewhere; an extreme
      example could unfairly kill all other applications on the system if a
      single task in a user's cpuset sets itself to OOM_DISABLE and then uses
      more memory than allowed.
      
      Killing tasks outside of current's cpuset rarely would free memory for
      current anyway.  To use a sane heuristic, we must ensure that killing a
      task would likely free memory for current and avoid needlessly killing
      others at all costs just because their potential memory freeing is
      unknown.  It is better to kill current than another task needlessly.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: avoid sending exiting tasks a SIGKILL · 4358997a
      David Rientjes authored
      It's unnecessary to SIGKILL a task that is already PF_EXITING and can
      actually cause a NULL pointer dereference of the sighand if it has already
      been detached.  Instead, simply set TIF_MEMDIE so it has access to memory
      reserves and can quickly exit as the comment implies.
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: give current access to memory reserves if it has been killed · 7b98c2e4
      David Rientjes authored
      It's possible to livelock the page allocator if a thread has mm->mmap_sem
      and fails to make forward progress because the oom killer selects another
      thread sharing the same ->mm to kill that cannot exit until the semaphore
      is dropped.
      
      The oom killer will not kill multiple tasks at the same time; each oom
      killed task must exit before another task may be killed.  Thus, if one
      thread is holding mm->mmap_sem and cannot allocate memory, all threads
      sharing the same ->mm are blocked from exiting as well.  In the oom kill
      case, that means the thread holding mm->mmap_sem will never free
      additional memory since it cannot get access to memory reserves and the
      thread that depends on it with access to memory reserves cannot exit
      because it cannot acquire the semaphore.  Thus, the page allocator
      livelocks.
      
      When the oom killer is called and current happens to have a pending
      SIGKILL, this patch automatically gives it access to memory reserves and
      returns.  Upon returning to the page allocator, its allocation will
      hopefully succeed so it can quickly exit and free its memory.  If not, the
      page allocator will fail the allocation if it is not __GFP_NOFAIL.
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>