20 May, 2016 (40 commits)
    • mm, page_alloc: reduce branches in zone_statistics · b9f00e14
      Mel Gorman authored
      zone_statistics has more branches than it really needs in order to
      take an unlikely GFP flag into account.  Reduce their number and
      annotate the unlikely flag.
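
      A minimal sketch of the reduced-branch shape, assuming the 4.6-era
      zone_statistics() signature and the __GFP_OTHER_NODE flag; the exact
      diff may differ:

        static inline void zone_statistics(struct zone *preferred_zone,
                                           struct zone *z, gfp_t flags)
        {
                int local_nid = numa_node_id();
                enum zone_stat_item local_stat = NUMA_LOCAL;

                /* The remote-node stat flag is rare: branch on it once */
                if (unlikely(flags & __GFP_OTHER_NODE)) {
                        local_stat = NUMA_OTHER;
                        local_nid = preferred_zone->node;
                }

                if (z->node == local_nid) {
                        __inc_zone_state(z, NUMA_HIT);
                        __inc_zone_state(z, local_stat);
                } else {
                        __inc_zone_state(z, NUMA_MISS);
                        __inc_zone_state(z, NUMA_FOREIGN);
                }
        }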
      
      The performance difference on a page allocator microbenchmark is;
      
                                                   4.6.0-rc2                  4.6.0-rc2
                                            nocompound-v1r10           statbranch-v1r10
        Min      alloc-odr0-1               417.00 (  0.00%)           419.00 ( -0.48%)
        Min      alloc-odr0-2               308.00 (  0.00%)           305.00 (  0.97%)
        Min      alloc-odr0-4               253.00 (  0.00%)           250.00 (  1.19%)
        Min      alloc-odr0-8               221.00 (  0.00%)           219.00 (  0.90%)
        Min      alloc-odr0-16              205.00 (  0.00%)           203.00 (  0.98%)
        Min      alloc-odr0-32              199.00 (  0.00%)           195.00 (  2.01%)
        Min      alloc-odr0-64              193.00 (  0.00%)           191.00 (  1.04%)
        Min      alloc-odr0-128             191.00 (  0.00%)           189.00 (  1.05%)
        Min      alloc-odr0-256             200.00 (  0.00%)           198.00 (  1.00%)
        Min      alloc-odr0-512             212.00 (  0.00%)           210.00 (  0.94%)
        Min      alloc-odr0-1024            219.00 (  0.00%)           216.00 (  1.37%)
        Min      alloc-odr0-2048            225.00 (  0.00%)           221.00 (  1.78%)
        Min      alloc-odr0-4096            231.00 (  0.00%)           227.00 (  1.73%)
        Min      alloc-odr0-8192            234.00 (  0.00%)           232.00 (  0.85%)
        Min      alloc-odr0-16384           234.00 (  0.00%)           232.00 (  0.85%)
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, page_alloc: use new PageAnonHead helper in the free page fast path · 17514574
      Mel Gorman authored
      The PageAnon check always checks for compound_head but this is a
      relatively expensive check if the caller already knows the page is a
      head page.  This patch creates a helper and uses it in the page free
      path which only operates on head pages.
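
      A sketch of the helper, assuming the usual PAGE_MAPPING_ANON encoding
      in page->mapping; PageAnon() is then compound_head() followed by this
      test:

        static __always_inline int PageAnonHead(struct page *page)
        {
                /* No compound_head() lookup: caller guarantees a head page */
                return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
        }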
      
      With this patch and "Only check PageCompound for high-order pages", the
      performance difference on a page allocator microbenchmark is;
      
                                                   4.6.0-rc2                  4.6.0-rc2
                                                     vanilla           nocompound-v1r20
        Min      alloc-odr0-1               425.00 (  0.00%)           417.00 (  1.88%)
        Min      alloc-odr0-2               313.00 (  0.00%)           308.00 (  1.60%)
        Min      alloc-odr0-4               257.00 (  0.00%)           253.00 (  1.56%)
        Min      alloc-odr0-8               224.00 (  0.00%)           221.00 (  1.34%)
        Min      alloc-odr0-16              208.00 (  0.00%)           205.00 (  1.44%)
        Min      alloc-odr0-32              199.00 (  0.00%)           199.00 (  0.00%)
        Min      alloc-odr0-64              195.00 (  0.00%)           193.00 (  1.03%)
        Min      alloc-odr0-128             192.00 (  0.00%)           191.00 (  0.52%)
        Min      alloc-odr0-256             204.00 (  0.00%)           200.00 (  1.96%)
        Min      alloc-odr0-512             213.00 (  0.00%)           212.00 (  0.47%)
        Min      alloc-odr0-1024            219.00 (  0.00%)           219.00 (  0.00%)
        Min      alloc-odr0-2048            225.00 (  0.00%)           225.00 (  0.00%)
        Min      alloc-odr0-4096            230.00 (  0.00%)           231.00 ( -0.43%)
        Min      alloc-odr0-8192            235.00 (  0.00%)           234.00 (  0.43%)
        Min      alloc-odr0-16384           235.00 (  0.00%)           234.00 (  0.43%)
        Min      free-odr0-1                215.00 (  0.00%)           191.00 ( 11.16%)
        Min      free-odr0-2                152.00 (  0.00%)           136.00 ( 10.53%)
        Min      free-odr0-4                119.00 (  0.00%)           107.00 ( 10.08%)
        Min      free-odr0-8                106.00 (  0.00%)            96.00 (  9.43%)
        Min      free-odr0-16                97.00 (  0.00%)            87.00 ( 10.31%)
        Min      free-odr0-32                91.00 (  0.00%)            83.00 (  8.79%)
        Min      free-odr0-64                89.00 (  0.00%)            81.00 (  8.99%)
        Min      free-odr0-128               88.00 (  0.00%)            80.00 (  9.09%)
        Min      free-odr0-256              106.00 (  0.00%)            95.00 ( 10.38%)
        Min      free-odr0-512              116.00 (  0.00%)           111.00 (  4.31%)
        Min      free-odr0-1024             125.00 (  0.00%)           118.00 (  5.60%)
        Min      free-odr0-2048             133.00 (  0.00%)           126.00 (  5.26%)
        Min      free-odr0-4096             136.00 (  0.00%)           130.00 (  4.41%)
        Min      free-odr0-8192             138.00 (  0.00%)           130.00 (  5.80%)
        Min      free-odr0-16384            137.00 (  0.00%)           130.00 (  5.11%)
      
      There is a sizable boost to the free allocator performance.  While
      there is an apparent boost on the allocation side, it's likely a
      coincidence or due to the patches slightly reducing cache footprint.
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, page_alloc: only check PageCompound for high-order pages · d61f8590
      Mel Gorman authored
      Another year, another round of page allocator optimisations, focusing
      this time on the alloc and free fast paths.  This should be of help to
      workloads that are allocator-intensive from kernel space, where the
      cost of zeroing is not necessarily incurred.
      
      The series is motivated by two observations.  First, page alloc
      microbenchmarks on multiple machines regressed between 3.12.44 and
      4.4.  Second, there were discussions before LSF/MM about possibly
      adding another page allocator, which is potentially hazardous; a patch
      series improving performance is better than whining.
      
      After the series is applied, there are still hazards.  In the free
      paths, the debugging checks and page zone/pageblock lookups dominate,
      but there was no obvious solution to that.  In the alloc path, the
      major contributors are dealing with zonelists, new page preparation,
      the fair zone allocation and numerous statistics updates.  The fair
      zone allocator is removed by the per-node LRU series if that gets
      merged, so it's not a major concern at the moment.
      
      On normal userspace benchmarks there is little impact, as the zeroing
      cost is significant, but it's visible:
      
        aim9
                                       4.6.0-rc3             4.6.0-rc3
                                         vanilla         deferalloc-v3
        Min      page_test   828693.33 (  0.00%)   887060.00 (  7.04%)
        Min      brk_test   4847266.67 (  0.00%)  4966266.67 (  2.45%)
        Min      exec_test     1271.00 (  0.00%)     1275.67 (  0.37%)
        Min      fork_test    12371.75 (  0.00%)    12380.00 (  0.07%)
      
      The overall impact on a page allocator microbenchmark for a range of orders
      and number of pages allocated in a batch is
      
                                                  4.6.0-rc3                  4.6.0-rc3
                                                     vanilla            deferalloc-v3r7
        Min      alloc-odr0-1               428.00 (  0.00%)           316.00 ( 26.17%)
        Min      alloc-odr0-2               314.00 (  0.00%)           231.00 ( 26.43%)
        Min      alloc-odr0-4               256.00 (  0.00%)           192.00 ( 25.00%)
        Min      alloc-odr0-8               222.00 (  0.00%)           166.00 ( 25.23%)
        Min      alloc-odr0-16              207.00 (  0.00%)           154.00 ( 25.60%)
        Min      alloc-odr0-32              197.00 (  0.00%)           148.00 ( 24.87%)
        Min      alloc-odr0-64              193.00 (  0.00%)           144.00 ( 25.39%)
        Min      alloc-odr0-128             191.00 (  0.00%)           143.00 ( 25.13%)
        Min      alloc-odr0-256             203.00 (  0.00%)           153.00 ( 24.63%)
        Min      alloc-odr0-512             212.00 (  0.00%)           165.00 ( 22.17%)
        Min      alloc-odr0-1024            221.00 (  0.00%)           172.00 ( 22.17%)
        Min      alloc-odr0-2048            225.00 (  0.00%)           179.00 ( 20.44%)
        Min      alloc-odr0-4096            232.00 (  0.00%)           185.00 ( 20.26%)
        Min      alloc-odr0-8192            235.00 (  0.00%)           187.00 ( 20.43%)
        Min      alloc-odr0-16384           236.00 (  0.00%)           188.00 ( 20.34%)
        Min      alloc-odr1-1               519.00 (  0.00%)           450.00 ( 13.29%)
        Min      alloc-odr1-2               391.00 (  0.00%)           336.00 ( 14.07%)
        Min      alloc-odr1-4               313.00 (  0.00%)           268.00 ( 14.38%)
        Min      alloc-odr1-8               277.00 (  0.00%)           235.00 ( 15.16%)
        Min      alloc-odr1-16              256.00 (  0.00%)           218.00 ( 14.84%)
        Min      alloc-odr1-32              252.00 (  0.00%)           212.00 ( 15.87%)
        Min      alloc-odr1-64              244.00 (  0.00%)           206.00 ( 15.57%)
        Min      alloc-odr1-128             244.00 (  0.00%)           207.00 ( 15.16%)
        Min      alloc-odr1-256             243.00 (  0.00%)           207.00 ( 14.81%)
        Min      alloc-odr1-512             245.00 (  0.00%)           209.00 ( 14.69%)
        Min      alloc-odr1-1024            248.00 (  0.00%)           214.00 ( 13.71%)
        Min      alloc-odr1-2048            253.00 (  0.00%)           220.00 ( 13.04%)
        Min      alloc-odr1-4096            258.00 (  0.00%)           224.00 ( 13.18%)
        Min      alloc-odr1-8192            261.00 (  0.00%)           229.00 ( 12.26%)
        Min      alloc-odr2-1               560.00 (  0.00%)           753.00 (-34.46%)
        Min      alloc-odr2-2               424.00 (  0.00%)           351.00 ( 17.22%)
        Min      alloc-odr2-4               339.00 (  0.00%)           393.00 (-15.93%)
        Min      alloc-odr2-8               298.00 (  0.00%)           246.00 ( 17.45%)
        Min      alloc-odr2-16              276.00 (  0.00%)           227.00 ( 17.75%)
        Min      alloc-odr2-32              271.00 (  0.00%)           221.00 ( 18.45%)
        Min      alloc-odr2-64              264.00 (  0.00%)           217.00 ( 17.80%)
        Min      alloc-odr2-128             264.00 (  0.00%)           217.00 ( 17.80%)
        Min      alloc-odr2-256             264.00 (  0.00%)           218.00 ( 17.42%)
        Min      alloc-odr2-512             269.00 (  0.00%)           223.00 ( 17.10%)
        Min      alloc-odr2-1024            279.00 (  0.00%)           230.00 ( 17.56%)
        Min      alloc-odr2-2048            283.00 (  0.00%)           235.00 ( 16.96%)
        Min      alloc-odr2-4096            285.00 (  0.00%)           239.00 ( 16.14%)
        Min      alloc-odr3-1               629.00 (  0.00%)           505.00 ( 19.71%)
        Min      alloc-odr3-2               472.00 (  0.00%)           374.00 ( 20.76%)
        Min      alloc-odr3-4               383.00 (  0.00%)           301.00 ( 21.41%)
        Min      alloc-odr3-8               341.00 (  0.00%)           266.00 ( 21.99%)
        Min      alloc-odr3-16              316.00 (  0.00%)           248.00 ( 21.52%)
        Min      alloc-odr3-32              308.00 (  0.00%)           241.00 ( 21.75%)
        Min      alloc-odr3-64              305.00 (  0.00%)           241.00 ( 20.98%)
        Min      alloc-odr3-128             308.00 (  0.00%)           244.00 ( 20.78%)
        Min      alloc-odr3-256             317.00 (  0.00%)           249.00 ( 21.45%)
        Min      alloc-odr3-512             327.00 (  0.00%)           256.00 ( 21.71%)
        Min      alloc-odr3-1024            331.00 (  0.00%)           261.00 ( 21.15%)
        Min      alloc-odr3-2048            333.00 (  0.00%)           266.00 ( 20.12%)
        Min      alloc-odr4-1               767.00 (  0.00%)           572.00 ( 25.42%)
        Min      alloc-odr4-2               578.00 (  0.00%)           429.00 ( 25.78%)
        Min      alloc-odr4-4               474.00 (  0.00%)           346.00 ( 27.00%)
        Min      alloc-odr4-8               422.00 (  0.00%)           310.00 ( 26.54%)
        Min      alloc-odr4-16              399.00 (  0.00%)           295.00 ( 26.07%)
        Min      alloc-odr4-32              392.00 (  0.00%)           293.00 ( 25.26%)
        Min      alloc-odr4-64              394.00 (  0.00%)           293.00 ( 25.63%)
        Min      alloc-odr4-128             405.00 (  0.00%)           305.00 ( 24.69%)
        Min      alloc-odr4-256             417.00 (  0.00%)           319.00 ( 23.50%)
        Min      alloc-odr4-512             425.00 (  0.00%)           326.00 ( 23.29%)
        Min      alloc-odr4-1024            426.00 (  0.00%)           329.00 ( 22.77%)
        Min      free-odr0-1                216.00 (  0.00%)           178.00 ( 17.59%)
        Min      free-odr0-2                152.00 (  0.00%)           125.00 ( 17.76%)
        Min      free-odr0-4                120.00 (  0.00%)            99.00 ( 17.50%)
        Min      free-odr0-8                106.00 (  0.00%)            85.00 ( 19.81%)
        Min      free-odr0-16                97.00 (  0.00%)            80.00 ( 17.53%)
        Min      free-odr0-32                92.00 (  0.00%)            76.00 ( 17.39%)
        Min      free-odr0-64                89.00 (  0.00%)            74.00 ( 16.85%)
        Min      free-odr0-128               89.00 (  0.00%)            73.00 ( 17.98%)
        Min      free-odr0-256              107.00 (  0.00%)            90.00 ( 15.89%)
        Min      free-odr0-512              117.00 (  0.00%)           108.00 (  7.69%)
        Min      free-odr0-1024             125.00 (  0.00%)           118.00 (  5.60%)
        Min      free-odr0-2048             132.00 (  0.00%)           125.00 (  5.30%)
        Min      free-odr0-4096             135.00 (  0.00%)           130.00 (  3.70%)
        Min      free-odr0-8192             137.00 (  0.00%)           130.00 (  5.11%)
        Min      free-odr0-16384            137.00 (  0.00%)           131.00 (  4.38%)
        Min      free-odr1-1                318.00 (  0.00%)           289.00 (  9.12%)
        Min      free-odr1-2                228.00 (  0.00%)           207.00 (  9.21%)
        Min      free-odr1-4                182.00 (  0.00%)           165.00 (  9.34%)
        Min      free-odr1-8                163.00 (  0.00%)           146.00 ( 10.43%)
        Min      free-odr1-16               151.00 (  0.00%)           135.00 ( 10.60%)
        Min      free-odr1-32               146.00 (  0.00%)           129.00 ( 11.64%)
        Min      free-odr1-64               145.00 (  0.00%)           130.00 ( 10.34%)
        Min      free-odr1-128              148.00 (  0.00%)           134.00 (  9.46%)
        Min      free-odr1-256              148.00 (  0.00%)           137.00 (  7.43%)
        Min      free-odr1-512              151.00 (  0.00%)           140.00 (  7.28%)
        Min      free-odr1-1024             154.00 (  0.00%)           143.00 (  7.14%)
        Min      free-odr1-2048             156.00 (  0.00%)           144.00 (  7.69%)
        Min      free-odr1-4096             156.00 (  0.00%)           142.00 (  8.97%)
        Min      free-odr1-8192             156.00 (  0.00%)           140.00 ( 10.26%)
        Min      free-odr2-1                361.00 (  0.00%)           457.00 (-26.59%)
        Min      free-odr2-2                258.00 (  0.00%)           224.00 ( 13.18%)
        Min      free-odr2-4                208.00 (  0.00%)           223.00 ( -7.21%)
        Min      free-odr2-8                185.00 (  0.00%)           160.00 ( 13.51%)
        Min      free-odr2-16               173.00 (  0.00%)           149.00 ( 13.87%)
        Min      free-odr2-32               166.00 (  0.00%)           145.00 ( 12.65%)
        Min      free-odr2-64               166.00 (  0.00%)           146.00 ( 12.05%)
        Min      free-odr2-128              169.00 (  0.00%)           148.00 ( 12.43%)
        Min      free-odr2-256              170.00 (  0.00%)           152.00 ( 10.59%)
        Min      free-odr2-512              177.00 (  0.00%)           156.00 ( 11.86%)
        Min      free-odr2-1024             182.00 (  0.00%)           162.00 ( 10.99%)
        Min      free-odr2-2048             181.00 (  0.00%)           160.00 ( 11.60%)
        Min      free-odr2-4096             180.00 (  0.00%)           159.00 ( 11.67%)
        Min      free-odr3-1                431.00 (  0.00%)           367.00 ( 14.85%)
        Min      free-odr3-2                306.00 (  0.00%)           259.00 ( 15.36%)
        Min      free-odr3-4                249.00 (  0.00%)           208.00 ( 16.47%)
        Min      free-odr3-8                224.00 (  0.00%)           186.00 ( 16.96%)
        Min      free-odr3-16               208.00 (  0.00%)           176.00 ( 15.38%)
        Min      free-odr3-32               206.00 (  0.00%)           174.00 ( 15.53%)
        Min      free-odr3-64               210.00 (  0.00%)           178.00 ( 15.24%)
        Min      free-odr3-128              215.00 (  0.00%)           182.00 ( 15.35%)
        Min      free-odr3-256              224.00 (  0.00%)           189.00 ( 15.62%)
        Min      free-odr3-512              232.00 (  0.00%)           195.00 ( 15.95%)
        Min      free-odr3-1024             230.00 (  0.00%)           195.00 ( 15.22%)
        Min      free-odr3-2048             229.00 (  0.00%)           193.00 ( 15.72%)
        Min      free-odr4-1                561.00 (  0.00%)           439.00 ( 21.75%)
        Min      free-odr4-2                418.00 (  0.00%)           318.00 ( 23.92%)
        Min      free-odr4-4                339.00 (  0.00%)           269.00 ( 20.65%)
        Min      free-odr4-8                299.00 (  0.00%)           239.00 ( 20.07%)
        Min      free-odr4-16               289.00 (  0.00%)           234.00 ( 19.03%)
        Min      free-odr4-32               291.00 (  0.00%)           235.00 ( 19.24%)
        Min      free-odr4-64               298.00 (  0.00%)           238.00 ( 20.13%)
        Min      free-odr4-128              308.00 (  0.00%)           251.00 ( 18.51%)
        Min      free-odr4-256              321.00 (  0.00%)           267.00 ( 16.82%)
        Min      free-odr4-512              327.00 (  0.00%)           269.00 ( 17.74%)
        Min      free-odr4-1024             326.00 (  0.00%)           271.00 ( 16.87%)
        Min      total-odr0-1               644.00 (  0.00%)           494.00 ( 23.29%)
        Min      total-odr0-2               466.00 (  0.00%)           356.00 ( 23.61%)
        Min      total-odr0-4               376.00 (  0.00%)           291.00 ( 22.61%)
        Min      total-odr0-8               328.00 (  0.00%)           251.00 ( 23.48%)
        Min      total-odr0-16              304.00 (  0.00%)           234.00 ( 23.03%)
        Min      total-odr0-32              289.00 (  0.00%)           224.00 ( 22.49%)
        Min      total-odr0-64              282.00 (  0.00%)           218.00 ( 22.70%)
        Min      total-odr0-128             280.00 (  0.00%)           216.00 ( 22.86%)
        Min      total-odr0-256             310.00 (  0.00%)           243.00 ( 21.61%)
        Min      total-odr0-512             329.00 (  0.00%)           273.00 ( 17.02%)
        Min      total-odr0-1024            346.00 (  0.00%)           290.00 ( 16.18%)
        Min      total-odr0-2048            357.00 (  0.00%)           304.00 ( 14.85%)
        Min      total-odr0-4096            367.00 (  0.00%)           315.00 ( 14.17%)
        Min      total-odr0-8192            372.00 (  0.00%)           317.00 ( 14.78%)
        Min      total-odr0-16384           373.00 (  0.00%)           319.00 ( 14.48%)
        Min      total-odr1-1               838.00 (  0.00%)           739.00 ( 11.81%)
        Min      total-odr1-2               619.00 (  0.00%)           543.00 ( 12.28%)
        Min      total-odr1-4               495.00 (  0.00%)           433.00 ( 12.53%)
        Min      total-odr1-8               440.00 (  0.00%)           382.00 ( 13.18%)
        Min      total-odr1-16              407.00 (  0.00%)           353.00 ( 13.27%)
        Min      total-odr1-32              398.00 (  0.00%)           341.00 ( 14.32%)
        Min      total-odr1-64              389.00 (  0.00%)           336.00 ( 13.62%)
        Min      total-odr1-128             392.00 (  0.00%)           341.00 ( 13.01%)
        Min      total-odr1-256             391.00 (  0.00%)           344.00 ( 12.02%)
        Min      total-odr1-512             396.00 (  0.00%)           349.00 ( 11.87%)
        Min      total-odr1-1024            402.00 (  0.00%)           357.00 ( 11.19%)
        Min      total-odr1-2048            409.00 (  0.00%)           364.00 ( 11.00%)
        Min      total-odr1-4096            414.00 (  0.00%)           366.00 ( 11.59%)
        Min      total-odr1-8192            417.00 (  0.00%)           369.00 ( 11.51%)
        Min      total-odr2-1               921.00 (  0.00%)          1210.00 (-31.38%)
        Min      total-odr2-2               682.00 (  0.00%)           576.00 ( 15.54%)
        Min      total-odr2-4               547.00 (  0.00%)           616.00 (-12.61%)
        Min      total-odr2-8               483.00 (  0.00%)           406.00 ( 15.94%)
        Min      total-odr2-16              449.00 (  0.00%)           376.00 ( 16.26%)
        Min      total-odr2-32              437.00 (  0.00%)           366.00 ( 16.25%)
        Min      total-odr2-64              431.00 (  0.00%)           363.00 ( 15.78%)
        Min      total-odr2-128             433.00 (  0.00%)           365.00 ( 15.70%)
        Min      total-odr2-256             434.00 (  0.00%)           371.00 ( 14.52%)
        Min      total-odr2-512             446.00 (  0.00%)           379.00 ( 15.02%)
        Min      total-odr2-1024            461.00 (  0.00%)           392.00 ( 14.97%)
        Min      total-odr2-2048            464.00 (  0.00%)           395.00 ( 14.87%)
        Min      total-odr2-4096            465.00 (  0.00%)           398.00 ( 14.41%)
        Min      total-odr3-1              1060.00 (  0.00%)           872.00 ( 17.74%)
        Min      total-odr3-2               778.00 (  0.00%)           633.00 ( 18.64%)
        Min      total-odr3-4               632.00 (  0.00%)           510.00 ( 19.30%)
        Min      total-odr3-8               565.00 (  0.00%)           452.00 ( 20.00%)
        Min      total-odr3-16              524.00 (  0.00%)           424.00 ( 19.08%)
        Min      total-odr3-32              514.00 (  0.00%)           415.00 ( 19.26%)
        Min      total-odr3-64              515.00 (  0.00%)           419.00 ( 18.64%)
        Min      total-odr3-128             523.00 (  0.00%)           426.00 ( 18.55%)
        Min      total-odr3-256             541.00 (  0.00%)           438.00 ( 19.04%)
        Min      total-odr3-512             559.00 (  0.00%)           451.00 ( 19.32%)
        Min      total-odr3-1024            561.00 (  0.00%)           456.00 ( 18.72%)
        Min      total-odr3-2048            562.00 (  0.00%)           459.00 ( 18.33%)
        Min      total-odr4-1              1328.00 (  0.00%)          1011.00 ( 23.87%)
        Min      total-odr4-2               997.00 (  0.00%)           747.00 ( 25.08%)
        Min      total-odr4-4               813.00 (  0.00%)           615.00 ( 24.35%)
        Min      total-odr4-8               721.00 (  0.00%)           550.00 ( 23.72%)
        Min      total-odr4-16              689.00 (  0.00%)           529.00 ( 23.22%)
        Min      total-odr4-32              683.00 (  0.00%)           528.00 ( 22.69%)
        Min      total-odr4-64              692.00 (  0.00%)           531.00 ( 23.27%)
        Min      total-odr4-128             713.00 (  0.00%)           556.00 ( 22.02%)
        Min      total-odr4-256             738.00 (  0.00%)           586.00 ( 20.60%)
        Min      total-odr4-512             753.00 (  0.00%)           595.00 ( 20.98%)
        Min      total-odr4-1024            752.00 (  0.00%)           600.00 ( 20.21%)
      
      This patch (of 27):
      
      Order-0 pages by definition cannot be compound, so avoid the check in
      the fast path for those pages.
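
      An illustrative sketch of the fast path; only the guarded check is
      the point, and the rest of the function is elided:

        static __always_inline bool free_pages_prepare(struct page *page,
                                                       unsigned int order)
        {
                /*
                 * Order-0 pages cannot be compound, and order is almost
                 * always 0 here, hence the unlikely() annotation.
                 */
                bool compound = unlikely(order) && PageCompound(page);

                /* ... per-page debug checks and teardown follow ... */
                return true;
        }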
      
      [akpm@linux-foundation.org: use unlikely(order) in free_pages_prepare(), per Vlastimil]
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, oom_reaper: clear TIF_MEMDIE for all tasks queued for oom_reaper · 449d777d
      Michal Hocko authored
      Right now the oom reaper will clear TIF_MEMDIE only for tasks which were
      successfully reaped.  This is the safest option because we know that
      such an oom victim would only block forward progress of the oom killer
      without a good reason because it is highly unlikely it would release
      much more memory.  Basically most of its memory has been already torn
      down.
      
      We can relax this assumption to catch more corner cases though.
      
      The first obvious one is when the oom victim clears its mm and gets
      stuck later on.  The oom_reaper would back off when find_lock_task_mm
      returns NULL.  We can safely try to clear TIF_MEMDIE in this case,
      because such a task would be ignored by the oom killer anyway.  Most
      of the time, the flag will already have been cleared by then.
      
      The less obvious one is when the oom reaper fails due to mmap_sem
      contention.  Even if we clear TIF_MEMDIE for this task, it is not very
      likely that we would select another task too easily, because we
      haven't reaped the last victim and so it would still be the #1
      candidate.  There is a rare race possible when the current victim
      terminates before the next select_bad_process, but considering that
      oom_reap_task has retried several times before giving up, this seems
      like a borderline concern.
      
      After this patch we should have a guarantee that the OOM killer will
      not be blocked for an unbounded amount of time in most cases.
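
      A hedged sketch of the resulting flow, with illustrative helper names
      such as __oom_reap_task(); the point is that TIF_MEMDIE is now cleared
      on the failure paths too:

        static void oom_reap_task(struct task_struct *tsk)
        {
                int attempts = 0;

                /* Retry a few times in case mmap_sem is contended */
                while (attempts++ < MAX_OOM_REAP_RETRIES &&
                       !__oom_reap_task(tsk))
                        schedule_timeout_idle(HZ/10);

                /*
                 * Clear TIF_MEMDIE even if the reap failed: the victim is
                 * either past the point of releasing memory or unreapable,
                 * and should not block the OOM killer indefinitely.
                 */
                exit_oom_victim(tsk);

                /* Drop the reference taken by wake_oom_reaper */
                put_task_struct(tsk);
        }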
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Cc: Raushaniya Maksudova <rmaksudova@parallels.com>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Daniel Vetter <daniel.vetter@intel.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom, oom_reaper: try to reap tasks which skip regular OOM killer path · 3ef22dff
      Michal Hocko authored
      If the current task is already killed or PF_EXITING, or a selected
      task is PF_EXITING, then the oom killer is suppressed, and so is the
      oom reaper.  This patch adds try_oom_reaper, which checks the given
      task and queues it for the oom reaper if that is safe, meaning that
      the task doesn't share its mm with an alive process.
      
      This might help to release the memory pressure while the task tries to
      exit.
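
      A sketch of the safety check described above (helper names are
      illustrative): do not queue an mm that a live, unkilled process still
      shares.

        static void try_oom_reaper(struct task_struct *tsk)
        {
                struct mm_struct *mm = tsk->mm;
                struct task_struct *p;

                if (!mm)
                        return;

                rcu_read_lock();
                for_each_process(p) {
                        if (!process_shares_mm(p, mm) ||
                            same_thread_group(p, tsk))
                                continue;
                        if (!fatal_signal_pending(p)) {
                                /* mm shared with an alive task: unsafe */
                                rcu_read_unlock();
                                return;
                        }
                }
                rcu_read_unlock();

                wake_oom_reaper(tsk);
        }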
      
      [akpm@linux-foundation.org: fix nommu build]
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Cc: Raushaniya Maksudova <rmaksudova@parallels.com>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Daniel Vetter <daniel.vetter@intel.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, oom: move GFP_NOFS check to out_of_memory · 3da88fb3
      Michal Hocko authored
      __alloc_pages_may_oom is the central place to decide when the
      out_of_memory should be invoked.  This is a good approach for most
      checks there because they are page allocator specific and the allocation
      fails right after for all of them.
      
      The notable exception is the GFP_NOFS context, which fakes
      did_some_progress and keeps the page allocator looping even though
      there couldn't have been any progress from the OOM killer.  This patch
      doesn't change that behavior, because we are not ready to allow those
      allocation requests to fail yet (and maybe we will face the reality
      that we will never manage to safely fail these requests).  Instead,
      the __GFP_FS check is moved down to out_of_memory, preventing OOM
      victim selection there.  There are two reasons for that:
      
      	- OOM notifiers might release some memory even from this context
      	  as none of the registered notifiers seems to be FS related
      	- this might help a dying thread to get an access to memory
                reserves and move on which will make the behavior more
                consistent with the case when the task gets killed from a
                different context.
      
      Keep a comment in __alloc_pages_may_oom to make sure we do not forget
      how GFP_NOFS is special and that we really want to do something about
      it.
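
      Roughly, the relocated check sits after the notifier chain, so the
      notifiers still get a chance to free memory (a sketch, following the
      oom_control naming of that era):

        bool out_of_memory(struct oom_control *oc)
        {
                unsigned long freed = 0;

                blocking_notifier_call_chain(&oom_notify_list, 0, &freed);
                if (freed > 0)
                        return true;    /* a notifier released some memory */

                /*
                 * The OOM killer does not compensate for IO-less reclaim:
                 * do not select a victim for a !__GFP_FS allocation.
                 */
                if (oc->gfp_mask && !(oc->gfp_mask & __GFP_FS))
                        return true;

                /* ... victim selection and killing continue here ... */
                return true;
        }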
      
      Note to the current oom_notifier users:
      
      The observable difference for you is that oom notifiers cannot depend
      on any fs locks, because we could deadlock.  Not that this would be
      allowed today: it would just lock up the machine in most cases, ruling
      out the OOM killer along the way.  Another difference is that
      callbacks might be invoked sooner now, because GFP_NOFS is a weaker
      reclaim context and so there could be reclaimable memory which is just
      not reachable yet.  That would require GFP_NOFS-only loads, which are
      really rare; more importantly, the observable result would be the
      dropping of reconstructible objects and a potential performance drop,
      which is not such a big deal when we are struggling to fulfill other
      important allocation requests.
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Cc: Raushaniya Maksudova <rmaksudova@parallels.com>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Daniel Vetter <daniel.vetter@intel.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memory_hotplug: introduce memhp_default_state= command line parameter · 86dd995d
      Vitaly Kuznetsov authored
      CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE specifies the default value for
      the memory hotplug onlining policy.  Add a command line parameter to
      make it possible to override the default.  It may come in handy for
      debugging and testing purposes.
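
      The parsing can be as small as a __setup() hook; a sketch assuming the
      memhp_auto_online policy variable from the previous patch:

        static int __init setup_memhp_default_state(char *str)
        {
                if (!strcmp(str, "online"))
                        memhp_auto_online = true;
                else if (!strcmp(str, "offline"))
                        memhp_auto_online = false;

                return 1;
        }
        __setup("memhp_default_state=", setup_memhp_default_state);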
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Lennart Poettering <lennart@poettering.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memory_hotplug: introduce CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE · 8604d9e5
      Vitaly Kuznetsov authored
      This patchset continues the work I started with commit 31bc3858
      ("memory-hotplug: add automatic onlining policy for the newly added
      memory").
      
      Initially I was going to stop there and bring the policy setting logic
      to userspace.  I ran into two issues along the way:
      
       1) It is possible to have memory hotplugged at boot (e.g.  with QEMU).
          These blocks stay offlined if we turn the onlining policy on by
          userspace.
      
       2) My attempt to bring this policy setting to systemd failed; the
          systemd maintainers suggested changing the default in the kernel
          or ...  using tmpfiles.d to alter the policy (which looks like a
          hack to me):
              https://github.com/systemd/systemd/pull/2938
      
      Here I suggest adding a config option to set the default value for the
      policy, and a kernel command line parameter to override it.
      
      This patch (of 2):
      
      Introduce a config option to set the default value for the memory
      hotplug onlining policy (/sys/devices/system/memory/auto_online_blocks).
      The reasons one would want to turn this option on are to have early
      onlining for hotpluggable memory available at boot and to avoid
      requiring any userspace action to make memory hotplug work.
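
      A sketch of how the Kconfig symbol can seed the runtime policy
      (memhp_auto_online is the variable added by commit 31bc3858; the
      wiring shown is illustrative):

        #ifdef CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE
        bool memhp_auto_online = true;
        #else
        bool memhp_auto_online;
        #endif
        EXPORT_SYMBOL_GPL(memhp_auto_online);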
      
      [akpm@linux-foundation.org: tweak Kconfig text]
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Lennart Poettering <lennart@poettering.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arch: fix has_transparent_hugepage() · fd8cfd30
      Hugh Dickins authored
      I've just discovered that the useful-sounding has_transparent_hugepage()
      is actually an architecture-dependent minefield: on some arches it only
      builds if CONFIG_TRANSPARENT_HUGEPAGE=y, on others it's also there when
      not, but on some of those (arm and arm64) it then gives the wrong
      answer; and on mips alone it's marked __init, which would crash if
      called later (but so far it has not been called later).
      
      Straighten this out: make it available to all configs, with a sensible
      default in asm-generic/pgtable.h, removing its definitions from those
      arches (arc, arm, arm64, sparc, tile) which are served by the default,
      adding #define has_transparent_hugepage has_transparent_hugepage to
      those (mips, powerpc, s390, x86) which need to override the default at
      runtime, and removing the __init from mips (but maybe that kind of code
      should be avoided after init: set a static variable the first time it's
      called).
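
      The asm-generic default then looks roughly like this, overridable
      per-arch by defining the macro name itself:

        /* asm-generic/pgtable.h (sketch) */
        #ifndef has_transparent_hugepage
        #ifdef CONFIG_TRANSPARENT_HUGEPAGE
        #define has_transparent_hugepage() 1
        #else
        #define has_transparent_hugepage() 0
        #endif
        #endif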
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andres Lagar-Cavilla <andreslc@google.com>
      Cc: Yang Shi <yang.shi@linaro.org>
      Cc: Ning Qu <quning@gmail.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Vineet Gupta <vgupta@synopsys.com>		[arch/arc]
      Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>	[arch/s390]
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • huge pagecache: extend mremap pmd rmap lockout to files · 1d069b7d
      Hugh Dickins authored
      Whatever huge pagecache implementation we go with, file rmap locking
      must be added to anon rmap locking, when mremap's move_page_tables()
      finds a pmd_trans_huge pmd entry: a simple change, let's do it now.
      
      Factor out take_rmap_locks() and drop_rmap_locks() to handle the
      locking for both move_ptes() and move_page_tables(), and delete the
      VM_BUG_ON_VMA which rejected vm_file and required anon_vma.
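
      The factored helpers read roughly as follows (a sketch; the file rmap
      lock is taken alongside the anon one, in matching order):

        static void take_rmap_locks(struct vm_area_struct *vma)
        {
                if (vma->vm_file)
                        i_mmap_lock_write(vma->vm_file->f_mapping);
                if (vma->anon_vma)
                        anon_vma_lock_write(vma->anon_vma);
        }

        static void drop_rmap_locks(struct vm_area_struct *vma)
        {
                if (vma->anon_vma)
                        anon_vma_unlock_write(vma->anon_vma);
                if (vma->vm_file)
                        i_mmap_unlock_write(vma->vm_file->f_mapping);
        }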
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andres Lagar-Cavilla <andreslc@google.com>
      Cc: Yang Shi <yang.shi@linaro.org>
      Cc: Ning Qu <quning@gmail.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • huge mm: move_huge_pmd does not need new_vma · bf8616d5
      Hugh Dickins authored
      Remove move_huge_pmd()'s redundant new_vma arg: all it was used for was
      a VM_NOHUGEPAGE check on new_vma flags, but the new_vma is cloned from
      the old vma, so a trans_huge_pmd in the new_vma will be as acceptable as
      it was in the old vma, alignment and size permitting.
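
      The resulting prototype (illustrative) simply drops the argument:

        bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
                           unsigned long new_addr, unsigned long old_end,
                           pmd_t *old_pmd, pmd_t *new_pmd);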
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andres Lagar-Cavilla <andreslc@google.com>
      Cc: Yang Shi <yang.shi@linaro.org>
      Cc: Ning Qu <quning@gmail.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: /proc/sys/vm/stat_refresh to force vmstat update · 52b6f46b
      Hugh Dickins authored
      Provide /proc/sys/vm/stat_refresh to force an immediate update of
      per-cpu into global vmstats: useful to avoid a sleep(2) or whatever
      before checking counts when testing.  Originally added to work around a
      bug which left counts stranded indefinitely on a cpu going idle (an
      inaccuracy magnified when small below-batch numbers represent "huge"
      amounts of memory), but I believe that bug is now fixed: nonetheless,
      this is still a useful knob.
      
      Its schedule_on_each_cpu() is probably too expensive just to fold into
      reading /proc/meminfo itself: give this mode 0600 to prevent abuse.
      Allow a write or a read to do the same: nothing to read, but "grep -h
      Shmem /proc/sys/vm/stat_refresh /proc/meminfo" is convenient.  Oh, and
      since global_page_state() itself is careful to disguise any underflow as
      0, hack in an "Invalid argument" and pr_warn() if a counter is negative
      after the refresh - this helped to fix a misaccounting of
      NR_ISOLATED_FILE in my migration code.
      
      But on recent kernels, I find that NR_ALLOC_BATCH and NR_PAGES_SCANNED
      often go negative some of the time.  I have not yet worked out why, but
      have no evidence that it's actually harmful.  Punt for the moment by
      just ignoring the anomaly on those.
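
      A hedged sketch of the handler: flush the per-cpu deltas
      synchronously, then sanity-check the globals (names approximate the
      4.6-era vmstat internals):

        int vmstat_refresh(struct ctl_table *table, int write,
                           void __user *buffer, size_t *lenp, loff_t *ppos)
        {
                long val;
                int err;
                int i;

                err = schedule_on_each_cpu(refresh_vm_stats);
                if (err)
                        return err;

                for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
                        val = atomic_long_read(&vm_stat[i]);
                        if (val < 0) {
                                /* an underflow otherwise disguised as 0 */
                                pr_warn("%s: %s %ld\n", __func__,
                                        vmstat_text[i], val);
                                err = -EINVAL;
                        }
                }
                if (err)
                        return err;
                if (write)
                        *ppos += *lenp;
                else
                        *lenp = 0;
                return 0;
        }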
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andres Lagar-Cavilla <andreslc@google.com>
      Cc: Yang Shi <yang.shi@linaro.org>
      Cc: Ning Qu <quning@gmail.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • tmpfs: mem_cgroup charge fault to vm_mm not current mm · 9e18eb29
      Andres Lagar-Cavilla authored
      Although shmem_fault() has been careful to count a major fault to vm_mm,
      shmem_getpage_gfp() has been careless in charging a remote access fault
      to current->mm owner's memcg instead of to vma->vm_mm owner's memcg:
      that is inconsistent with all the mem_cgroup charging on remote access
      faults in mm/memory.c.
      
      Fix it by passing fault_mm along with fault_type to
      shmem_getpage_gfp(); but in that case, now knowing the right mm, it's
      better for it to handle the PGMAJFAULT updates itself.
      
      And let's keep this clutter out of most callers' way: change the common
      shmem_getpage() wrapper to hide fault_mm and fault_type as well as gfp.
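
      Sketched as a signature change plus the one line of charging logic
      that matters (the body is elided; charge_mm is an illustrative name):

        static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
                                     struct page **pagep, enum sgp_type sgp,
                                     gfp_t gfp, struct mm_struct *fault_mm,
                                     int *fault_type)
        {
                /* Charge the faulting mm when called from shmem_fault() */
                struct mm_struct *charge_mm = fault_mm ? : current->mm;

                /* ... allocation, the memcg charge against charge_mm,
                 * and the PGMAJFAULT accounting follow ...
                 */
                return 0;
        }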
      Signed-off-by: Andres Lagar-Cavilla <andreslc@google.com>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andres Lagar-Cavilla <andreslc@google.com>
      Cc: Yang Shi <yang.shi@linaro.org>
      Cc: Ning Qu <quning@gmail.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • tmpfs: preliminary minor tidyups · 75edd345
      Hugh Dickins authored
      Make a few cleanups in mm/shmem.c, before going on to complicate it.
      
      shmem_alloc_page() will become more complicated: we can't afford to
      have that complication duplicated between a CONFIG_NUMA version and a
      !CONFIG_NUMA version, so rearrange the #ifdef'ery there to yield a
      single shmem_swapin() and a single shmem_alloc_page().
      
      Yes, it's a shame to inflict the horrid pseudo-vma on non-NUMA
      configurations, but eliminating it is a larger cleanup: I have an
      alloc_pages_mpol() patchset not yet ready - mpol handling is subtle and
      bug-prone, and changed yet again since my last version.
      
      Move __SetPageLocked, __SetPageSwapBacked from shmem_getpage_gfp() to
      shmem_alloc_page(): that SwapBacked flag will be useful in future, to
      help to distinguish different cases appropriately.
      
      And the SGP_DIRTY variant of SGP_CACHE is hard to understand and of
      little use (IIRC it dates back to when shmem_getpage() returned the page
      unlocked): kill it and do the necessary in shmem_file_read_iter().
      
      But an arm64 build then complained that info may be uninitialized (where
      shmem_getpage_gfp() deletes a freshly alloced page beyond eof), and
      advancing to an "sgp <= SGP_CACHE" test jogged it back to reality.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andres Lagar-Cavilla <andreslc@google.com>
      Cc: Yang Shi <yang.shi@linaro.org>
      Cc: Ning Qu <quning@gmail.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use __SetPageSwapBacked and dont ClearPageSwapBacked · fa9949da
      Hugh Dickins authored
      v3.16 commit 07a42788 ("mm: shmem: avoid atomic operation during
      shmem_getpage_gfp") rightly replaced one instance of SetPageSwapBacked
      by __SetPageSwapBacked, pointing out that the newly allocated page is
      not yet visible to other users (except speculative get_page_unless_zero-
      ers, who may not update page flags before their further checks).
      
      That was part of a series in which Mel was focused on tmpfs profiles:
      but almost all SetPageSwapBacked uses can be so optimized, with the same
      justification.
      
      Remove ClearPageSwapBacked from __read_swap_cache_async() error path:
      it's not an error to free a page with PG_swapbacked set.
      
      Follow a convention of __SetPageLocked, __SetPageSwapBacked instead of
      doing it differently in different places; but that's for tidiness - if
      the ordering actually mattered, we should not be using the __variants.
      
      There's probably scope for further __SetPageFlags in other places, but
      SwapBacked is the one I'm interested in at the moment.
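
      The pattern, sketched on a freshly allocated page that is not yet
      visible to any other user:

        struct page *page = alloc_page(gfp);

        if (page) {
                /* Not yet visible: the non-atomic __variants suffice */
                __SetPageLocked(page);
                __SetPageSwapBacked(page);
        }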
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andres Lagar-Cavilla <andreslc@google.com>
      Cc: Yang Shi <yang.shi@linaro.org>
      Cc: Ning Qu <quning@gmail.com>
      Reviewed-by: Mel Gorman <mgorman@techsingularity.net>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: update_lru_size do the __mod_zone_page_state · 9d5e6a9f
      Hugh Dickins authored
      Konstantin Khlebnikov pointed out (nearly four years ago, when lumpy
      reclaim was removed) that lru_size can be updated by -nr_taken once per
      call to isolate_lru_pages(), instead of page by page.
      
      Update it inside isolate_lru_pages(), or at its two callsites? I chose
      to update it at the callsites, rearranging and grouping the updates by
      nr_taken and nr_scanned together in both.
      
      With one exception, mem_cgroup_update_lru_size(,lru,) is then used where
      __mod_zone_page_state(,NR_LRU_BASE+lru,) is used; and we shall be adding
      some more calls in a future commit.  Make the code a little smaller and
      simpler by incorporating stat update in lru_size update.
      
      The exception was move_active_pages_to_lru(), which aggregated the
      pgmoved stat update separately from the individual lru_size updates; but
      I still think this a simplification worth making.
      
      However, the __mod_zone_page_state is not peculiar to mem_cgroups: so
      better to use the name update_lru_size, which calls
      mem_cgroup_update_lru_size when CONFIG_MEMCG is enabled.
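
      The combined helper then reads roughly like this (a sketch, per the
      description above):

        static __always_inline void update_lru_size(struct lruvec *lruvec,
                                        enum lru_list lru, int nr_pages)
        {
                __mod_zone_page_state(lruvec_zone(lruvec),
                                      NR_LRU_BASE + lru, nr_pages);
        #ifdef CONFIG_MEMCG
                mem_cgroup_update_lru_size(lruvec, lru, nr_pages);
        #endif
        }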
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andres Lagar-Cavilla <andreslc@google.com>
      Cc: Yang Shi <yang.shi@linaro.org>
      Cc: Ning Qu <quning@gmail.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: update_lru_size warn and reset bad lru_size · ca707239
      Hugh Dickins authored
      Though debug kernels have a VM_BUG_ON to help protect from misaccounting
      lru_size, non-debug kernels are liable to wrap it around: and then the
      vast unsigned long size draws page reclaim into a loop of repeatedly
      doing nothing on an empty list, without even a cond_resched().
      
      That soft lockup looks confusingly like an over-busy reclaim scenario,
      with lots of contention on the lru_lock in shrink_inactive_list(): yet
      has a totally different origin.
      
      Help differentiate with a custom warning in
      mem_cgroup_update_lru_size(), even in non-debug kernels; and reset the
      size to avoid the lockup.  But the particular bug which suggested this
      change was mine alone, and since fixed.
      
      Make it a WARN_ONCE: the first occurrence is the most informative, a
      flurry may follow, yet even when rate-limited little more is learnt.
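
      A sketch of the defensive update, assuming the per-zone lru_size
      array of that era; the reset is what breaks the reclaim loop:

        void mem_cgroup_update_lru_size(struct lruvec *lruvec,
                                        enum lru_list lru, int nr_pages)
        {
                struct mem_cgroup_per_zone *mz;
                unsigned long *lru_size;
                long size;

                mz = container_of(lruvec, struct mem_cgroup_per_zone, lruvec);
                lru_size = mz->lru_size + lru;
                size = (long)*lru_size + nr_pages;

                if (WARN_ONCE(size < 0, "%s(%d, %d): lru_size %ld\n",
                              __func__, lru, nr_pages, size))
                        size = 0;       /* reset to avoid the lockup */

                *lru_size = size;
        }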
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andres Lagar-Cavilla <andreslc@google.com>
      Cc: Yang Shi <yang.shi@linaro.org>
      Cc: Ning Qu <quning@gmail.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/vmstat: make node_page_state() handles all zones by itself · e87d59f7
      Joonsoo Kim authored
      node_page_state() manually adds statistics for each zone and returns
      the total value for all zones.  Whenever we add a new zone, we need to
      revisit this function, and it's really troublesome.  Make it handle
      all zones by itself.
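
      A sketch of the loop-based form:

        unsigned long node_page_state(int node, enum zone_stat_item item)
        {
                struct zone *zones = NODE_DATA(node)->node_zones;
                unsigned long count = 0;
                int i;

                for (i = 0; i < MAX_NR_ZONES; i++)
                        count += zone_page_state(zones + i, item);

                return count;
        }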
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/highmem: make nr_free_highpages() handles all highmem zones by itself · 33499bfe
      Joonsoo Kim authored
      nr_free_highpages() manually adds statistics for each highmem zone and
      returns a total value for them.  Whenever we add a new highmem zone,
      we need to revisit this function, and it's really troublesome.  Make
      it handle all highmem zones by itself.
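
      Sketched with the generic zone iterator and the is_highmem()
      predicate:

        unsigned int nr_free_highpages(void)
        {
                struct zone *zone;
                unsigned int pages = 0;

                for_each_zone(zone) {
                        if (!is_highmem(zone))
                                continue;
                        pages += zone_page_state(zone, NR_FREE_PAGES);
                }

                return pages;
        }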
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/page_alloc: correct highmem memory statistics · fc2bd799
      Joonsoo Kim authored
      ZONE_MOVABLE could be treated as highmem, so we need to consider it
      for accurate statistics.  And, in following patches, ZONE_CMA will be
      introduced and it can be treated as highmem, too.  So, instead of
      manually adding the stats of ZONE_MOVABLE, loop over all zones, check
      whether each zone is highmem, and add the stats of every zone that can
      be treated as highmem.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/writeback: correct dirty page calculation for highmem · 09b4ab3c
      Joonsoo Kim authored
      ZONE_MOVABLE could be treated as highmem, so we need to consider it
      for an accurate calculation of dirty pages.  And, in following
      patches, ZONE_CMA will be introduced and it can be treated as highmem,
      too.  So, instead of manually adding the stats of ZONE_MOVABLE, loop
      over all zones, check whether each zone is highmem, and add the stats
      of every zone that can be treated as highmem.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • power: add zone range overlapping check · ba6b0979
      Joonsoo Kim authored
      There are systems whose nodes' pfn ranges overlap as follows:
      
        -----pfn-------->
        N0 N1 N2 N0 N1 N2
      
      Therefore, we need to take this overlap into account when iterating
      over a pfn range.
      
      mark_free_pages() iterates over the requested zone's pfn range and
      first clears the bitmap for the whole range.  It then marks the zone's
      free pages in the bitmap.  If there is an overlapping zone, the
      initial clearing could wipe out bits already marked for another zone,
      and later references to the bitmap would then be wrong.  To prevent
      this, this patch adds a zone check to mark_free_pages().
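
      The added check, sketched inside the pfn walk (the surrounding swsusp
      bookkeeping is abbreviated):

        for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++) {
                if (!pfn_valid(pfn))
                        continue;

                page = pfn_to_page(pfn);

                /* Skip pfns that belong to an overlapping zone */
                if (page_zone(page) != zone)
                        continue;

                if (!swsusp_page_is_forbidden(page))
                        swsusp_unset_page_free(page);
        }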
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      ba6b0979
    • Joonsoo Kim's avatar
      mm/page_owner: add zone range overlapping check · 9d43f5ae
      Joonsoo Kim authored
      There are systems whose nodes' pfn ranges overlap, as follows:

        -----pfn-------->
        N0 N1 N2 N0 N1 N2

      Therefore, we need to take this overlap into account when iterating
      over a pfn range.

      There is one place in page_owner.c that iterates over a pfn range
      without considering this overlap.  Add the check there.

      Without this patch, such a system could overcount the number of early
      allocated pages before page_owner is activated.
      Signed-off-by: default avatarJoonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      9d43f5ae
    • Joonsoo Kim's avatar
      mm/vmstat: add zone range overlapping check · a91c43c7
      Joonsoo Kim authored
      There are systems whose nodes' pfn ranges overlap, as follows:

        -----pfn-------->
        N0 N1 N2 N0 N1 N2

      Therefore, we need to take this overlap into account when iterating
      over a pfn range.

      There are two places in vmstat.c that iterate over pfn ranges without
      considering this overlap.  Add the check there.

      Without this patch, such a system could overcount the number of
      pageblocks in a zone.
      Signed-off-by: default avatarJoonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      a91c43c7
    • Joonsoo Kim's avatar
      mm/memory_hotplug: add comment to some functions related to memory hotplug · b9eb6319
      Joonsoo Kim authored
      __offline_isolated_pages() and test_pages_isolated() are used by
      memory hotplug.  These functions require that the range lies within a
      single zone, but they contain no code to enforce this, because memory
      hotplug checks it before calling them.  To avoid confusing future
      users of these functions, this patch adds comments to them.
      Signed-off-by: default avatarJoonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      b9eb6319
    • Joonsoo Kim's avatar
      mm/hugetlb: add same zone check in pfn_range_valid_gigantic() · f44b2dda
      Joonsoo Kim authored
      This patchset deals with some problematic sites that iterate over pfn
      ranges.

      There are systems whose nodes' pfn ranges overlap, as follows:

        -----pfn-------->
        N0 N1 N2 N0 N1 N2

      Therefore, we need to take this overlap into account when iterating
      over a pfn range.

      I audited many iteration sites that use pfn_valid(),
      pfn_valid_within(), zone_start_pfn, etc., and the others look safe to
      me.  This is a preparation step for a new CMA implementation,
      ZONE_CMA (https://lkml.org/lkml/2015/2/12/95), because it would
      easily overlap with other zones.  But the zone overlap check is also
      needed for the general case, so I am sending it separately.

      This patch (of 5):

      alloc_gigantic_page() uses alloc_contig_range(), which requires that
      the requested range lies within a single zone.  To satisfy this
      requirement, add a same-zone check to pfn_range_valid_gigantic().
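
      A minimal user-space sketch of the validation (pfn_valid(),
      page_zone() and the surrounding types are simplified stand-ins for
      illustration, not the kernel's real code):

        #include <stdbool.h>
        #include <stdio.h>

        struct zone { int id; };
        struct page { struct zone *zone; bool valid; };

        static struct page pages[16];

        static bool pfn_valid(unsigned long pfn) { return pages[pfn].valid; }
        static struct page *pfn_to_page(unsigned long pfn) { return &pages[pfn]; }
        static struct zone *page_zone(struct page *page) { return page->zone; }

        /* alloc_contig_range() needs the whole range in one zone, so
         * reject a candidate gigantic-page range as soon as any pfn falls
         * outside the zone we are allocating from. */
        static bool pfn_range_valid_gigantic(struct zone *z,
                                             unsigned long start_pfn,
                                             unsigned long nr_pages)
        {
                for (unsigned long pfn = start_pfn;
                     pfn < start_pfn + nr_pages; pfn++) {
                        if (!pfn_valid(pfn))
                                return false;
                        if (page_zone(pfn_to_page(pfn)) != z)
                                return false;   /* crosses into another zone */
                }
                return true;
        }

        int main(void)
        {
                struct zone z0 = { 0 }, z1 = { 1 };

                for (unsigned long pfn = 0; pfn < 16; pfn++) {
                        pages[pfn].valid = true;
                        pages[pfn].zone = pfn < 8 ? &z0 : &z1;
                }
                printf("%d %d\n",
                       pfn_range_valid_gigantic(&z0, 0, 8),    /* 1 */
                       pfn_range_valid_gigantic(&z0, 4, 8));   /* 0 */
                return 0;
        }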
      Signed-off-by: default avatarJoonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      f44b2dda
    • Andrew Morton's avatar
      mm: uninline page_mapped() · 1aa8aea5
      Andrew Morton authored
      It's huge.  Uninlining it saves 206 bytes per callsite.  Shaves 4924
      bytes from the x86_64 allmodconfig vmlinux.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Cc: Steve Capper <steve.capper@arm.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      1aa8aea5
    • Chanho Min's avatar
      mm/highmem: simplify is_highmem() · 29f9cb53
      Chanho Min authored
      is_highmem() can be simplified by using is_highmem_idx().  This patch
      removes redundant code and will make the function easier to maintain
      if the zone policy is changed or a new zone is added.
      
      (akpm: saves me 25 bytes of text per is_highmem() callsite)
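
      A minimal sketch of the simplification (the types and zone list here
      are simplified stand-ins; the kernel's is_highmem_idx() also depends
      on CONFIG_HIGHMEM):

        #include <stdbool.h>
        #include <stdio.h>

        enum zone_type { ZONE_NORMAL, ZONE_HIGHMEM, ZONE_MOVABLE,
                         MAX_NR_ZONES };

        struct zone { enum zone_type idx; };

        static enum zone_type zone_idx(struct zone *z) { return z->idx; }

        /* The single place that knows which zone indexes are highmem. */
        static bool is_highmem_idx(enum zone_type idx)
        {
                return idx == ZONE_HIGHMEM || idx == ZONE_MOVABLE;
        }

        /* The simplification: delegate instead of duplicating the list. */
        static bool is_highmem(struct zone *z)
        {
                return is_highmem_idx(zone_idx(z));
        }

        int main(void)
        {
                struct zone z = { ZONE_MOVABLE };

                printf("%d\n", is_highmem(&z));
                return 0;
        }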
      Signed-off-by: default avatarChanho Min <chanho.min@lge.com>
      Reviewed-by: default avatarDan Williams <dan.j.williams@intel.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      29f9cb53
    • Vlastimil Babka's avatar
      mm, compaction: skip blocks where isolation fails in async direct compaction · fdd048e1
      Vlastimil Babka authored
      The goal of direct compaction is to quickly make a high-order page
      available for the pending allocation.  Within an aligned block of pages
      of desired order, a single allocated page that cannot be isolated for
      migration means that the block cannot fully merge to a buddy page that
      would satisfy the allocation request.  Therefore we can reduce the
      allocation stall by skipping the rest of the block immediately on
      isolation failure.  For async compaction, this also means a higher
      chance of succeeding until it detects contention.
      
      We shouldn't, however, completely sacrifice the second objective of
      compaction, which is to reduce overall long-term memory fragmentation.
      As a compromise, perform the eager skipping only in direct async
      compaction, while sync compaction (including kcompactd) remains
      thorough.
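
      A toy model of the migration scanner's new behaviour (the names and
      the pinned-page predicate are illustrative, not the kernel's code):
      within each order-aligned block, one page that cannot be isolated
      makes the whole block useless for the pending allocation, so async
      direct compaction jumps straight to the next block:

        #include <stdbool.h>
        #include <stdio.h>

        #define BLOCK_ORDER     2
        #define BLOCK_SIZE      (1UL << BLOCK_ORDER)

        static bool isolate_page(unsigned long pfn)
        {
                return pfn != 5;        /* pretend pfn 5 is pinned */
        }

        static void scan(unsigned long start, unsigned long end, bool async)
        {
                for (unsigned long pfn = start; pfn < end; pfn++) {
                        if (isolate_page(pfn)) {
                                printf("isolated %lu\n", pfn);
                                continue;
                        }
                        if (async) {
                                /* The rest of this aligned block can no
                                 * longer merge to BLOCK_ORDER: skip it.
                                 * The loop increment moves past it. */
                                pfn |= BLOCK_SIZE - 1;
                                printf("skipping rest of block\n");
                        }
                        /* sync compaction keeps scanning page by page */
                }
        }

        int main(void)
        {
                scan(0, 16, true);
                return 0;
        }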
      
      Testing was done using stress-highalloc from mmtests, configured for
      order-4 GFP_KERNEL allocations:
      
                                       4.6-rc1               4.6-rc1
                                        before                 after
        Success 1 Min         24.00 (  0.00%)       27.00 (-12.50%)
        Success 1 Mean        30.20 (  0.00%)       31.60 ( -4.64%)
        Success 1 Max         37.00 (  0.00%)       35.00 (  5.41%)
        Success 2 Min         42.00 (  0.00%)       32.00 ( 23.81%)
        Success 2 Mean        44.00 (  0.00%)       44.80 ( -1.82%)
        Success 2 Max         48.00 (  0.00%)       52.00 ( -8.33%)
        Success 3 Min         91.00 (  0.00%)       92.00 ( -1.10%)
        Success 3 Mean        92.20 (  0.00%)       92.80 ( -0.65%)
        Success 3 Max         94.00 (  0.00%)       93.00 (  1.06%)
      
      We can see that success rates are unaffected by the skipping.
      
                      4.6-rc1     4.6-rc1
                       before       after
        User         2587.42     2566.53
        System        482.89      471.20
        Elapsed      1395.68     1382.00
      
      Times are not a very useful metric for this benchmark, as the main
      portion is the interfering kernel builds, but the results do hint at
      reduced system times.
      
                                            4.6-rc1     4.6-rc1
                                             before       after
        Direct pages scanned                163614      159608
        Kswapd pages scanned               2070139     2078790
        Kswapd pages reclaimed             2061707     2069757
        Direct pages reclaimed              163354      159505
      
      Reduced direct reclaim was unintended, but could be explained by more
      successful first attempt at (async) direct compaction, which is
      attempted before the first reclaim attempt in __alloc_pages_slowpath().
      
        Compaction stalls                    33052       39853
        Compaction success                   12121       19773
        Compaction failures                  20931       20079
      
      Compaction is indeed more successful, and thus less likely to get
      deferred, so there are also more direct compaction stalls.
      
        Page migrate success               3781876     3326819
        Page migrate failure                 45817       41774
        Compaction pages isolated          7868232     6941457
        Compaction migrate scanned       168160492   127269354
        Compaction migrate prescanned            0           0
        Compaction free scanned         2522142582  2326342620
        Compaction free direct alloc             0           0
        Compaction free dir. all. miss           0           0
        Compaction cost                       5252        4476
      
      The patch reduces migration scanned pages by 25% thanks to the eager
      skipping.
      
      [hughd@google.com: prevent nr_isolated_* from going negative]
      Signed-off-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Signed-off-by: default avatarHugh Dickins <hughd@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      fdd048e1
    • Vlastimil Babka's avatar
      mm, compaction: reduce spurious pcplist drains · a34753d2
      Vlastimil Babka authored
      Compaction drains the local pcplists each time migration scanner moves
      away from a cc->order aligned block where it isolated pages for
      migration, so that the pages freed by migrations can merge into higher
      orders.
      
      The detection is currently coarser than it could be.  The
      cc->last_migrated_pfn variable should track the lowest pfn that was
      isolated for migration.  But it is set to the pfn where
      isolate_migratepages_block() starts scanning, which is typically the
      first pfn of the pageblock.  There, the scanner might fail to isolate
      several order-aligned blocks, and then isolate COMPACT_CLUSTER_MAX in
      another block.  This would cause the pcplists drain to be performed,
      although the scanner has not yet finished the block it isolated from.
      
      This patch thus makes cc->last_migrated_pfn handling more accurate by
      setting it to the pfn of an actually isolated page in
      isolate_migratepages_block().  Although the practical effects of this
      patch are likely small, it arguably makes the intent of the code more
      obvious.
      Also the next patch will make async direct compaction skip blocks more
      aggressively, and draining pcplists due to skipped blocks is wasteful.
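
      A hedged sketch of the bookkeeping change (the field and function
      names model the changelog, not the kernel's exact code): record the
      pfn where isolation actually happened, not where the scan started:

        #include <stdbool.h>
        #include <stdio.h>

        struct compact_control {
                unsigned long last_migrated_pfn; /* lowest pfn isolated */
        };

        static bool isolate_for_migration(unsigned long pfn)
        {
                return pfn >= 6;        /* pretend early pfns fail */
        }

        static void isolate_migratepages_block(struct compact_control *cc,
                                               unsigned long start,
                                               unsigned long end)
        {
                for (unsigned long pfn = start; pfn < end; pfn++) {
                        if (!isolate_for_migration(pfn))
                                continue;
                        /* Record the first real isolation, so pcplist
                         * drains only fire once the scanner truly leaves
                         * the block it isolated from. */
                        if (!cc->last_migrated_pfn)
                                cc->last_migrated_pfn = pfn;
                }
        }

        int main(void)
        {
                struct compact_control cc = { 0 };

                isolate_migratepages_block(&cc, 0, 8);
                /* 6, not 0 (the scan start) */
                printf("last_migrated_pfn = %lu\n", cc.last_migrated_pfn);
                return 0;
        }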
      Signed-off-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      a34753d2
    • Vlastimil Babka's avatar
      mm, compaction: wrap calculating first and last pfn of pageblock · 06b6640a
      Vlastimil Babka authored
      Compaction code has accumulated numerous instances of manual
      calculations of the first (inclusive) and last (exclusive) pfn of a
      pageblock (or a smaller block of given order), given a pfn within the
      pageblock.
      
      Wrap these calculations by introducing pageblock_start_pfn(pfn) and
      pageblock_end_pfn(pfn) macros.
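
      The macros might look roughly like this (a compilable sketch; the
      kernel builds its versions on its own round_down()/ALIGN() helpers
      and pageblock_order):

        #include <stdio.h>

        /* Stand-ins for the kernel's helpers (power-of-two y/a only). */
        #define round_down(x, y)        ((x) & ~((y) - 1))
        #define ALIGN(x, a)             (((x) + (a) - 1) & ~((a) - 1))

        #define pageblock_order         9 /* e.g. 2MB blocks, 4K pages */
        #define pageblock_nr_pages      (1UL << pageblock_order)

        /* First pfn of the block containing pfn (inclusive)... */
        #define pageblock_start_pfn(pfn) round_down(pfn, pageblock_nr_pages)
        /* ...and the first pfn past that block (exclusive). */
        #define pageblock_end_pfn(pfn)   ALIGN((pfn) + 1, pageblock_nr_pages)

        int main(void)
        {
                unsigned long pfn = 1234;

                printf("[%lu, %lu)\n", pageblock_start_pfn(pfn),
                       pageblock_end_pfn(pfn));  /* [1024, 1536) */
                return 0;
        }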
      
      [vbabka@suse.cz: fix crash in get_pfnblock_flags_mask() from isolate_freepages():]
      Signed-off-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      06b6640a
    • Konstantin Khlebnikov's avatar
      mm/rmap: replace BUG_ON(anon_vma->degree) with VM_WARN_ON · e4c5800a
      Konstantin Khlebnikov authored
      This check effectively catches anon vma hierarchy inconsistencies and
      some vma corruptions.  It was effective at catching corner cases in
      the anon vma reuse logic.  This code now seems stable, so the check
      can be hidden under CONFIG_DEBUG_VM and replaced with a WARN, because
      the condition is not fatal.
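
      A minimal user-space model of the pattern (the kernel's VM_WARN_ON is
      gated on CONFIG_DEBUG_VM; the DEBUG_VM macro and the freeing context
      here are illustrative):

        #include <stdio.h>

        #ifdef DEBUG_VM
        #define VM_WARN_ON(cond) do { \
                if (cond) \
                        fprintf(stderr, "warning: %s\n", #cond); \
        } while (0)
        #else
        #define VM_WARN_ON(cond) do { (void)(cond); } while (0)
        #endif

        struct anon_vma { int degree; };

        static void check_before_free(struct anon_vma *av)
        {
                /* Was BUG_ON(av->degree), which kills the machine on an
                 * inconsistency.  Now warn (in debug builds only) and
                 * keep running, since the condition is not fatal. */
                VM_WARN_ON(av->degree);
        }

        int main(void)
        {
                struct anon_vma av = { 1 };

                check_before_free(&av);
                puts("still running");
                return 0;
        }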
      Signed-off-by: default avatarKonstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Suggested-by: default avatarVasily Averin <vvs@virtuozzo.com>
      Acked-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      e4c5800a
    • Andrew Morton's avatar
      mm/mempolicy.c:offset_il_node() document and clarify · fee83b3a
      Andrew Morton authored
      This code was pretty obscure: it relied upon obscure side effects of
      next_node(-1, ...) and upon NUMA_NO_NODE being equal to -1.
      
      Clean that all up and document the function's intent.
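
      A user-space model of the clarified logic (the bitmask nodemask and
      helpers below stand in for the kernel's nodemask_t API): start from
      the first allowed node and take an explicit number of steps, with no
      reliance on next_node(-1, ...) or on NUMA_NO_NODE being -1:

        #include <stdio.h>

        typedef unsigned long nodemask_t;

        static int first_node(nodemask_t m)
        {
                return __builtin_ctzl(m);
        }

        static int next_node(int n, nodemask_t m)
        {
                nodemask_t rest = m & ~((2UL << n) - 1); /* bits above n */

                return rest ? __builtin_ctzl(rest) : -1;
        }

        static int nodes_weight(nodemask_t m)
        {
                return __builtin_popcountl(m);
        }

        /* Pick the node for interleave offset n from the allowed mask. */
        static int offset_il_node(nodemask_t allowed, unsigned long n)
        {
                int nnodes = nodes_weight(allowed);
                unsigned int target, i;
                int nid;

                if (!nnodes)
                        return 0;       /* model: fall back to node 0 */
                target = n % nnodes;
                nid = first_node(allowed);
                for (i = 0; i < target; i++)
                        nid = next_node(nid, allowed);
                return nid;
        }

        int main(void)
        {
                nodemask_t allowed = 0xb;       /* nodes 0, 1, 3 */

                for (unsigned long n = 0; n < 5; n++)
                        printf("n=%lu -> node %d\n", n,
                               offset_il_node(allowed, n));
                return 0;
        }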
      Acked-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      fee83b3a
    • Andrew Morton's avatar
      mm/hugetlb.c: use first_memory_node · 54f18d35
      Andrew Morton authored
      Instead of open-coding it.
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      54f18d35
    • Li Zhang's avatar
      mm/page_alloc: Remove useless parameter of __free_pages_boot_core · 949698a3
      Li Zhang authored
      __free_pages_boot_core() has a pfn parameter that is not used at all.
      Remove it.
      Signed-off-by: default avatarLi Zhang <zhlcindy@linux.vnet.ibm.com>
      Reviewed-by: default avatarPan Xinhui <xinhui.pan@linux.vnet.ibm.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      949698a3
    • Michal Hocko's avatar
      mm/memcontrol.c:mem_cgroup_select_victim_node(): clarify comment · fda3d69b
      Michal Hocko authored
      > The comment seems to have not much to do with the code?
      
      I guess the comment tries to say that this code path is triggered
      when we charge the page, which happens _before_ it is added to the
      LRU list, so last_scanned_node might contain stale data.
      
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      fda3d69b
    • Yaowei Bai's avatar
      mm/mempolicy.c: vma_migratable() can return bool · 4ee815be
      Yaowei Bai authored
      Make vma_migratable() return bool, since this function only ever
      returns one or zero.
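
      The pattern, as a minimal sketch (the flag and the body here are
      illustrative; the real vma_migratable() checks several VM_* flags):

        #include <stdbool.h>
        #include <stdio.h>

        struct vm_area_struct { unsigned long vm_flags; };

        #define VM_SPECIAL 0x1UL        /* illustrative flag */

        /* Before: returned int but only ever produced 0 or 1.  After:
         * the bool return type states the contract in the signature. */
        static bool vma_migratable(struct vm_area_struct *vma)
        {
                return !(vma->vm_flags & VM_SPECIAL);
        }

        int main(void)
        {
                struct vm_area_struct vma = { 0 };

                printf("%d\n", vma_migratable(&vma));
                return 0;
        }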
      Signed-off-by: default avatarYaowei Bai <baiyaowei@cmss.chinamobile.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      4ee815be
    • Yaowei Bai's avatar
      mm/vmalloc.c: is_vmalloc_addr() can return bool · bb00a789
      Yaowei Bai authored
      Make is_vmalloc_addr() return bool to improve readability, since this
      function only ever returns one or zero.
      Signed-off-by: default avatarYaowei Bai <baiyaowei@cmss.chinamobile.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      bb00a789
    • Yaowei Bai's avatar
      mm/memory_hotplug: is_mem_section_removable() can return bool · c98940f6
      Yaowei Bai authored
      Make is_mem_section_removable() return bool to improve readability,
      since this function only ever returns one or zero.
      Signed-off-by: default avatarYaowei Bai <baiyaowei@cmss.chinamobile.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      c98940f6